1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit
Blog powered by Typepad

What's 20 or so million SSNs between friends?

This is ridiculous. Yahoo:

Thieves took sensitive personal information on 26.5 million U.S. veterans, including Social Security numbers and birth dates, after a Veterans Affairs employee improperly brought the material home, the government said Monday.
...
Nicholson said there was no evidence the thieves had used the data for identity theft, and an investigation was continuing.

Sure, they are probably just using it as a test bed for arbitrarily large data sets for a charitable open source project.

Ramona Joyce, spokeswoman for the American Legion, agreed that the theft was a concern. "In the information age, we're constantly told to protect our information. We would ask no less of the VA," she said.

Nicholson declined to comment on the specifics of the incident, which involved a midlevel data analyst who had taken the information home to suburban Maryland on a laptop to work on a department project.
...
"I want to emphasize there was no medical records of any veteran and no financial information of any veteran that's been compromised," Nicholson said, although he added later that some information on the veterans' disabilities may have been taken.
...
Sen. John Kerry, D-Mass., who is a Vietnam veteran, said he would introduce legislation to require the VA to provide credit reports to the veterans affected by the theft.

"This is no way to treat those who have worn the uniform of our country," Kerry said. "Someone needs to be fired."

Sorry, but firing people is not going to fix this problem. Instead, maybe GWB could increase his popularity by adopting Pete Lindstrom's modest plan to Eliminate the SSN Facade. And while we are at it, why not write the Laws of Identity into the Constitution? Ok, maybe not on that last one, but how about we use the Laws in the systems we build?

May 22, 2006 in Identity Services, Security | Permalink | Comments (0)

150,000 person shared "secret" considered broken

I like Pete's Modest Proposal to publish SSNs. Everyone gets up in arms about taping your password to your monitor, but do 150,000 people walk by your monitor?

April 11, 2006 in Identity Services, Security, Security Metrics | Permalink | Comments (0)

Times they are a changin

Well, it was not too long ago that all the security pundits were telling us how bad Web Services are for security, and some are still saying it, right? Of course, when someone tells you some technology is "bad" for security, the proper response is always: compared to what? So SOAP is somehow worse for security than DCOM or RMI-IIOP? Remind me again of all the great security tools that shipped with those protocols?

Funny, but when all the big vendors and standards bodies have to sit down and work interop, they sometimes get something pretty nice. Rickard Oberg on SAML, XACML (emphasis added):

There are many other crucial technologies today, which are important for integrating webapps into a portal. It is curious to read forums where people bash things like portlets and WSRP and XACML and such. "Why would we want to run portlets in two servers? My Struts app works just fine on one Tomcat instance". Amazing. When you have been exposed to real-life DMZ environments a couple of times you start to wonder how the heck we get anything done *without* stuff like WSRP and security integration specifications like XACML and SAML. The conclusion appears to be that developers that don't get it simply aren't exposed to the realities of actually running the apps. For development it might make no sense to have several servers, or integration technologies like SAML and WSRP, but for real deployments they are essential. As long as application developers are in the dark with regard to these basic realities systems integrators are going to keep using hack upon hack in order to make house-of-cards type integration of apps.

Precisely the point, not only are Web Services not turning out to be "bad" for security, but the standards that they are generating - SAML, XACML, WS-* - are giving developers better security tools than they have ever had before.

March 24, 2006 in Computer Security, Deperimeterization, Identity Services, Risk Management, SAML, Security, Security Architecture, SOA, Software Architecture, SOS, Web Services | Permalink | Comments (0)

Ident-ebonics

Identity projects, like any other integration project, breed confusion because stakeholders have disparate needs and disparate understandings of the definitions. This is new territory for security teams, whose projects are typically nowhere near as customer-facing as software development projects; with identity management, the security team is exposed directly to lots of stakeholders. On top of that, identity projects are laden with legacy terminology baggage, such as roles. Ask 5 IT people about roles and you will get (at least) 6 different answers: managers talk about functional roles, BAs talk about domain roles, developers talk about user principal roles, and directory architects...of course, know what roles really are...er, talk about LDAP roles. This makes the word harmful to building up understanding, and it is this way with many concepts in identity and access management. Vendors make the problem even worse. Start with an identity architecture that defines the entities and their relationships. Publish prescriptive, positive definitions; examples help to reduce confusion. Fight ident-ebonics misunderstanding with use cases that generate shared understanding of the space, structure, and behavior. You cannot integrate what you cannot agree on.
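To make the disambiguation concrete, here is a minimal sketch of the kind of entity model the paragraph above calls for. All of the names here (the role kinds, the GRANTS table, the invoice permissions) are hypothetical illustrations, not any standard:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    kind: str  # "functional" (managers), "domain" (BAs), "application" (developers), "directory" (LDAP)

@dataclass
class Subject:
    subject_id: str
    roles: set = field(default_factory=set)

@dataclass(frozen=True)
class Permission:
    action: str
    resource: str

# An explicit mapping from a *disambiguated* role to permissions removes the
# ambiguity of the bare word "role": each stakeholder's usage gets its own kind.
GRANTS = {
    Role("approver", "functional"): {Permission("approve", "invoice")},
    Role("invoice-admin", "application"): {Permission("read", "invoice"),
                                           Permission("write", "invoice")},
}

def permissions_for(subject: Subject) -> set:
    """Union of permissions over all of the subject's roles."""
    perms = set()
    for role in subject.roles:
        perms |= GRANTS.get(role, set())
    return perms

alice = Subject("alice", {Role("approver", "functional")})
print(permissions_for(alice))
```

The point of the `kind` field is exactly the point of the post: "role" alone is ambiguous, so the model forces every role to say which stakeholder vocabulary it belongs to.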

March 14, 2006 in Identity Services, Security, Security Architecture, Use Cases | Permalink | Comments (0)

Service Oriented Security Architecture

My paper on Service Oriented Security Architecture from the Nov. issue of ISB is now online. The paper describes an approach to dealing with security design and architecture issues in developing Web Services and SOA software.

The primary goals are to illustrate a set of key analytical areas and a way to synthesize their relationships. As Kruchten and others have observed, separation of concerns is a useful technique in software architecture. It is useful in security architecture as well, and in addition, separation of assets yields a more robust risk management model. In this paper, the assets are separated into Identity, Message, Service, Deployment Environment, and Transaction, so that the risks and countermeasures can be understood, and the elements and constraints dealt with, in their own domain to the extent possible.
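As a rough illustration of the separation, the five asset domains can be modeled as a simple lookup structure. The risks and countermeasures listed here are my own examples, not taken from the paper:

```python
# Illustrative only: the paper's five asset domains, populated with example
# (not exhaustive) risks and countermeasures chosen for this sketch.
ASSET_DOMAINS = {
    "Identity": {"risks": ["credential theft", "impersonation"],
                 "countermeasures": ["SAML assertions", "strong authentication"]},
    "Message": {"risks": ["tampering", "disclosure in transit"],
                "countermeasures": ["XML signature", "XML encryption"]},
    "Service": {"risks": ["malformed input", "schema poisoning"],
                "countermeasures": ["input validation", "schema hardening"]},
    "Deployment Environment": {"risks": ["host compromise"],
                               "countermeasures": ["OS hardening", "least privilege"]},
    "Transaction": {"risks": ["replay", "repudiation"],
                    "countermeasures": ["timestamps/nonces", "audit logging"]},
}

def countermeasures_for(asset: str) -> list:
    """Look up countermeasures in the asset's own domain."""
    return ASSET_DOMAINS[asset]["countermeasures"]
```

The value of the structure is that each risk is analyzed against the asset it actually threatens, rather than against "the system" as an undifferentiated whole.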

January 19, 2006 in Deperimeterization, Federation, Identity Services, SDLC, Security, Security Architecture, Separation of Assets, SOA, Software Architecture, SOS, Use Cases | Permalink | Comments (0)

SAML 2.0 Federated Identity Standards Convergence

Patrick Harding articulates what SAML 2.0 may mean to the industry:

Until this year, identity federation has suffered from the problem of too many standards. Companies that deployed federation before the fourth quarter were forced to deal with five incompatible protocols: OASIS Security Assertion Markup Language 1.0 and 1.1, Liberty Alliance ID-FF 1.1 and 1.2 and Shibboleth. The result was a complex matrix of enterprise and consumer use cases, protocols and implementations that slowed the growth and increased the cost of federation deployments.

The Organization for the Advancement of Structured Information Standards (OASIS), the Liberty Alliance and Shibboleth have since joined forces to create a single standard that would make their previous work obsolete. The result is SAML 2.0, which OASIS ratified in March and is beginning to appear in vendor products. SAML 2.0 radically alters the federation landscape by removing the largest barrier to increased federation adoption: multiprotocol complexity.
...

SAML 2.0 incorporates every critical-use case and feature from every predecessor protocol into a single standard. As it represents a superset of all the functionality in all five predecessors, SAML 2.0 makes them obsolete.


SAML 2.0 describes two roles for enabling federation; the service provider is the entity that makes an application or resource available to the user, while the identity provider is responsible for authenticating the user. The service provider and the identity provider exchange messages to enable single sign-on and single log-out. These message exchanges can be initiated by the identity provider or the service provider.


For single sign-on, the identity provider is responsible for creating a SAML assertion that contains the identity of a user and then securely sends that assertion to the service provider. The service provider is responsible for validating the SAML assertion before letting the user access the application.

Since the whole point of federation is to port identity information across domains and render it usefully, convergence of the myriad of standards is a welcome development.
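A minimal sketch of part of the service provider's validation step described above, using only the Python standard library. It checks the assertion's Conditions validity window and audience restriction; a real service provider must also verify the XML signature and subject confirmation, which are omitted here, and the sample assertion is hypothetical:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def check_conditions(assertion_xml: str, audience: str, now: datetime) -> bool:
    """Check the <Conditions> validity window and audience restriction of a
    SAML 2.0 assertion. Signature and subject-confirmation checks, which a
    real SP also requires, are deliberately out of scope for this sketch."""
    root = ET.fromstring(assertion_xml)
    cond = root.find("saml:Conditions", NS)
    if cond is None:
        return False
    not_before = datetime.fromisoformat(cond.get("NotBefore").replace("Z", "+00:00"))
    not_on_or_after = datetime.fromisoformat(cond.get("NotOnOrAfter").replace("Z", "+00:00"))
    if not (not_before <= now < not_on_or_after):
        return False
    audiences = [a.text for a in cond.findall(".//saml:Audience", NS)]
    return audience in audiences

ASSERTION = """<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Conditions NotBefore="2006-01-18T12:00:00Z" NotOnOrAfter="2006-01-18T12:05:00Z">
    <AudienceRestriction><Audience>https://sp.example.com</Audience></AudienceRestriction>
  </Conditions>
</Assertion>"""

now = datetime(2006, 1, 18, 12, 1, tzinfo=timezone.utc)
print(check_conditions(ASSERTION, "https://sp.example.com", now))  # True
```

Note that the window check and the audience check fail closed: a missing Conditions element, an expired window, or a wrong audience each cause rejection.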

January 18, 2006 in Deperimeterization, Federation, Identity Services, SAML, Security, Security Architecture, Software Architecture | Permalink | Comments (2)

Scaling Federation in 06

Andre's blog also contains another entry on Federation in 06, starting with Burton Group's take:

  • Long term, federation isn’t a separate product
  • Federation standards already seeping into many product classes: Firewalls, gateways, application servers, and IdM products
  • Federation likely won’t be point-to-point like SSL; various tiers of the infrastructure will act on claims as necessary
  • Systems need to federate, but that doesn’t necessitate an uber-federation system
Then he pulls in Ping's CTO, Patrick Harding, in an email excerpt in which Patrick points out the limitations of scaling federation without some shared infrastructure on a separate layer. Drawings of two options below:

Federation everywhere

Federation with a separate federation layer

I have described the need to separate out an Identity Abstraction layer in the BuildSecurityIn paper on Identity in Assembly and Integration. Patrick points out that a system that lacks a separate federation system is analogous to having all systems run their own PKI (scary, I know), and he describes the positive impact that this separation can have on the scalability of the federated identity system. There are many other gains as well, such as simplifying the developer's experience through abstraction of the back-end resources and technologies, interoperability, and pluggability. Lastly, the separate federation layer supports the 5th Law of Identity, Pluralism of Operators and Technologies, which states:

So when it comes to digital identity, it is not only a matter of having identity providers run by different parties (including individuals themselves), but of having identity systems that offer different (and potentially contradictory) features.

A universal system must embrace differentiation, while recognizing that each of us is simultaneously—and in different contexts—a citizen, an employee, a customer, and a virtual persona.

Abstraction is about the best tool we have in programming; it is nice that in 2005 we are actually using it for identity.

December 23, 2005 in Federation, Identity Services, Security, Security Architecture, Software Architecture, STS | Permalink | Comments (2)

Andre Durand Interview on Federation

In this interview, Ping Identity CEO/founder Andre Durand discusses the recent Trustgenix acquisition, the state of federated identity (are we still waiting for the big bang?), and the ideas behind starting Ping.

The next five years might be characterized by the entire ID management stack becoming standardized, where the interfaces between everything are standards. What's interesting about all of this is while you have a tightly integrated, proprietary suite from the vendors on the one hand, which is very self-serving, on the other hand, you have almost the opposite thing happening: a modular, loosely-coupled stack with standards in between.

With this, companies can pick and choose best-of-breed authentication and tie it to best-of-breed policy, where all of the vendor products are interoperable. Therein lies the big, long-term opportunity for Ping because we started at the beginning of this modularization.

December 23, 2005 in Federation, Identity Services, Security, Security Architecture, Software Architecture | Permalink | Comments (0)

Proportion of Security Applied to "dumb" part of system

Richard Bejtlich blogs:

One of the strengths of the Internet has been the fact that it inverted the telecom model, where the network was smart and the end device (the phone) was dumb. The traditional Internet featured a relatively dumb network whose main job was to get traffic from point A to point B. The intelligence was found in those end points. This Internet model simplified troubleshooting and allowed a plethora of protocols to be carried from point A to point B.

With so-called "intelligent networking," points A and B have to be sure that the network will transmit their conversation, and not block, modify, or otherwise interfere with that exchange to the detriment of the end hosts. As a security person I am obviously in favor of efforts to enforce security policies, but I am not in favor of adding another layer of complexity on top of existing infrastructures if it can be avoided.

Now, network security mechanisms are great, and networks are a great place to deploy some security mechanisms, because they have the potential for visibility and scalability at a system level. But what about the "smart" part of the system? Shouldn't it have security, too? What is the proportion of your organization's IT security spend on the network versus the other areas of the system? When you focus on securing the network you are improving the assurance of your dialtone, but what about payloads, logic, and behavior?

It is much more difficult in some cases to map security onto the apps, databases, hosts, et al., which is why we have not seen a huge vendor presence in this space, and vendors drive a lot of the security market. But that does not mean we should not do it; after all, that is the solution space. Bruce Sterling wrote in Tomorrow Now that our current healthcare and medicine models are obsessed with hygiene, but in reality hygiene is just admitting that we are clueless about microbes and the immune system -- you know -- the stuff that actually keeps us alive. Hey, I am not against hacking away and using things like network security and hygiene, I even use them both myself, but let's not think that these are end goals in and of themselves.

December 22, 2005 in Deperimeterization, Identity Services, Risk Management, Security, Security Architecture, Software Architecture, SOS | Permalink | Comments (0)

Assurance Techniques Review

Earlier, I blogged about an excellent paper by Brian Snow of the NSA called "We Need Assurance!" Snow explores several techniques we can use to increase assurance in our systems: operating systems, software modules, hardware features, systems engineering, third party testing, and legal constraints. From a security architecture point of view this breadth is useful, since none of these mechanisms alone is sufficient, but together they may create a greater level of assurance.

Snow on Operating systems:

Even if operating systems are not truly secure, they can remain at least benign (not actively malicious) if they would simply enforce a digital signature check on every critical module prior to each execution. Years ago, NSA's research organization wrote test code for a UNIX system that did exactly that. The performance degraded about three percent. This is something that is doable!

Operating systems should be self-protective and enforce (at a minimum) separation, least-privilege, process-isolation, and type enforcement.

They should be aware of and enforce security policies! Policies drive requirements. Recall that Robert Morris, a prior chief scientist for the National Computer Security Center, once said: "Systems built without requirements cannot fail; they merely offer surprises - usually unpleasant!"

In the section on operating systems, Snow goes on to call them the Black Hole of security. The current state of operating system quality is far from where it needs to be in terms of achieving even Snow's pragmatic goal: not achieving security, only remaining benign. As with all of the techniques Snow discusses in the paper, we have techniques today to improve the situation. Thinking architecturally, with an actively malicious operating system mediating application, data, and network activity, we are building on a "foundation" made of plywood and termites. RBAC systems, digital signatures, MLS systems, and diversification all have the potential to improve what we have out there today.
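Snow's "digital signature check on every critical module prior to each execution" can be sketched in a few lines. For simplicity this sketch uses an HMAC as a stand-in for a real public-key signature (so the verifier here holds a secret key, which a production system would avoid); the key and the sample module are hypothetical:

```python
import hashlib
import hmac

# Stand-in for a signing authority: a real system would use public-key
# signatures so the verifier holds no secret. Key and module are hypothetical.
SIGNING_KEY = b"demo-key"

def sign_module(module_bytes: bytes) -> str:
    """Produce the signature recorded in the manifest at build/signing time."""
    return hmac.new(SIGNING_KEY, module_bytes, hashlib.sha256).hexdigest()

def load_if_verified(module_bytes: bytes, expected_sig: str) -> dict:
    """Refuse to execute a module whose signature does not match the manifest."""
    actual = sign_module(module_bytes)
    if not hmac.compare_digest(actual, expected_sig):
        raise PermissionError("module signature mismatch; refusing to execute")
    namespace = {}
    exec(compile(module_bytes, "<verified-module>", "exec"), namespace)
    return namespace

module = b"def answer():\n    return 42\n"
manifest_sig = sign_module(module)           # recorded once, at signing time
ns = load_if_verified(module, manifest_sig)  # re-checked before each execution
print(ns["answer"]())  # 42
```

The key property is the one Snow names: the loader does not make the module *secure*, it keeps the platform *benign* by refusing to run anything that was tampered with after signing.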

Snow on Software modules:

Software modules should be well documented, written in certified development environments..., and fully stress-tested at their interfaces for boundary-condition behavior, invalid inputs, and proper commands in improper sequences.

In addition to the usual quality control concerns, bounds checking and input scrubbing require special attention.
...
A good security design process requires review teams as well as design teams, and no designer should serve on the review team. They cannot be critical enough of their own work.

BuildSecurityIn has many techniques for improving assurance in software, as do books by Gary McGraw, Mike Howard, Ken van Wyk, and others. Source code analysis and binary analysis tools, again, are tools we can work with *today* to ensure our code is not as vulnerable as what is currently in production. Collaboration by security across the separate disciplines of requirements, architecture, design, development, and testing is absolutely critical. The security team should align its participation to how software development is done. In turn, software development teams must build security into their processes. Architects, in particular, bear responsibility here: architects own the non-functional requirements (of which security is one) and they have a horizontal view of the system, so they need to collaborate with the security team to generate the appropriate involvement and placement of security system-wide. The paradigm of blaming the business ("they did not give me enough time to write secure code") or blaming the developer ("gee, why didn't they just write it securely?") is a non-starter. Architects need to synthesize concerns in harmony with the risk management decisions from the security team.

Snow on Hardware features:

Consider the use of smartcards, smart badges, or other tokens for critical functions. Although more costly than software, when properly implemented the assurance gain is great. The form factor is not as important as the existence of an isolated processor and address space for assured operations - an "Island of Security" if you will.

I blogged yesterday that smartcards have a ton of promise here. The economics of hardware and advances in software security (like PSTS) are really increasing the practicality of deploying these solutions. Having a VM (JVM or CLR), an XML parser, an STS, some security mechanisms, and an IP stack will make smartcards a primary security tool that is cost-effective to deploy now and/or in the very near future.

Snow on Software systems engineering:

How do we get high assurance in commercial gear?
a) How can we trust, or
b) If we cannot trust, how can we safely use, security gear of unknown quality?
Note the difference in the two characterizations above: how we phrase the question may be important.

This is a fundamental software architecture question. In an SOA world it is even more relevant, because we cannot "trust" the system. The smartcard example above lets us gain traction on solutions that can help answer the "safely use" challenge. Security in use case modeling can help to show the actual use of the system and what assurance concerns need to be addressed.

More on Software systems engineering:

Synergy, where the strength of the whole is greater than the sum of the strength of the parts, is highly desirable but not likely. We must avoid at all costs the all-too-common result where the system strength is less than the strength offered by the strongest component, and in some worst cases less than the weakest component present. Security is fragile under composition; in fact, secure composition of components is a major research area today.

Good system security design today is an art, not a science. Nevertheless, there are good practitioners out there that can do it. For instance, some of your prior distinguished practitioners fit the bill.

This area of "safe use of inadequate" is one of our hardest problems, but an area where I expect some of the greatest payoffs in the future and where I invite you to spend effort.

I have written about Collaboration in Software Development (2, 3); too often, system design, which is essentially an exercise in tradeoff analysis, is dealt with through dualistic notions of "secure" and "trusted". In many cases the word security creates more harm than good. Specificity of the assurance goals at a granular level, from requirements through architecture, design, coding, and testing, is a big part of the way forward. Again, we do not need to wait for a magical security tool to do this; we can begin today.

The last two areas discussed are third party testing, which reinforces the integrity of separation of duties at design and construction time, and market/legal/regulatory constraints. These groups and forces can be powerful allies for architects wishing to create more assurance in their systems. One of the challenges for technical people when dealing with these audiences is effective translation of geekspeak into something understandable by the wider audience. In particular, the ability to quantify risk is a very effective way to show what the impact could be for the enterprise. The Ponemon Institute has some excellent data on this, and more is coming all the time; quantifiable risk data says in black-and-white business terms what the business needs to deal with.
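One common (if admittedly rough) way to put a risk into black-and-white business terms is annualized loss expectancy: ALE = SLE x ARO. The figures below are hypothetical:

```python
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO: the classic back-of-the-envelope way to express a
    technical risk as an expected annual cost the business can weigh."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures for a laptop holding unencrypted personal records:
sle = 250_000.0  # estimated cost per incident (notification, credit monitoring, ...)
aro = 0.2        # estimated one such loss every five years
print(f"ALE: ${annualized_loss_expectancy(sle, aro):,.0f} per year")  # ALE: $50,000 per year
```

The number itself is only as good as the two estimates behind it, but it translates "we have a data exposure risk" into a figure a budget holder can compare against the cost of the countermeasure.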

Last thought, from Brian Snow:

It is not adequate to have the techniques; we must use them!

We have our work cut out for us; let's go do it.

December 22, 2005 in Deperimeterization, Identity Services, Risk Management, Security, Security Metrics, SOA, Software Architecture, SOS, STS | Permalink | Comments (1)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015

More...
