1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

WResting Security Interop

Andre Durand posted some thoughts of mine on WS-* and REST:

WS-* and REST are often portrayed as competing technologies. While their use cases do overlap, there are many instances where each has its logical place in a given system architecture. Both WS-* and REST are focused on interoperability, so if any two technologies should be able to work together, it is these two.

REST's approach gives developers an efficient way to build and deploy web services using existing technologies that are typically already deployed and scaled out in the enterprise. REST does not provide frameworks to handle the declarative, configuration-driven qualities such as security and QoS that WS-* does, but that is not the point of REST. REST-style services benefit from integration with security services, such as authentication services that provide increased assurance through strong authentication mechanisms (two-factor, OTP, and so on), as long as those services plug into the existing infrastructure that REST builds on.

Many of the core WS-* use cases are designed for transactional middleware systems like an enterprise service bus. REST was not designed for this purpose, though there are many cases where architects will want their REST services to integrate with their ESB. The goal of this integration is to preserve the development and deployment efficiency gains of REST while allowing REST services to plug into an ESB and other WS-* systems, bridging the protocols with an STS that brokers communications and allows for a robust, integrated security model.

So what does all this mean to security architects? WS-* has a very useful set of security standards and tools; REST does not. WS-* has ways to deal with securing identity, messages, tokens, and so on, while securing REST is more analogous to securing a web app - things are likely to be much more unstructured and customized. Each requires a unique security model, and the integration point between the two security realms benefits from an interoperable security service, such as an STS that exchanges tokens between the two realms.
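
To make the STS brokering concrete, here is a minimal Python sketch of a single token-exchange step; the endpoint, parameter names, and helper function are illustrative assumptions, not any particular product's API.

    import requests  # assumed HTTP client; any will do

    STS_EXCHANGE_URL = "https://sts.example.com/token/exchange"  # hypothetical STS endpoint

    def exchange_for_saml(rest_session_token: str) -> str:
        """Ask the STS to exchange a token from the REST/web realm for a
        SAML assertion that the WS-* realm (e.g. the ESB) will accept."""
        resp = requests.post(
            STS_EXCHANGE_URL,
            data={
                "grant_type": "token-exchange",       # illustrative parameter names
                "subject_token": rest_session_token,  # what the caller already holds
                "requested_token_type": "saml2-assertion",
            },
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()["issued_token"]  # assertion to carry in the WS-Security header

    # Usage sketch: the REST front end authenticates the caller as usual,
    # then brokers a WS-* call through the ESB using the exchanged assertion.
    # assertion = exchange_for_saml(session_token)
    # call_esb_service(assertion, payload)   # hypothetical helper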

June 27, 2006 in Computer Security, REST, Security Architecture, Software Architecture, STS, Web Services | Permalink | Comments (0)

SOA and Web Service Security Metrics in Leuven

In Leuven at the OWASP AppSec conference, the participants in my SOA, Web Services, and XML Security class and I built this set of security metrics for measuring security in a Web Services environment.

The base case includes a distributor's Enterprise Service Bus that brokers services between a manufacturer's Web Services client and a set of supplier Web Services providers.

The metrics map examines specific metrics for an XML Security Gateway, a Security Token Service (STS), the ESB, the overall system, and the services themselves. This is not a complete set, but it addresses many areas where commercial systems are blind.
[Figure: Leuven SOA and Web Services security metrics map]
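
As a rough illustration of how a couple of these metrics might be tracked, the sketch below derives a schema-validation failure rate for the XML Security Gateway and an average token-issuance latency for the STS from raw counters; the specific metric names and inputs are assumptions, not the full map from the class.

    from dataclasses import dataclass

    @dataclass
    class GatewayCounters:
        messages_in: int          # messages that reached the XML Security Gateway
        schema_failures: int      # messages rejected by schema validation
        signature_failures: int   # messages with invalid XML Digital Signatures

    @dataclass
    class StsCounters:
        tokens_issued: int
        total_issue_millis: int   # summed issuance latency

    def gateway_metrics(g: GatewayCounters) -> dict:
        return {
            "schema_failure_rate": g.schema_failures / max(g.messages_in, 1),
            "signature_failure_rate": g.signature_failures / max(g.messages_in, 1),
        }

    def sts_metrics(s: StsCounters) -> dict:
        return {
            "avg_token_issue_ms": s.total_issue_millis / max(s.tokens_issued, 1),
        }

    # Example: report on one day's (invented) counters
    print(gateway_metrics(GatewayCounters(120_000, 340, 12)))
    print(sts_metrics(StsCounters(tokens_issued=80_000, total_issue_millis=2_400_000)))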

May 29, 2006 in Computer Security, Defense in Depth, Deperimeterization, Economics, OWASP, Security, Security Architecture, Security Metrics, SOA, Software Architecture, STS, Web Services | Permalink | Comments (0)

Web Services and XML Security Training at OWASP Europe (Belgium)

I will be teaching a one-day course on Web Services and XML Security at the OWASP Europe conference. I enjoy the OWASP conferences: there is a good mix of security folks, developers, and architects; they are vendor neutral; many different industries are represented; and they are usually held in a nice location.

The focus areas of my class are:

  • Web Services attack patterns
  • Common XML attack patterns
  • Data and XML security using WS-Security, SAML, XML Encryption and XML Digital Signature
  • Identity services and federation with SAML and Liberty
  • Hardening Web Services servers
  • Input validation for Web Services
  • Integrating Web Services securely with backend resources and applications using WS-Trust
  • Secure Exception handling in Web Services

The class explores standard secure coding and application security issues and looks at new risks and countermeasures that are present in Web Services, SOA, and XML paradigms.
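
The input validation item from the list above is the easiest one to show in a few lines; below is a minimal sketch of schema-based validation of an inbound message using the lxml library, with an illustrative schema file name and a reject-on-failure policy assumed.

    from lxml import etree

    # Parser hardened against entity-expansion tricks; schema path is illustrative.
    parser = etree.XMLParser(resolve_entities=False, no_network=True)
    schema = etree.XMLSchema(etree.parse("purchase_order.xsd"))

    def validate_request(raw_xml: bytes):
        """Parse and schema-validate an inbound Web Services payload.
        Malformed XML or schema violations raise, and callers treat either
        as a rejected request rather than a crash."""
        doc = etree.fromstring(raw_xml, parser)   # raises XMLSyntaxError on malformed input
        schema.assertValid(doc)                   # raises DocumentInvalid on schema violations
        return doc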

January 24, 2006 in Deperimeterization, Federation, OWASP, Risk Management, SAML, SDLC, Security, Security Architecture, SOA, Software Architecture, SOS, STS, Web Security, Web Services, XML | Permalink | Comments (0)

Design for Failure

ACM's interview of Bruce Lindsay by Steve Bourne is a classic. Designing for failure is something a lot of people talk about, but not with much specificity. Not so with this interview:

SB Are you really thinking of system failures as opposed to user errors?

BL I don’t think of user errors, such as improper input kinds of things, as “failures.” Those are normal occurrences. I don’t think of a compiler saying you misspelled goto as really being an error. That’s expected.

If you look at the OWASP Top Ten, for example, or other lists like SANS, many of the so-called security vulnerabilities go back to input validation issues. But bad input is normal, and even input that is not intended to be malicious, for example due to poor training, causes faults that can compromise the system.

SB One thing we could explore here is what techniques we have in our toolkit for error detection.

BL Fault handling always begins with the detection of the fault—most often by use of some kind of redundancy in the system, whether it be parity, sanity checks in the code where we may spend 10 percent of the lines of code checking the consistency of our state to see if we should go into error handling. Timeout is not exactly a redundancy but that’s another way of detecting errors.

The key point here is that if you go to sea with only one clock, you can’t tell whether it’s telling you the right time. You need to have some way to check. For example, if you read a message from a network, you might want to check the header to see if it is really a message that you were expecting—that is, look for some bits and some position that says, “Aha, this seems like I should go further.”

Or if you read from the disk, you might want to check a label on the disk block to see if it was the block you thought you were asking for. It’s always some kind of redundancy in the state that allows you to detect the occurrence of an error. If you hadn’t thought about failures, why would you put the address of a disk block into the disk block?

SB So, really what you’re trying to do is establish confidence in your belief about what’s going on in the system?

BL In a large sense, that’s right. And to validate, as you go, that the data or state upon which you’re going to operate is self-consistent.

Self-consistency echoes an assurance goal advocated by Brian Snow, who asks how we can safely use security gear that we cannot trust.
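
Lindsay's point about redundancy in the state translates almost directly into code. Here is a small sketch (mine, not from the interview) of checking a disk block's embedded address and checksum before trusting its contents; the device API and block layout are assumptions.

    import hashlib

    def read_block(device, expected_address: int, block_size: int = 4096) -> bytes:
        """Read a block and verify the redundancy written into it:
        the block's own address and a checksum over its payload."""
        raw = device.read_at(expected_address, block_size)   # hypothetical device API
        stored_address = int.from_bytes(raw[:8], "big")
        stored_digest = raw[8:40]                            # 32-byte SHA-256 digest
        payload = raw[40:]
        if stored_address != expected_address:
            raise IOError(f"block claims address {stored_address}, expected {expected_address}")
        if hashlib.sha256(payload).digest() != stored_digest:
            raise IOError("block checksum mismatch; entering error handling")
        return payload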

SB Once you’ve detected the error, now what? You can report it, but the question is who do you report it back to and what do you report back?

BL There are two classes of detection. One is that I looked at my own guts and they didn’t look right, and so I say this is an error situation. The other is I called some other component that failed to perform as requested. In either case, I’m faced with a detected error. The first thing to do is fold your tent—that is, put the state back so that the state that you manage is coherent. Then you report to the guy who called you, possibly making some dumps along the way, or you can attempt alternate logic to circumvent the exception.

In our database projects, what typically happens is it gets reported up, up, up the chain until you get to some very high level that then says, “Oh, I see this as one of those really bad ones. I’m going to initiate the massive dumping now.” When you report an error, you should classify it. You should give it a name. If you’re a component that reports errors, there should be an exhaustive list of the errors that you would report.

That’s one of the real problems in today’s programming language architecture for exception handling. Each component should list the exceptions that were raised: typically if I call you and you say that you can raise A, B, and C, but you can call Joe who can raise D, E, and F, and you ignore D, E, and F, then I’m suddenly faced with D, E, and F at my level and there’s nothing in your interface that said D, E, and F errors were things you caused. That seems to be ubiquitous in the programming and the language facilities. You are never required to say these are all the errors that might escape from a call to me. And that’s because you’re allowed to ignore errors. I’ve sometimes advocated that, no, you’re not allowed to ignore any error. You can reclassify an error and report it back up, but you’ve got to get it in the loop.

Security exceptions in many cases require special handling, so that they may be softened before any data is returned to a client or log file. The programming language support for error detection, softening, and reporting is pathetic; programmers must literally grow their own wheat to make bread in this case.
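
Here is a minimal sketch of the softening idea in plain Python: classify the error, keep the full detail server-side with a correlation reference, and return only a generic, named error to the caller. The class and helper names are made up for illustration.

    import logging
    import uuid

    log = logging.getLogger("service")

    class ServiceFault(Exception):
        """The only exception type allowed to cross the service boundary."""
        def __init__(self, code: str, reference: str):
            super().__init__(f"{code} (ref {reference})")
            self.code, self.reference = code, reference

    def soften(operation):
        """Wrap a handler so internal detail never leaks to the client."""
        def wrapper(*args, **kwargs):
            try:
                return operation(*args, **kwargs)
            except ServiceFault:
                raise                                          # already classified
            except Exception:
                ref = uuid.uuid4().hex                         # correlation id for operators
                log.exception("unhandled fault, ref=%s", ref)  # full stack trace stays server-side
                raise ServiceFault("INTERNAL_ERROR", ref)      # generic, named error to the caller
        return wrapper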

SB One of the interesting aspects of this is the trade-off between how long it takes to detect something and how much time you really have to recover in the system.

BL And what it means to remove the failed component, because there is a split brain problem that I think you're out and you think I'm out. Who's in charge?

SB Right, and while they're arguing about it, nothing is happening.

BL That's also possible, although some of these systems can continue service. It's rare that a distributed system needs the participation of all active members to perform a single action.

There is also the issue of dealing out the failed members. If there are five of us and four of us think that you’re dead, the next thing to do is make sure you’re dead by putting three more bullets in you.

We see that particularly in the area of shared management of disks, where you have two processors, two systems, connected to the same set of storage. The problem is that if one system is going to take over the storage for the other system, then the first system better not be using the storage anymore. We actually find architectural facilities in the storage subsystems for freezing out participants—so-called fencing facilities.

So if I think you’re dead and I want to take over your use and responsibility for the storage, the first thing I want to do is tell the storage, “Pay no attention to him anymore. I’m in charge now.” If you were to continue to use the storage while I blithely go forward and think I’m the one who’s in charge of it, terrible things can happen.

There are two aspects of collaboration in distributed systems. One is figure out who’s playing, and the second one is, if someone now is considered not playing, make damn sure somehow that they’re not playing.

This lesson applies to a lot of security in distributed systems, including authentication systems, SE(I)Ms, XML gateways, and a lot more. Part of the solution is to use tools such as an STS so that "security" does not function as some dualistic boolean but rather as a composable domain with its own rules, behavior, and logic to adapt to these situations, since the languages and servers cannot do it without a tremendous amount of fu. Remember that Survivability has three R's: Resistance, Recognition, and Recovery; the Anasazi certainly did.
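
The fencing idea is also easy to sketch; below is a toy storage service (not from the interview) that honors an epoch-style fencing token, so a node that the survivors have voted out cannot keep writing no matter what it believes about its own health.

    class FencedStorage:
        """Storage that only accepts writes carrying the current fencing epoch.
        When the survivors take over, they bump the epoch; the old owner's
        writes are refused regardless of what it thinks about its own state."""

        def __init__(self):
            self.current_epoch = 0
            self.data = {}

        def fence(self, new_epoch: int) -> None:
            if new_epoch <= self.current_epoch:
                raise PermissionError("stale takeover attempt")
            self.current_epoch = new_epoch   # "pay no attention to him anymore"

        def write(self, epoch: int, key: str, value: bytes) -> None:
            if epoch < self.current_epoch:
                raise PermissionError("writer has been fenced out")
            self.data[key] = value

    # Usage sketch: node B takes over from node A
    # store.fence(new_epoch=2)        # B declares itself in charge
    # store.write(1, "k", b"...")     # A's late write is rejected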

 

January 18, 2006 in Security, Security Architecture, Software Architecture, STS, Survivability | Permalink | Comments (0)

Scaling Federation in 06

Andre's blog also contains another entry on Federation in 06, starting with Burton Group's take:

  • Long term, federation isn’t a separate product
  • Federation standards already seeping into many product classes: Firewalls, gateways, application servers, and IdM products
  • Federation likely won’t be point-to-point like SSL; various tiers of the infrastructure will act on claims as necessary
  • Systems need to federate, but that doesn’t necessitate an uber-federation system

Then he pulls in Ping's CTO, Patrick Harding, in an email excerpt in which Patrick points out the limitations of scaling federation without some shared infrastructure on a separate layer. Drawings of the two options are below:

    Federation everywhere

    Federation with a separate federation layer

    I have described the need for a separate Identity Abstraction layer in the BuildSecurityIn paper on Identity in Assembly and Integration. Patrick points out that a system lacking a separate federation layer is analogous to having every system run its own PKI (scary, I know). Patrick describes the positive impact that this separation can have on the scalability of the federated identity system. There are many other gains, such as simplifying the developer's experience through abstraction of the back-end resources and technologies, interoperability, and pluggability. Lastly, the separate federation layer supports the 5th Law of Identity, Pluralism of Operators and Technologies, which states:

    So when it comes to digital identity, it is not only a matter of having identity providers run by different parties (including individuals themselves), but of having identity systems that offer different (and potentially contradictory) features.

    A universal system must embrace differentiation, while recognizing that each of us is simultaneously—and in different contexts—a citizen, an employee, a customer, and a virtual persona.


    Abstraction is about the best tool we have in programming; it is nice that in 2005 we are actually using it for identity.
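
    A rough sketch of what the abstraction buys the developer: application code asks one interface for a token for a target audience, and the federation layer decides whether that means SAML or something else. The class and method names here are invented for illustration.

        from abc import ABC, abstractmethod

        class IdentityProvider(ABC):
            """One pluggable backing technology behind the identity abstraction layer."""
            @abstractmethod
            def issue(self, subject: str, audience: str) -> str: ...

        class SamlProvider(IdentityProvider):
            def issue(self, subject: str, audience: str) -> str:
                # Placeholder assertion; a real provider would call an STS or toolkit.
                return f"<saml:Assertion subject='{subject}' audience='{audience}'/>"

        class IdentityAbstractionLayer:
            """What application code sees: one call, regardless of token technology."""
            def __init__(self):
                self._providers: dict[str, IdentityProvider] = {}

            def register(self, audience: str, provider: IdentityProvider) -> None:
                self._providers[audience] = provider

            def token_for(self, subject: str, audience: str) -> str:
                return self._providers[audience].issue(subject, audience)

        # Usage sketch
        # ial = IdentityAbstractionLayer()
        # ial.register("partner-portal", SamlProvider())
        # token = ial.token_for("alice", "partner-portal")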

    December 23, 2005 in Federation, Identity Services, Security, Security Architecture, Software Architecture, STS | Permalink | Comments (2)

    Assurance Techniques Review

    Earlier, I blogged about an excellent paper by Brian Snow of the NSA called "We Need Assurance!" Brian Snow explores several techniques we can use to increase assurance in our systems: operating systems, software modules, hardware features, systems engineering, third-party testing, and legal constraints. From a security architecture point of view this breadth is useful, since none of these mechanisms alone is sufficient, but together they may create a greater level of assurance.

    Snow on Operating systems:

    Even if operating systems are not truly secure, they can remain at least benign (not actively malicious) if they would simply enforce a digital signature check on every critical module prior to each execution. Years ago, NSA's research organization wrote test code for a UNIX system that did exactly that. The performance degraded about three percent. This is something that is doable!

    Operating systems should be self-protective and enforce (at a minimum) separation, least-privilege, process-isolation, and type enforcement.

    They should be aware of and enforce security policies! Policies drive requirements. Recall that Robert Morris, a prior chief scientist for the National Computer Security Center, once said: "Systems built without requirements cannot fail; they merely offer surprises - usually unpleasant!"

    In the section on operating systems, Snow goes on to call them the black hole of security. The current state of operating system quality is far from where it needs to be in terms of achieving even Snow's pragmatic goal: not achieving security, only remaining benign. As with all of the techniques Snow discusses in the paper, we have techniques today to improve the situation. Thinking architecturally, with an actively malicious operating system mediating application, data, and network activity, we are building on a "foundation" made of plywood and termites. RBAC systems, digital signatures, MLS systems, and diversification all have potential to improve what we have out there today.
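
    Snow's digital signature check on every critical module prior to execution can be sketched in a few lines; this version uses the Python cryptography package and RSA keys purely as an example of the shape of the check - the point is that unverified code never reaches the loader.

        from pathlib import Path
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def verify_before_load(module_path: str, sig_path: str, pubkey_pem: bytes) -> bytes:
            """Refuse to hand back module bytes unless the detached signature verifies."""
            public_key = serialization.load_pem_public_key(pubkey_pem)
            module_bytes = Path(module_path).read_bytes()
            signature = Path(sig_path).read_bytes()
            try:
                public_key.verify(signature, module_bytes,
                                  padding.PKCS1v15(), hashes.SHA256())
            except InvalidSignature:
                raise RuntimeError(f"refusing to load {module_path}: bad signature")
            return module_bytes   # only signed code reaches the loader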

    Snow on Software modules:

    Software modules should be well documented, written in certified development environments..., and fully stress-tested at their interfaces for boundary-condition behavior, invalid inputs, and proper commands in improper sequences.

    In addition to the usual quality control concerns, bounds checking and input scrubbing require special attention.
    ...
    A good security design process requires review teams as well as design teams, and no designer should serve on the review team. They cannot be critical enough of their own work.

    BuildSecurityIn has many techniques for improving assurance in software, as do books by Gary McGraw, Mike Howard, Ken van Wyk, and others. Source code analysis and binary analysis tools, again, are tools we can work with *today* to ensure our code is not as vulnerable as what is currently in production. Collaboration by security across the separate disciplines of requirements, architecture, design, development, and testing is absolutely critical. The security team should align its participation to how software development is done. In turn, software development teams must build security into their processes. Architects, in particular, bear responsibility here. Architects own the non-functional requirements (of which security is one) and have a horizontal view of the system; they need to collaborate with the security team to generate the appropriate involvement and placement of security system-wide. The paradigm of blaming the business ("they did not give me enough time to write secure code") or blaming the developer ("gee - why didn't they just write it securely") is a non-starter. Architects need to synthesize concerns in harmony with the risk management decisions from the security team.
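
    Snow's call to stress-test interfaces at boundary conditions is something teams can start on immediately; here is a small example of the kind of table-driven test meant, written against a hypothetical parse_quantity function with an assumed valid range of 1 to 9999.

        import pytest  # assumes the team already uses pytest

        from orders import parse_quantity  # hypothetical function under test

        # Boundary conditions, invalid inputs, and hostile strings all deserve
        # explicit cases, not just the happy path.
        @pytest.mark.parametrize("raw, expected_ok", [
            ("1", True),            # lower boundary
            ("9999", True),         # upper boundary
            ("0", False),           # just below the valid range
            ("10000", False),       # just above the valid range
            ("-1", False),
            ("1e3", False),         # numeric-looking but not an integer
            ("DROP TABLE", False),  # hostile input should fail closed, not crash
            ("", False),
        ])
        def test_parse_quantity_boundaries(raw, expected_ok):
            if expected_ok:
                assert 1 <= parse_quantity(raw) <= 9999
            else:
                with pytest.raises(ValueError):
                    parse_quantity(raw)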

    Snow on Hardware Features:

    Consider the use of smartcards, smart badges, or other critical functions. Although more costly than software, when properly implemented the assurance gain is great. The form factor is not as important as the existence of an isolated processor and address space for assured operations - an "Island of Security" if you will.

    I blogged yesterday that smartcards have a ton of promise here. The economics of hardware and advances in software security (like PSTS) are really increasing the practicality of deploying these solutions. Having a VM (JVM or CLR), an XML parser, an STS, some security mechanisms, and an IP will make smartcards a primary security tool that is cost effective to deploy now and/or in the very near future.

    Snow on Software systems engineering:

    How do we get high assurance in commercial gear?
    a) How can we trust, or
    b) If we cannot trust, how can we safely use, security gear of unknown quality?
    Note the difference in the two characterizations above: how we phrase the question may be important.

    This is a fundamental software architecture question. In a SOA world it is even more relevant because we cannot "trust" the system. The smartcard example from above allows us to gain traction on solutions that can help answer "the safely use" challenge. Security in Use Case Modeling can help to show the actual use of the system and what assurance concerns need to be addressed.

    More on Software systems engineering:

    Synergy, where the strength of the whole is greater than the sum of the strength of the parts, is highly desirable but not likely. We must avoid at all costs the all-too-common result where the system strength is less than the strength offered by the strongest component, and in some worst cases less than the weakest component present. Security is fragile under composition; in fact, secure composition of components is a major research area today.

    Good system security design today is an art, not a science. Nevertheless, there are good practitioners out there that can do it. For instance, some of your prior distinguished practitioners fit the bill.

    This area of "safe use of inadequate" is one of our hardest problems, but an area where I expect some of the greatest payoffs in the future and where I invite you to spend effort.

    I have written about Collaboration in Software Development (2, 3); too often, system design, which is essentially an exercise in tradeoff analysis, is dealt with through dualistic notions of "secure" and "trusted." In many cases the word security creates more harm than good. Specificity of the assurance goals at a granular level, from requirements to architecture, design, coding, and testing, is a big part of the way forward. Again, we do not need to wait for a magical security tool to do this; we can begin today.

    The last two areas discussed are third-party testing, which reinforces the integrity of separation of duties at design and construction time, and market/legal/regulatory constraints. These groups and forces can be powerful allies for architects wishing to create more assurance in their systems. One of the challenges for technical people when dealing with these audiences is translating geekspeak into something understandable by the wider audience. In particular, the ability to quantify risk is a very effective way to show what the impact could be for the enterprise. The Ponemon Institute has some excellent data on this, and more is coming all the time; quantifiable risk data says in black and white business terms what the business needs to deal with.
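
    Quantifying risk can start with the classic annualized loss expectancy arithmetic; the numbers below are invented, but the shape of the calculation is what translates geekspeak into business terms.

        def annualized_loss_expectancy(single_loss_expectancy: float,
                                       annual_rate_of_occurrence: float) -> float:
            """ALE = SLE x ARO: expected loss per incident times incidents per year."""
            return single_loss_expectancy * annual_rate_of_occurrence

        # Illustrative numbers only: a breach costing $250k that we expect
        # twice a decade, versus a $40k control that halves the rate.
        ale_before = annualized_loss_expectancy(250_000, 0.2)   # $50,000 / year
        ale_after = annualized_loss_expectancy(250_000, 0.1)    # $25,000 / year
        print(f"risk reduction worth ${ale_before - ale_after:,.0f} per year against a $40k control")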

    Last thought, Brian Snow:

    It is not adequate to have the techniques; we must use them!

    We have our work cut out for us; let's go do it.

    December 22, 2005 in Deperimeterization, Identity Services, Risk Management, Security, Security Metrics, SOA, Software Architecture, SOS, STS | Permalink | Comments (1)

    Splendid Isolation: Smart Card Security

    Was down in Austin yesterday, and got to spend some time with Kapil Sachdeva at AxAlto. Kapil has blogged about his work on PSTS, and how smart cards fit in federated identity. The meeting was good, but the best part was talking with Kapil about where things are going. Like I blogged earlier in The Road to Assurance, Brian Snow points out that one of the vastly underutilized elements for improving our collective security is hardware features. Smart cards have excellent utility as the "Island of security" that Brian advocates for. Like I blogged in another post about Thinking in Layers, many of the smartest people in the last 100 years have come to realize "A problem cannot be properly resolved at the same logical level that it was created." The smart card is so killer because it not only provides a level of indirection (with which we can solve anything, right?), but it also takes us out of the physical stack and all its dependent defense in depth layers. So when we think about increasing the security in a system, sometimes the most cost effective answers are not about cramming more stuff into the box we are trying to protect, but using composition of separate physical elements to achieve our aim. I believe this is part of what Brian Snow is after when he asks the question:

    How do we get high assurance in commercial gear?

    a) How can we trust or,
    b) If we cannot trust, how can we safely use, security gear of unknown quality?

    As Bob Blakley says: "Trust is for suckers", and this really needs to guide our design thinking. It is not about blithely assigning "trust" to some network, app, host, or physical element. It is about considering all of the options at all layers (Physical, Identity, Network, Host, App, Data) to compose a system where the controls can collectively achieve higher assurance.

    December 21, 2005 in Deperimeterization, Identity Services, Risk Management, Security, Smart Card, SOA, Software Architecture, SOS, STS | Permalink | Comments (0)


    SOS: Service Oriented Security

    • The Curious Case of API Security
    • Getting OWASP Top Ten Right with Dynamic Authorization
    • Top 10 API Security Considerations
    • Mobile AppSec Triathlon
    • Measure Your Margin of Safety
    • Top 10 Security Considerations for Internet of Things
    • Security Checklists
    • Cloud Security: The Federated Identity Factor
    • Dark Reading IAM
    • API Gateway Security
    • Directions in Incident Detection and Response
    • Security > 140
    • Open Group Security Architecture
    • Reference Monitor for the Internet of Things
    • Don't Trust. And Verify.

    Archives

    • November 2015
    • October 2015
    • September 2015
    • August 2015
    • July 2015
    • June 2015
    • May 2015
    • April 2015
    • March 2015
    • February 2015

    More...
