1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

9 Things Talk in Minnesota

I will be doing a talk called "9 Things Every Software Architect Should Know About Security" in Minnesota on August 6 for the Twin Cities Software Architect group: location and details on how to attend.


9 Things Every Software Architect Should Know About Security
The 3 biggest problems in security are lack of education, lack of education, and lack of education. We have a situation where systems are breached on a regular basis because developers do not know enough about security and security people do not know enough about software. Who is in the best position to resolve this? Software architects are supposed to be in charge of marshaling non-functional requirements like usability, performance, and *security* into the distribution builds, but too often security means "firewalls + SSL", and this clearly is not getting it done.

This session will look at 9 practical things every software architect can do to improve security in the software systems they are building.

July 17, 2009 in Security, Software Architecture | Permalink | Comments (0)

Richard Monson-Haefel on 9 Things Every Software Architect Should Know

Last night, I saw Richard Monson-Haefel talk on 9 things every software architect should know. The funniest line was on EJB: "I feel like I had a kid and he grew up and went on a crime spree." Richard's list of 9 things:

1. People are the platform (the UI is often the weakest link)

2. "All solutions are legacy" (My old partner used to say there is nothing more permanent than a temporary solution)

3. Data is forever (everything changes - new technology, new processes - but data is evergreen)

4. Flexibility breeds complexity (My security corollary: when you empower developers, you empower attackers)

5. Nothing works as expected (Finally, a security principle: design for failure)

6. Know the business 

7. Maintain the vision. This should be the CTO, but how can they do it? They are in too many meetings.

8. Software architects should also be coders

9. There is no substitute for experience

I think it's a good list; #5 is underemphasized, of course. Securing software often comes down to developers and security people, but this is not enough. Software people don't know enough about security, and security people don't know enough about software. So we need another layer to resolve this. Software architects are supposed to own the non-functional requirements like scalability and security, but too often security is just network firewalls and SSL. Software architects are in a good position to see the failure modes and to drive projects to fix vulnerabilities and establish countermeasures.

June 26, 2009 in Security, Software Architecture | Permalink | Comments (0)

Security Architecture Blueprint

The security industry has little strategic cohesiveness; instead, the market is an aggregation of vendors selling tactical, one-off point solutions. The problem is that security is of strategic concern to the enterprise, but the market does not reflect this. This security architecture blueprint shows one way to take a strategic approach to security in the enterprise.

[Security Architecture Roadmap diagram]

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property, it can be difficult for enterprise security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations and for organizing architecture and actions toward improving enterprise security.


This blueprint distills what I have learned on a number of enterprise security engagements into a generic framework. Enterprises know they have security problems today and will not be perfect tomorrow, next month, or next year -- so the question is how to define a pragmatic way forward.

I was fortunate to get great feedback on drafts from a number of folks including James McGovern, Jim Nelson, and Brian Snow.


May 01, 2007 in Assurance, Deperimeterization, Security, Security Architecture, Software Architecture | Permalink | Comments (3)

Of Claims and Coins (or For a Few Dollars More)


After providing some details in an earlier post on a question from James McGovern, I happened to take a gigantic jar of coins that I keep around (with all my loose change) into my local bank. What happened next further illuminates James's questions regarding claims versus roles.

I took the large jar of coins into my local bank, where they have a tall cylinder that lets you dump a bunch of coins in the top in exchange for the denomination you choose. You simply walk into the bank and dump the coins in. No authentication, no authorization, no roles. The bank audits access through monitoring, of course, but the access is not authenticated as such. The coin service is free if you are a bank customer, but they charge you about 9% (ouch!) if you are not.

So I dump the big jar of coins in. The machine does some basic validation: Canadian coins (aka the Northern Peso) stick to the side so that I can remove them.

The machine gurgles away for a while and then spits out a receipt. The bank (service provider) so far has had no involvement other than letting me in the door. The value of my jar of coins has been transferred onto the receipt (token) that the machine spat out. The token is timestamped and says the bank owes me $109 (if I am a bank customer) or about $100 (if I am not).

Now it is up to me to protect the token. I have to walk about 8 feet over to the teller. The bank up to this point has next to no risk in the transaction: their machine has the money, and if I lose the ticket/token, I am out $109 and they are up $109.

When I get to the teller, she asks for the ticket -- I could give her nothing else and still get $100! No role, no mapping of my identity to the coins or the token. If I want the extra 9 bucks (and I do), I need to show an ATM card (which, interestingly enough, they did not ask me to authenticate with a PIN). At this point, based on the fact that I am a bank customer, I get 9 bucks more. This $9 is the only difference in the scenario between an anonymous, no-identity-required transaction and an authorized customer transaction.

In no case was my identity mapped onto the coins; it was mapped onto the token for a very brief two-minute window. Being a bank customer could be described as a claim/assertion, a group or role membership, or simply an attribute. The service provider and PEP have a number of ways to mitigate their own liabilities, and ultimately the riskiest part of the transaction is the service requester losing the token after the value has been exchanged.

The coin exchange is a WS-Trust-style STS. The subject/object mappings are centered on the request, the value, and the token, not on mapping subjects directly to objects.
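The flow above can be sketched as a tiny token-exchange model. This is only an illustration of the analogy (the function names, the fee rate, and the dollar amounts are taken from the story, not from any real STS):

```python
import time

def coin_machine_exchange(coin_value):
    """The machine converts coins into a timestamped bearer token.
    No authentication: anyone holding the token can redeem it."""
    return {"value": coin_value, "issued_at": time.time()}

def teller_redeem(token, customer_card=None):
    """The teller honors the bearer token as-is; the only identity
    check is an optional customer claim, which waives the ~9% fee."""
    if customer_card is not None:
        return token["value"]               # customer claim: full value
    return round(token["value"] * 0.91, 2)  # anonymous: fee deducted

token = coin_machine_exchange(109.00)
print(teller_redeem(token, customer_card="atm-card"))  # 109.0
print(teller_redeem(token))                            # 99.19
```

Note that the token itself never carries the holder's identity; the customer claim only changes the outcome at redemption time, exactly as in the teller line.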

Let's go back to 1670 -- from Quicksilver:

The heat was too much. He was out in the street with Uncle Thomas, bathing in cool air. "They are still warm!" he exclaimed.

Uncle Thomas nodded.

"From the Mint?"

"Yes."

"You mean to tell me that the coins being stamped out at the Mint are, the very same night, melted down into bullion on Threadneedle Street?"

Daniel was noticing, now, that the chimney of Apthorp's shop, two doors up the street, was also smoking, and the same was true of diverse other goldsmiths up and down the length of Threadneedle.

Uncle Thomas raised his eyebrows piously.

"Where does it go then?" Daniel demanded.

"Only a Royal Society man would ask," said Sterling Waterhouse, who had slipped out to join them.

"What do you mean by that, brother?" Daniel asked.

Sterling was walking slowly towards him. Instead of stopping, he flung his arms out wide and collided with Daniel, embraced him and kissed him on the cheek. Not a trace of liquor on his breath. "No one knows where it goes--that is not the point. The point is that it goes--it moves--the movement ne'er stops--it is the blood in the veins of Commerce."

"But you must do something with the bullion--"

"We tender it to gentlemen who give us something in return," said Uncle Thomas. "It's like selling fish at Billingsgate--do the fish wives ask where the fish go?"

"It's generally known that silver percolates slowly eastwards, and stops in the Orient, in the vaults of the Great Mogul and the Emperor of China," Sterling said. "Along the way it might change hands hundreds of times. Does that answer your question?"

April 26, 2007 in Security, Software Architecture | Permalink | Comments (0)

Claims, Assertions, and Roles

James McGovern looks at Cardspace and asks:

It seems as if CardSpace can support claims-based security checks which is a little different than role-based security checks. Are there thoughts from the security community such as Gunnar Peterson on this?

Claims-based access control (CBAC... or ABAC, assertion-based access control, if you prefer) definitely differs from RBAC. A classic RBAC model maps subjects to roles to entitlements for certain objects. This sounds a lot easier than it is in practice. Many organizations look at the model and say, "Hey, I know that if you are a DBA then you are in this role and you should get this type of functionality, and if you are a salesperson you are in the sales role and get only these functions." The problem is that this mapping, while not always trivial, is by far the easier part. Mapping all the objects to entitlements and roles is very difficult in a distributed system.
[RBAC model diagram]

Basically you have all of the problems that the O/R mapping people have, and almost no tool support. As a side note, we know that O/R mapping is the Vietnam of computer science: keep sending troops and resources, but good luck solving the problem. RBAC as typically practiced in distributed systems quickly devolves into a game of distributed impedance mismatch. Roles can still play an important part in the overall access control model, but they are not generally sufficient alone.
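The classic subject-to-role-to-entitlement-to-object chain can be sketched in a few lines (a minimal illustration with invented names; the hard part in practice is not this logic but populating and maintaining these tables across every system):

```python
# Classic RBAC chain: subject -> role -> entitlement -> object.
SUBJECT_ROLES = {"alice": {"dba"}, "bob": {"sales"}}
ROLE_ENTITLEMENTS = {"dba": {"read", "write"}, "sales": {"read"}}
OBJECT_ACLS = {"customer_db": {"read", "write"}, "reports": {"read"}}

def rbac_allows(subject, action, obj):
    """Allow if some role of the subject grants the action,
    and the object's ACL recognizes that action at all."""
    entitlements = set()
    for role in SUBJECT_ROLES.get(subject, set()):
        entitlements |= ROLE_ENTITLEMENTS.get(role, set())
    return action in entitlements and action in OBJECT_ACLS.get(obj, set())

print(rbac_allows("alice", "write", "customer_db"))  # True
print(rbac_allows("bob", "write", "customer_db"))    # False
```

Every dictionary above has to be kept in sync across layers and systems; that synchronization burden is the distributed impedance mismatch described above.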

Enter CBAC/ABAC.

Instead of mapping subjects and objects end to end across multiple layers, we focus on the claim (as WS-* would have it) or assertion (as SAML would have it). In this space we are less concerned with the object-entitlement-role hierarchy and more concerned with the token(s) and the policy. The object (data, component, whatever) is obviously still in play, and it is protected by a Policy Enforcement Point (PEP). The PEP has the advantage of being able to form-fit to what it is protecting: if it is a Java object, it may be an interceptor pattern; if it is J2EE, it may be the container. In true loosely coupled fashion, you create a relationship between the PEP (with its own internal policies, rules, etc.) and the object, but you do not have to hard-wire the subject-role-entitlement-object chain together.

The PEP, then, is responsible for token validation, session validation, associating the object/rule/policy stack with the token and the session, and coming to some sort of go/no-go decision (PDP). The PEP may never see a group or role or anything of that nature (these may be used for convenience); instead it enforces the policy on the token and the policies for granting access to the object. From a service provider view, the PEP structure looks like this:

[Claims-based PEP diagram]

Internal to the PEP there is a set of workflow pipelines that fire the policy rules. It should be noted that these are not typically object-oriented designs in nature, but they can be hidden from the rest of the app in an interceptor pattern. I will explore some internal PEP workflows in a future post.
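One way to picture that internal pipeline is a chain of checks that each pass or fail independently. This is a hedged sketch, not any actual PEP product; the token format, the policy table, and all names here are invented for illustration:

```python
import time

def validate_token(request):
    # Stage 1: the token must exist and be unexpired.
    tok = request.get("token")
    return bool(tok) and tok.get("expires", 0) > time.time()

def validate_session(request):
    # Stage 2: a session must be established.
    return request.get("session_id") is not None

def policy_allows(request):
    # Stage 3: policy is attached to the protected object, not the subject.
    required = {"orders_service": "customer"}  # object -> required claim
    need = required.get(request["object"])
    return need is None or need in request["token"].get("claims", [])

PIPELINE = [validate_token, validate_session, policy_allows]

def pep_decide(request):
    """Run the interceptor pipeline; any failing stage means no-go."""
    return all(stage(request) for stage in PIPELINE)

req = {"object": "orders_service",
       "session_id": "s-1",
       "token": {"claims": ["customer"], "expires": time.time() + 60}}
print(pep_decide(req))  # True
```

Note that no stage ever consults a subject-to-role table; the decision is made entirely from the token, the session, and the object's own policy.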

So the net result is that the PEP is form-fit to that which it protects, *not* to the subject. This in turn allows us to form-fit the token to a variety of subjects and request types -- CardSpace being a good example, but there are lots of others: smart cards, OTP, OATH, and so on. Optimization can then occur on the PEP side while usability and strength improve on the subject's token side. The only thing binding these two is not binary runtimes but the request message, which points, again, to why message-level security is the sine qua non going forward in security.

Architectures open up security holes when they cannot use strong authentication to generate tokens, cannot protect the tokens in transit using crypto and signatures, and cannot ensure that the request and tokens are bound together correctly.
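The "bound together correctly" part usually means a signature computed over both the token and the request, so neither can be swapped out independently. A minimal HMAC sketch of the idea (key distribution and message canonicalization are elided; real deployments would use WS-Security/XML Signature rather than this toy framing):

```python
import hashlib
import hmac

def bind(key, token, body):
    """Sign token and request body together, so replacing either
    one invalidates the signature."""
    msg = token.encode() + b"." + body.encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, token, body, signature):
    return hmac.compare_digest(bind(key, token, body), signature)

key = b"shared-secret"  # assumed pre-shared for illustration
sig = bind(key, "token-abc", "transfer $10 to bob")
print(verify(key, "token-abc", "transfer $10 to bob", sig))  # True
print(verify(key, "token-abc", "transfer $10 to eve", sig))  # False
```

An attacker who captures a valid token still cannot attach it to a different request, which is exactly the hole the paragraph above warns about.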

Update: Read the thrilling conclusion in Of Claims and Coins (or For a Few Dollars More)

April 26, 2007 in Security, SOA, Software Architecture | Permalink | Comments (2)

This Just In: Number of Authentication Factors Not Important When Host is Hosed

You can have 20-factor authentication, but if your host security is already compromised, what have you solved? Not much. Two-factor authN is fighting the last war, when the fields were green, the host was secure, and all we needed was SSL, a network firewall, and a stiff upper lip. I previously blogged:

At the OWASP conference last spring, Andrew van der Stock described a large bank's design options for dealing with phishing. They wanted to go to two-factor authN. They were looking at USB dongles, but they assessed that 2/3 of their customers' PCs had some malware on them, so they did not want to put their "good stuff" (the dongle) into their customers' bad stuff. So instead they sent authZ codes via SMS to the customer's cellphone. "Perfect" security? No. A nice, incremental improvement over the current state? Ya sure, ya betcha.

The Man in the Browser (MITB) attack uses Web 2.0-style attack vectors, essentially a viral mashup. Finextra reports:

The bank says that its customers opened an email attachment that resulted in a virus being executed on their machines. This virus changed their browsers' behaviour so when they went to open the real ABN Amro online banking site, they were instead re-directed to a spoof site.

The customers then typed in their passwords, which the attacker in turn used to access the bank's real Web site. The customers' own transactions were passed along to the real site, so they didn't notice anything wrong right away, while the attacker simultaneously made their own fraudulent transactions using the bank's urgent payment feature.

ABN Amro has issued its customers with two-factor authentication tokens for several years. But the man-in-the-middle attack gets around this security measure by passing the ever-changing part of the password from the token to the bank along with the never-changing part - essentially piggybacking on a legitimate log-in.

Financial Cryptography blogs additional commentary on MITB. The issue is that 2-factor authN is great if you are authenticating to a secure machine or channel, but it is only a speed bump if the box or channel is already 0wned. That war is over and lost. It is a head scratcher why RESTians, Web 2.0 folks, Ajaxians, et al. blithely assume that these wonderful new technologies (and I mean that, babe) do not require any new security mechanisms.

So, everyone update your threat models. I used to do an exercise when I was doing security architecture with clients. We would walk through their security architecture, describe the rationale for each component and service, and so on. Then I would say, "Imagine the firewall is not there; how does your security function now?" Etc., etc. Now we can add one more to the list: imagine a malicious agent inside your customer's browser.

**************************************************

Upcoming public SOA, Web Services, and XML Security training by Gunnar Peterson, Arctec Group
--- NYC (April 19), Unatek Web Services (May), OWASP App Sec Europe (May), Helsinki (June), DC/Baltimore (July 19).

April 03, 2007 in Security, Software Architecture | Permalink | Comments (8)

Johan Peeters on Dreams and Nightmares

Johan Peeters' Thinking Aloud blogs on the unforeseen challenges of security in the SDL, specifically focusing on the challenges of dealing with security in an Agile approach:

The question is not whether and when to let economics guide planning as opposed to technical considerations. In the end, economics always win. The problem, in my view, is that we tend to see the value of a new feature, but not its cost. By cost, I do not mean the effort we need to invest into implementing the feature, but rather the cost of the nightmare scenarios that may execute as the system offers some new functionality.

This is an issue that has stymied many a security mechanism. One way to look at it is to develop Misuse Cases, which show the system from an attacker's point of view, in parallel with user story development, which shows features and functionality. The advantage of a little extra effort spent in design becomes clear during future iterations and operations.

Like functional requirements, non-functional requirements deserve to be revisited at each iteration. It may be comforting to think that, if you get it wrong, you get a second crack at the whip. On the other hand, you are never really done since new non-functional requirements may emerge throughout the duration of the project and old requirements that were initially deemed of secondary importance may take on an increased significance.

How many times have you seen authentication and authorization mechanisms that are weak, broken, or don't reflect a current threat model (hi, MQ Series!)? But by the time the system is in production, or just close to go-live, it is too hard/too late to rip out all the authN and authZ, because these require a full system test cycle, and so on. The examples from Johan Peeters and Paul Dyson's workshop are focused on Agile, but these issues apply across all software development methodologies.


April 03, 2007 in Agile, SDLC, Security, Software Architecture, Use Cases | Permalink | Comments (0)

Fortify finds Web 2.0 security hosed - "An application can be mashup-friendly or it can be secure, but it cannot be both"

Fortify released a new paper on Javascript hijacking. The paper looks at 12 Ajax frameworks, including Google's GWT, Microsoft Atlas, Yahoo! UI, and a number of open source projects, to see which are vulnerable to Javascript hijacking, described as the attacker's ability to read confidential data in Javascript messages. The vulnerability testing uses JSON, which the authors state makes Javascript hijacking easier for the attacker to exploit, because JSON arrays are standalone Javascript statements. From the paper (emphasis added):

When the JSON array arrives on the client, it will be evaluated in the context of the malicious page. In order to witness the evaluation of the JSON, the malicious page has redefined the JavaScript function used to create new objects. In this way, the malicious code has inserted a hook that allows it to get access to the creation of each object and transmit the object's contents back to the malicious site. Other attacks might override the default constructor for arrays instead (Grossman’s GMail exploit took this approach.)

Applications that are built to be used in a mashup sometimes invoke a callback function at the end of each JavaScript message. The callback function is meant to be defined by another application in the mashup. A callback function makes a JavaScript Hijacking attack a trivial affair—all the attacker has to do is define the function. An application can be mashup-friendly or it can be secure, but it cannot be both.

If the user is not logged into the vulnerable site, the attacker can compensate by asking the user to log in and then displaying the legitimate login page for the application. This is not a phishing attack—the attacker does not gain access to the user's credentials—so anti-phishing countermeasures will not be able to defeat the attack.

More complex attacks could make a series of requests to the application by using JavaScript to dynamically generate script tags. This same technique is sometimes used to create application mashups. The only difference is that, in this mashup scenario, one of the applications involved is malicious.

The paper helpfully concludes with a look at various Ajax toolkits' ability to deal with this type of attack: Dojo, DWR 1.1.4, GWT, jQuery, Atlas, MochiKit, Moo.fx, Prototype, and Yahoo! UI are all vulnerable. DWR 2.0 has some protections against CSRF that may mitigate Javascript hijacking, the authors find.
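The commonly described mitigation for this class of attack is to make the JSON response unexecutable when included via a bare script tag, for example by prefixing it with an infinite loop that the legitimate client strips before parsing. A server-side sketch of that idea (the exact prefix convention varies by framework; this is an illustration, not the Fortify paper's code):

```python
import json

PREFIX = "while(1);"  # hangs a <script src=...> inclusion before any data is read

def serve_json(data):
    """Server: prepend the unexecutable prefix so that evaluating the
    response as Javascript never reaches the JSON array."""
    return PREFIX + json.dumps(data)

def client_parse(payload):
    """Legitimate client (e.g. via XMLHttpRequest): strip the known
    prefix and parse the remainder as data, never as code."""
    if not payload.startswith(PREFIX):
        raise ValueError("unexpected response framing")
    return json.loads(payload[len(PREFIX):])

payload = serve_json([{"account": "12345", "balance": 100}])
print(client_parse(payload)[0]["balance"])  # 100
```

The asymmetry is the point: the attacker's page can only *execute* the response, while the legitimate client can *read and transform* it before parsing.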

The Javascript hijacking attack is also noted in this Subverting Ajax presentation from last year's CCC.

As I previously blogged, I think the overarching theme is that Web 2.0 brings new ways to integrate apps and data together, but it brings no new security mechanisms, so it guarantees security issues such as the above.


April 02, 2007 in Ajax, JSON, Security, Software Architecture, Web 2.0 | Permalink | Comments (0)

Openly IDentify your attributes with Open ID

As developers are inexorably drawn down the rabbit hole that is identity, they continually conflate authentication, authorization, and attributes into one big "identity" jumble. Lately there has been a lot about OpenID, which, as com.burtongroup.analyst.identerati.neuenschwander.mike (known unambiguously around the identity water cooler as "Mike Neuenschwander") points out, is about attributes:

But don’t get me wrong: OpenID isn’t the problem here. OpenID simply calls into sharp focus something I’ve believed for years. It’s a kind of axiom, so I’d like to give it a name. I’ll call it, “identifiers.axiom.neunmike’s.axiomproxy.info”—that way you can easily refer to it unambiguously from anywhere. Here it is:

    There are no identifiers, only attributes

Names are slippery. Most people have many more than one legal name, none of which are unique. They also have several dozen nicknames. There’s no practical way to get any of these every-day-use names onto a global namespace. And what’s a name after all but a synthetic attribute—a foreign key that we hope the receiving party stores somewhere so we can remember them later? Names are invaluable communication aids, but they have little to do with recognition, which is what’s at issue in most identity management contexts. Biologically, creatures don’t recognize others based on names but rather the confluence of attributes appearing within a certain context.

Lao Tzu (who goes by several dozen names) had a pretty good post on this idea over 2000 years ago. In a section called “Ineffability,” he writes:

The Way that can be told of is not an Unvarying Way;
    The names that can be named are not unvarying names.
    It was from the Nameless that Heaven and Earth sprang;
    The named is but the mother that rears the ten thousand creatures, each after its kind. (chap. 1,  tr. Waley)

I understand why from a programmer’s perspective, it would be so much more convenient if everybody could simply have one globally unique, unambiguous, resolvable name. But such a quaint design constitutes a wanton disregard for reality.

The tech industry is adolescently ID-fixated. But I’ve had it to here with IDs! Would somebody please start seeing my avatars as something more than identification objects? So here’s to being an OpenAttribute power user!

Leave it to com.burtongroup.analyst.identerati.neuenschwander.mike to provide clarity on a much misunderstood issue.

org.tbray.ongoing.proprietor.bray.tim (aka com.sun.softwaredivision.canada.ubergeeks.bray.tim) kicked the tires on OpenID and came to a similar conclusion:

What Could I Use It For Today? · Here’s what I think I’d be willing to do: in the commenting system here at ongoing, I’d be inclined, if I had manually approved a comment with someone who’d authenticated via OpenID, to subsequently accept further comments from that OpenID, unmoderated. ¶

I can’t think of anything else.

The real work for developers is not about assigning an uber-GUID to people; it is about unmarshaling attribute values into a PEP and putting them into a context that makes them meaningful. As Butler Lampson says, "all trust is local." Again, I mention Bob Blakley's work on Security Design Patterns, which describes several ways to do this in a distributed system.
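In code terms, "all trust is local" means the PEP never resolves a global identifier; it evaluates whatever attributes arrive against its own local policy, and two PEPs can reach different conclusions from the same attributes. A hypothetical sketch (all attribute and policy names invented):

```python
def local_decision(attributes, local_policy):
    """No global ID: the PEP recognizes a requester by the
    confluence of attributes, interpreted in its local context."""
    return all(attributes.get(k) == v for k, v in local_policy.items())

# Two PEPs interpret the same attribute set differently.
blog_policy = {"previously_approved_commenter": True}   # Tim Bray's use case
bank_policy = {"customer": True, "mfa": True}           # a stricter context

attrs = {"previously_approved_commenter": True, "customer": True}
print(local_decision(attrs, blog_policy))  # True
print(local_decision(attrs, bank_policy))  # False
```

Same requester, same attributes, opposite outcomes; the meaning lives in the local policy, not in any identifier.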

Hunter S. Thompson said, "Buy the ticket, take the ride." But don't conflate the ticket and the ride.

March 14, 2007 in Identity, OpenID, Security, Software Architecture | Permalink | Comments (0)

On the road to delegation - learning from QMail

Kim Cameron has been en fuego recently on one of my favorite topics: how to propagate security tokens in a distributed system. On wrong-headed impersonation:

I'm going to make a categorical statement. No one and no service should ever act in a person's identity or employ their credentials when they're not present. Ever.

How would this work in CardSpace? Kim continues:

CardSpace is built on this principle.  A delegated authority coupon is just a set of claims.  CardSpace facilitates the exchange of any claims you want - including delegation claims.  So using CardSpace, someone can build a system allowing users to delegate authority to a service which can then operate - as itself - presenting delegation tokens whenever appropriate.

This is, of course, one of the major advantages of the WS-Security and WS-Trust approach instead of bundling everything into a single context. Kim also expands on this issue of separation:

CardSpace is built on top of WS-Trust, though it also supports simple HTTP posts.

One of the main advantages of WS-Trust is that it allows multiple security tokens to be stapled together and exchanged for other tokens. 

There are use cases where impersonation, delegation, federation, and other approaches are valid; the question is which framework gives you the best flexibility to handle a wide variety of these use cases. (Note: the best overview of approaches I have seen in the open literature is Security Design Patterns.)

One of the best places to look for guidance is widely distributed systems that are in production with a good security track record. REST people frequently point to SSL, which is fine for point-to-point confidentiality, but we also know REST struggles to deal with authentication at all, much less with stapling together multiple claims. So a better example for 2007 is QMail. One of the things that makes QMail's security model more robust is that there is a "clear separation between addresses, files, and programs." In web services I tend to look at this as separating identity, service, message, transactional, and deployment concerns. What WS-Security and WS-Trust give you is the ability to build this separation into your apps and messages instead of conflating everything into one token or request. A checkpoint process in QMail should not depend on the exact same security model for the request and the content; the security checks that QMail performs against the address are not subject to the same concerns as the checks against the body. This also reflects a design goal of QMail, which separates functions into mutually untrusting programs. In the web services world, we may have an XML security gateway perform some functions, such as auditing and service authentication, but process confidential data farther on down the line. So CardSpace, WS-Security, and WS-Trust all help to build such an architecture.
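The separation described above can be sketched as mutually untrusting stages, each checking only its own concern and passing the rest along untouched (illustrative names; this is not QMail's actual design or code):

```python
def gateway_stage(message):
    """Gateway concern: audit and authenticate the sender.
    It never inspects the confidential body."""
    if message["sender"] not in {"svc-a", "svc-b"}:  # hypothetical allow-list
        raise PermissionError("unknown sender")
    return {"audited": True, **message}

def business_stage(message):
    """Downstream concern: process the body, re-checking the fields
    it cares about rather than trusting the gateway's verdict."""
    if not isinstance(message["body"], dict):
        raise ValueError("malformed body")
    return message["body"].get("amount", 0)

msg = {"sender": "svc-a", "body": {"amount": 42}}
print(business_stage(gateway_stage(msg)))  # 42
```

Because each stage validates only its own inputs, a compromise of one stage does not automatically grant the attacker the checks performed by the others, which is the property the QMail comparison is after.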

March 08, 2007 in Security, SOA, Software Architecture | Permalink | Comments (0)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015

