1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

Future is Federation

Good post on Federation (of which globalization is the chief enabler) by Tom Barnett:

The basic lesson I learned with my Packers is the same one cited here: "Technology has given individuals, microcultures and larger groups the means to survive."

The homogenizing/particularizing dichotomy of globalization on culture is not dissimilar to the impacts found in politics: the breaking down of fake states into constituent parts while simultaneously encouraging regional integration.

To me, this is the quintessential structure dynamic of globalization: more freedom below while more integration above. People faint over the notion of a world state, but that misreads the trends completely. We are headed for more forms of federation, not unification. Rather than have some global ID stamped on your forehead, Orwell-style, you're far more likely to have multiple passports.

Of course, that future is already here; we now need to distribute the claims-based security protocols to support it, as described in the Laws of Identity:

The use of the word claim is therefore more appropriate in a distributed and federated environment than alternate words such as “assertion,” which means “a confident and forceful statement of fact or belief.” (OED) In evolving from a closed domain model to an open, federated model, the situation is transformed into one where the party making an assertion and the party evaluating it may have a complex and even ambivalent relationship.
The world is already federated; it's the computers that need to catch up, specifically the security protocols. CBAC gets us out of the world of point-to-point security and into a world where the security protocols reflect the assets, which are composed of diverse resources. You can look at the evolution of security protocols on the web (OAuth) and in the enterprise (SAML) that deliver on exactly this need.
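
To make the claims idea concrete, here is a minimal sketch with hypothetical names, using a shared demo key as a stand-in for the PKI-signed SAML assertions or JWS tokens a real federation would use. The point is the shape of the protocol: the issuer packages claims it is willing to stand behind, and the relying party verifies provenance and then decides locally how much weight to give them.

    import hashlib
    import hmac
    import json
    import time

    # Assumed shared key for the sketch; real federations sign with PKI, not a shared secret.
    ISSUER_KEY = b"demo-key-not-for-production"

    def issue_claims(issuer, subject, claims, ttl=300):
        """Issuer packages a set of claims it is willing to stand behind."""
        token = {"iss": issuer, "sub": subject, "exp": time.time() + ttl, "claims": claims}
        payload = json.dumps(token, sort_keys=True).encode()
        token["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return token

    def evaluate_claims(token):
        """Relying party checks provenance and freshness, then evaluates the claims."""
        token = dict(token)
        sig = token.pop("sig")
        payload = json.dumps(token, sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("claims not traceable to the issuer")
        if time.time() > token["exp"]:
            raise ValueError("claims expired")
        return token["claims"]  # evaluated, not blindly accepted

    token = issue_claims("https://idp.example.com", "alice", {"role": "trader", "region": "EU"})
    print(evaluate_claims(token))

Note the asymmetry the Laws of Identity quote calls for: nothing in evaluate_claims assumes the issuer and the relying party share a domain, only that the relying party can check who said what.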

April 25, 2010 in Federation, Security | Permalink | Comments (1) | TrackBack (0)

Killer Identity App Unleashed in Cloud

Remember the 90s, when everyone was in search of the next killer app? Whatever happened to that? I guess it got replaced with lots of little microservices, all mashed up together. Well, anyhow, I just found one: PingConnect is a game changer - sign on to Google Apps, Salesforce, Cisco WebEx, Rearden, and a number of other SaaS vendors. Check out the demo video and the data sheet. Given this level of integration, PingConnect lets us realize the promise of SAML and enables companies to get the benefits of SaaS and cloud.

[Diagram: PingConnect product integration with SaaS providers]

I quite like Hoff and CSO Andy's A6 model, which proposes a Security API for the cloud. One of the things I like about it is that the A6 model has identity as a first class citizen for security architecture in the cloud. This avoids a trap that many security and application models fall into: thinking that identity can be dealt with later, or, even worse, that our current username/password super soaker model (accounts/identities/secrets sprayed everywhere) can be reused yet again. The above diagram would feature a minimum of eight accounts and eight passwords, and likely many more to accomplish routine tasks. Does this seem like a scalable model to you? Does this seem like a good model to foster going forward?

To me, it seems like one that's more than ready to consign to the dustbins of history. Hey kids, gather around and let Grandpa tell you about the days when he had to sign on to eight systems with eight different passwords...

If we are going to get the benefits of the cloud, we need to think differently about identity. Before you think you have a cloud, you had better think about how you are mediating subjects and objects; and even more than thinking, we need tools that enable it to work. Assertions are only good if they can be simply generated, communicated, and understood. This is integration engineering. PingConnect is a concrete step in that direction and has all the markings of the next killer app.

July 28, 2009 in Cloud, Federation, Identity, Security | Permalink | Comments (0)

GloboFederation Wisdom from the Twitterverse

This is too good to leave in the twitterverse, from Paul Madsen, aka @paulmadsen:


Globalization- going into a store & most of the stock came from elsewhere 


Federation- going to a site & most data came from elsewhere


In both cases it's identity that makes the linkage. As Chris Ceppi told me one time, The World is Flat explains federation better than anything else. Going the next logical step, it's your identity that tells you the right balance between Lexuses and Olive Trees.

May 26, 2009 in Federation, Security | Permalink | Comments (0)

Federated Guanxi

One of the underexplored areas in Service Oriented Security is what types of federated relationships are valuable, and what new composite identity architectures emerge from these connections. In my view, the main weakness of security architectures is their limited scope and lack of flexibility. Most software is built using composition, but most security protocols do not compose, and certainly most lack the ability to deal with multiple namespaces, domains, and symmetric/asymmetric relationships - at least until WS-Security, SAML and friends came along. Further, PKI and X.509 are fine, but most of the data you need to make assertions and authorization decisions lives deep inside a directory or database, not in a key store. So we need to be able to bring together multiple elements in security architecture.


Consider the Chinese concept of guanxi:

The Chinese web is notable for a large number of mutually linking web sites. We hypothesize that this is in part a manifestation of a social construct known as guanxi, which can be widely observed in Chinese culture. Guanxi has been described as “an informal … personal connection between two individuals who are bounded by an implicit psychological contract to [maintain] a long term relationship, mutual commitment, loyalty and obligation.” Dyadic relationships are the fundamental units of guanxi networks. To establish guanxi, two parties must first establish a guanxi base: a tie between two individuals, e.g., same birthplace, same workplace, same family, close friendship. Also, two individuals can claim to have guanxi by acquaintance through a third party with whom they both have guanxi. Once a guanxi base is formed, guanxi can be developed through the exchange of resources ranging from moral support and friendship to favors and material goods.


Sure sounds like federation to me: a tie between two parties, but not just a simple key exchange a la PKI - the relationship in federation is deeper, agreeing on schema, types, values, and resources (or at least resource identifiers).

56minus1 goes on to look at strong and cheap guanxi based on how nodes are linked and who they are linked to. My guess is that something like this is the logical next step for SAML, CBAC, and other web facing identity protocols to help service providers distinguish the guanxi and better evaluate identity claims.

December 16, 2008 in Federation, Security | Permalink | Comments (0)

SSO Summit Day One Morning Session

I am at the SSO Summit, high in the Colorado mountains (9,200 feet elevation, to be exact); the I-70 West sign is one of my favorite road signs. Ping Identity has done a great job putting this together. It is the perfect size, around 125 people. Most of the best conferences I have been to have been around 60-150 people. There are a *lot* of enterprises involved here.

John Haggard, who has an extensive background in SSO and lately is at Passfaces, kicked off the sessions with an SSO history talk, going through a lot of mainframe-centric SSO protocols from the 80s and 90s. I am no expert in these areas, and it was fascinating to see the way things vacillated between strength and weakness in SSO protocols.

A couple of points from the presentation:

The history of SSO is a story of extreme complexities, compromises, vulnerabilities and unintended consequences.


SSO is a story of one simple objective - to spin off units of computation work to execute on behalf of an authenticated user without requiring the original user's password.


Phishing has always been completely avoidable


He went through the various incarnations of mainframe SSO, from logon IDs through things like ACF2, VTAM session managers, terminal emulators, and multiplatform access, to web access through facades. The implication he drew from this last step is well worth repeating: "Time to rethink everything." The problem is, of course, that people don't rethink; they put MQ Series in front of the mainframe, hook a web app in front of that, and go.

Finally, he connected some interesting dots to SAML and SOA security issues. 

SSO without strong auth is and always will be simply nuts


SAML gets it right

His points around common weaknesses in integration in SOA and Web 2.0 technologies for companies that are *not* using SAML were excellent. Of course, I will go into some more details on this tomorrow.

Ping's CTO Patrick Harding took the stage and gave an overview of the next generation of SSO options, from Kerberos to the present, and, as is his wont, demonstrated various real world strengths and weaknesses, quoting a Gartner analyst (shock!) saying OpenID is the hare and CardSpace is the tortoise. Nice.

Andrew Cameron from GM is speaking now on GM's experiences implementing SSO, and there are a lot of real world lessons learned in his presentation. Plus my favorite identity architecture: users have Kerberos, services speak SAML. Very nice, very scalable. All in all, it's my starting point for how to do identity in an enterprise. He also spoke about a pet peeve of mine - how to globalize authorization. This is not a problem that vendors have historically attacked with relish. They are very happy to help you solve authentication, but they are perfectly happy to keep their authorization internal, either for vendor lock-in reasons and/or because of sloppy authorization design. This will take a Liberty-esque consortium of enterprises to resolve.
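
A minimal sketch of that Kerberos-in, SAML-out pattern, with hypothetical names (the real thing is a security token service speaking actual Kerberos and SAML on the wire; this only shows the translation step):

    import time
    from dataclasses import dataclass

    @dataclass
    class KerberosContext:
        """What the desktop/AD world hands us after the user authenticates."""
        principal: str  # e.g. "alice@CORP.EXAMPLE.COM"
        realm: str

    @dataclass
    class SamlStyleAssertion:
        """What the services expect to consume."""
        subject: str
        issuer: str
        attributes: dict
        not_on_or_after: float

    def sts_exchange(ctx, directory):
        """Hypothetical STS: authenticate with Kerberos once, assert with SAML everywhere."""
        user, _, _ = ctx.principal.partition("@")
        return SamlStyleAssertion(
            subject=user,
            issuer="https://sts.corp.example.com",
            attributes=directory.get(user, {}),  # roles and attributes come from the directory
            not_on_or_after=time.time() + 300,
        )

    directory = {"alice": {"role": "engineer", "dept": "platform"}}
    assertion = sts_exchange(KerberosContext("alice@CORP.EXAMPLE.COM", "CORP"), directory)
    print(assertion.subject, assertion.attributes)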

So many conferences are dominated by vendors and consultants who conspire in what I call the "sacred church of things YOU should be doing." Instead, this conference brings together a great mix of real world, in-the-trenches practitioners who have problems to solve today, with rubber-meets-the-road deployable solutions and an eye towards longer term strategy for SSO and identity.

July 24, 2008 in Federation, Security | Permalink | Comments (1)

Security Services Deployment in Federated World

It's easy to get hung up on security protocol design, finding the right fit for your architecture, and so on. It's really hard to find security mechanisms that do something useful and scale as well. In my opinion, this is the single biggest issue we face in security today - how to find useful things that solve security problems and can scale in real world systems.


To that end, my favorite paper in a long time was written by Patrick Harding, Leif Johansson, and Nate Klingenstein called "Dynamic Security Assertion Markup Language: Simplifying Single Sign-On." They begin by asking questions companies face when entering into a federation:

How should trust between providers be managed? 

How should information about providers (metadata) be provisioned? 

Which SAML profiles and bindings should be used? 

Which messages and what part of each message should be signed? 

Which identifiers and attributes should be exchanged? 

What are the semantics of those attributes and identifiers?

Organizations understand the benefits of SSO and federation, but don't always answer the above questions in a way that sets them up to scale. Harding, et al.'s work describes a way to partition the architecture into separate layers - a metadata trust fabric, a metadata publishing fabric, and a metadata validation and signing fabric - enabling dynamic federation. If you design stuff for large scale distributed systems, this is a really big deal. No wonder these guys win all the good awards.
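
A sketch of what the layering buys you at runtime, with hypothetical endpoints: a service provider pulls a partner IdP's metadata, anchors it in the trust fabric before using it, and only then extracts the SSO endpoint. Real deployments validate an XML-DSig signature over the metadata; the pinned digest below is a stand-in for that.

    import hashlib
    import urllib.request
    import xml.etree.ElementTree as ET

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

    # Trust fabric: digests pinned out-of-band, per entityID (assumed values).
    TRUSTED_DIGESTS = {"https://idp.partner.example": "<pinned sha256 hex>"}

    def load_partner_sso_endpoint(url, entity_id):
        """Fetch partner metadata, anchor it in the trust fabric, extract the SSO URL."""
        raw = urllib.request.urlopen(url, timeout=10).read()
        # Validation fabric: production code verifies the XML-DSig signature here.
        if hashlib.sha256(raw).hexdigest() != TRUSTED_DIGESTS.get(entity_id):
            raise ValueError("metadata not anchored in trust fabric")
        root = ET.fromstring(raw)
        if root.get("entityID") != entity_id:
            raise ValueError("entityID mismatch")
        sso = root.find(f".//{{{MD_NS}}}IDPSSODescriptor/{{{MD_NS}}}SingleSignOnService")
        return sso.get("Location")

Because the trust, publishing, and validation concerns are separated, adding the fifty-first partner becomes a metadata update rather than a bespoke integration project.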

I have often used federation as an example of security as a business enabler - improving the brakes, ABS, and airbags so we can drive fast and safe. Federation creates value for the business by linking it more deeply with its customers, partners, and business users; it solves some security architecture problems through encrypted and signed tokens; and it gives the user SSO/SLO. You don't always get a win like this in security architecture, but when you find one it's good to ride it for all it's worth.

June 25, 2008 in Federation, Identity, Security | Permalink | Comments (0)

Authorization: Standards and Granularity

James McGovern, whose questions in blog comments usually take me a whole additional blog post to answer appropriately, responding to an earlier post on security for Integrated Transactions, asks:

Would you agree that enterprises need to go beyond just building better authentication mechanisms such as support for SAML and go deeper in terms of authorization? What would it take to get security folks to also add XACML to their list of frequently mentioned acronyms...

I definitely agree with this. Enterprises need to look at authentication, and also authorization and auditing as well. Plus, they need to realize that these security landscapes have fundamentally changed two or three times in the past decades; OO, web apps, and SOA/Web Services all require different types of security mechanisms, but many enterprises try in vain to find one silver bullet.

So it is definitely not just about authentication. The larger point, I think (and the one that SAML, WS-Security, WS-Trust, and XACML all, to varying degrees, help you solve), is how you get interoperability for security tokens at runtime. A transaction can span dozens of namespaces, technical runtimes, governance and policy zones, and so on. So job one is to be able to move the tokens and recognize them on the other end (vetting them in the process).

When it comes to authorization, I agree with Butler Lampson that "all trust is local". The aforementioned standards should be able to port the tokens (subjects) used for authorization across domains, so that enforcement (which may be authorization) can be done at the endpoint, mediating the subject's access to the object. Part of this is coarse grained versus fine grained authorization.

Now there are some different nuances to how WS-* vs. SAML/XACML accomplish this. The WS-* format is more or less neutral - a claim by any other name is still a claim - whereas SAML differentiates authentication, attribute, and authorization authorities. But I don't see this as a big deal, because the endpoint can still enforce how it wants. My take is that SAML/XACML's approach could allow tool vendors to support (and optimize) their assertions at a more granular level by decomposing authentication, authorization, and attributes. This relates to a point discussed by James and Mark O'Neill way back in 2006.

To James' other point, XACML does some things fundamentally differently in describing policy for resources, which makes it really interesting, especially for high value resources. Earlier I made the point that for cases where you need higher assurance, the policy is mapped down to the object level:

The XACML mapping of the subject to authorized resources, conditions, and actions allows for the security policy to be applied to the subject as well as the representation of the object. One example of this in the real world is at railroad crossings: normal cars pass straight over the tracks as long as the gate is open and the light is not flashing. School buses, because of their cargo, stop and perform additional visual checks before proceeding.

The parents of the kids drive their cars right over the railroad tracks without stopping, and other buses (containing presumably less precious cargo) sail right over as well, but school buses stop and check - resource, condition, action.

So XACML has great utility for things like this; it would be great if there were more momentum in the tool space, so that we could see XACML built into as many stacks as WS-* and SAML already are. On the plus side, all of the aforementioned standards work at the message level, so we are already moving in a more secure direction than relying on network firewalls, SSL, and a prayer.
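
To make the resource/condition/action point concrete, here is a toy policy decision point in the spirit of XACML (a sketch, not XACML syntax): the decision keys off subject attributes, the resource, and a condition, and a Permit can carry obligations - which is exactly the school bus at the railroad crossing.

    def decide(subject, resource, action, condition):
        """Toy PDP: returns a decision plus any obligations."""
        if resource == "railroad-crossing" and action == "cross":
            if condition["gate_open"] and not condition["light_flashing"]:
                if subject.get("cargo") == "children":
                    # Permit, but with obligations attached to the decision.
                    return ("Permit", ["stop", "visual-check"])
                return ("Permit", [])
            return ("Deny", [])
        return ("NotApplicable", [])

    crossing = {"gate_open": True, "light_flashing": False}
    print(decide({"vehicle": "car"}, "railroad-crossing", "cross", crossing))
    # ('Permit', [])
    print(decide({"vehicle": "bus", "cargo": "children"}, "railroad-crossing", "cross", crossing))
    # ('Permit', ['stop', 'visual-check'])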

January 10, 2007 in Deperimeterization, Federation, Identity, Security, SOA, Software Architecture | Permalink | Comments (0)

O, Web 2.0 App Security, where art thou?

How about Web 2.1?...and this time, can we upgrade the security along with the rest of the stack? Where *do* I download the security patches for Web 2.0?

So instead of revving Web 1.0, but leaving the "security" model with the same old, same old network firewalls and SSL, how about we rev the security model too?

Tim O'Reilly points to a comparison of the evolution from Web 1.0 to Web 2.0, and it is pretty cool...but where is security?

[Image: Web 1.0 --> Web 2.0 comparison chart]

My take?

Security as someone else's job --> Build Security In

Writing your own identity/app layer --> Using a standards based framework

Assuming a benign environment --> Designing/building with assurance in mind

Outside the firewall/DMZ/inside the firewall pattern --> Deperimeterization

I could go on.

What's cool about SAML based federation is that you get browser based SSO (it just works with the browsers, so the Web 2.0 crowd should love that), and you get SSO that spans companies and technologies. Oh, and the credentials are protected.

Why does this matter? Well, attackers have evolved a lot since 1.0, so rolling out the same old 1997 security model doesn't cut it any more.

Brian Krebs: "Cyber Crime Hits the Big Time in 2006: Experts Say 2007 Will Be Even More Treacherous"

Call it the "year of computing dangerously."

Computer security experts say 2006 saw an unprecedented spike in junk e-mail and sophisticated online attacks from increasingly organized cyber crooks. These attacks were made possible, in part, by a huge increase in the number of security holes identified in widely used software products.

Few Internet security watchers believe 2007 will be any brighter for the millions of fraud-weary consumers already struggling to stay abreast of new computer security threats and avoiding clever scams when banking, shopping or just surfing online.

One of the best measures of the rise in cyber crime this year is spam. More than 90 percent of all e-mail sent online in October was unsolicited junk mail messages, according to Postini, a San Carlos, Calif.-based e-mail security firm. The volume of spam shot up 60 percent in the past two months alone as spammers began embedding their messages in images to evade junk e-mail filters that search for particular words and phrases.

As a result, network administrators are not only having to deal with considerably more junk mail, but the image-laden messages also require roughly three times more storage space and Internet bandwidth for companies to process than text-based e-mail, said Daniel Druker, Postini's vice president of marketing.

"We're getting an unprecedented amount of calls from people whose e-mail systems are melting down under this onslaught," Druker said.

Spam volumes are often viewed as a barometer for the relative security of the Internet community at large, in part because most spam is relayed via "bots," a term used to describe home computers that online criminals have compromised surreptitiously with a computer virus or worm. The more compromised computers that the bad guys control and link together in networks, or "botnets," the greater volume of spam they can blast onto the Internet.

At any given time, there are between three and four million bots active on the Internet, according to Gadi Evron, a botnet expert who managed Internet security for the Israeli government before joining Beyond Security, an Israeli firm that consults with companies on security. And that estimate only counts spam bots. Evron said there are millions of other bots that are typically used to launch "distributed denial-of-service" attacks -- online shakedowns wherein attackers overwhelm Web sites with useless data if the targets refuse to pay protection money.

"Botnets have become the moving force behind organized crime online, with a low-risk, high-profit calculation," Evron said. He estimated that organized criminals would earn about $2 billion this year through phishing scams, which involve the use of spam and fake Web sites to trick computer users into disclosing financial and other personal data. Criminals also seed bots with programs that can record and steal usernames and passwords from compromised computers.

When 90% of email traffic is spam, you cannot assume that the environment is benign. What will it look like 5 years from now?

These past 12 months brought a steep increase in the number of software security vulnerabilities discovered by researchers and actively exploited by criminals. The world's largest software maker, Microsoft Corp., this year issued software updates to fix 97 security holes that the company assigned its most dire "critical" label, meaning hackers could use them to break into vulnerable machines without any action on the part of the user.

In contrast, Microsoft shipped just 37 critical updates in 2005. Fourteen of this year's critical flaws were known as "zero day" threats, meaning Microsoft first learned about the security holes only after criminals had already begun using them for financial gain.

This year began with a zero-day hole in Microsoft's Internet Explorer, the browser of choice for roughly 80 percent of the world's online population. Criminals were able to exploit the flaw to install keystroke-recording and password-stealing software on millions of computers running Windows software.

At least 11 of those zero-day vulnerabilities were in Microsoft's Office productivity software suites, flaws that bad guys mainly used in targeted attacks against corporations, according to the SANS Internet Storm Center, a security research and training group in Bethesda, Md. This year, Microsoft issued patches to correct a total of 37 critical Office security flaws (that number excludes three unpatched vulnerabilities in Microsoft Word, two of which Microsoft has acknowledged that criminals are actively exploiting).

But 2006 also was notable for attacks on flaws in software applications designed to run on top of operating systems, such as media players, Web browsers, and word processing and spreadsheet programs. In early February, attackers used a security hole in AOL's popular Winamp media player to install spyware when users downloaded a seemingly harmless playlist file. In December, a computer worm took advantage of a design flaw in Apple's QuickTime media player to steal passwords from roughly 100,000 MySpace.com bloggers, accounts that were then hijacked and used for sending spam. Also this month, security experts spotted a computer worm spreading online that was powered by a six-month old security hole in a corporate anti-virus product from Symantec Corp.

What is scary about the Microsoft numbers is the implication for the industry as a whole: while Microsoft still has widely publicized security issues (and an insanely large code base), they have made huge strides in secure coding. Where does this leave competitors that have yet to adopt secure coding practices (read: virtually every other large software company)? That Microsoft still faces security challenges even as they upgrade their SDLC to incorporate security concerns is not the takeaway. Microsoft is like the USMC facing the best the insurgents can throw at them; as they coevolve, what happens when those same insurgents throw their attacks at the Norwegian or Brazilian armed forces (read: every other major software company, or your Web 2.0 app for that matter)?

So it seems clear to me that we have Attacker 2.0 and we have Web 2.0, but Web 2.0's security model relies on Attackers being stuck on Attacker 1.0.

January 08, 2007 in Assurance, Deperimeterization, Federation, OWASP, Security, Software Architecture, Web 2.0 | Permalink | Comments (0)

Neal Stephenson on Transport Level Security Versus Message Level Security

Update: see this post on REST Threat Models and Attack surface for more ideas


As a security architect, do not assume that you know all the intermediaries and endpoints your message will traverse. Let's go back to 1670 -- from Quicksilver:

The heat was too much. He was out in the street with Uncle Thomas, bathing in cool air.

"They are still warm!" he exclaimed.

Uncle Thomas nodded.

"From the Mint?"

"Yes."

"You mean to tell me that the coins being stamped out at the Mint are, the very same night, melted down into bullion on Threadneedle Street?"

Daniel was noticing, now, that the chimney of Apthorp's shop, two doors up the street, was also smoking, and the same was true of diverse other goldsmiths up and down the length of Threadneedle.

Uncle Thomas raised his eyebrows piously.

"Where does it go then?" Daniel demanded.

"Only a Royal Society man would ask," said Sterling Waterhouse, who had slipped out to join them.

"What do you mean by that, brother?" Daniel asked.

Sterling was walking slowly towards him. Instead of stopping, he flung his arms out wide and collided with Daniel, embraced him and kissed him on the cheek. Not a trace of liquor on his breath. "No one knows where it goes--that is not the point. The point is that it goes--it moves--the movement ne'er stops--it is the blood in the veins of Commerce."

"But you must do something with the bullion--"

"We tender it to gentlemen who give us something in return" said Uncle Thomas. "It's like selling fish at Billingsgate--do the fish wives ask where the fish go?"

"It's generally known that silver percolates slowly eastwards, and stops in the Orient, in the vaults of the Great Mogul and the Emperor of China," Sterling said. "Along the way it might change hands hundreds of times. Does that answer your question?"

Transport level security secures a single point-to-point hop; it assumes good security on both endpoints and on everything beyond those endpoints within the transaction span. Message level security lets the message traverse numerous business, organizational, and technical boundaries with a modicum of security intact.
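
A sketch of the difference, assuming a key shared only by the true endpoints: transport security protects each hop and evaporates wherever a broker terminates the connection, while a MAC computed over the message itself rides along through any number of Threadneedle Street intermediaries.

    import hashlib
    import hmac
    import json

    END_TO_END_KEY = b"shared-by-true-endpoints-only"  # assumed, provisioned out-of-band

    def protect(message):
        """Message level security: integrity travels with the message itself."""
        body = json.dumps(message, sort_keys=True).encode()
        mac = hmac.new(END_TO_END_KEY, body, hashlib.sha256).hexdigest()
        return {"body": message, "mac": mac}

    def broker_hop(envelope):
        # A queue, gateway, or goldsmith in the middle. Transport security would
        # end here; the MAC keeps traveling with the message.
        return envelope

    def receive(envelope):
        body = json.dumps(envelope["body"], sort_keys=True).encode()
        expected = hmac.new(END_TO_END_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(envelope["mac"], expected):
            raise ValueError("message altered somewhere along the way")
        return envelope["body"]

    print(receive(broker_hop(broker_hop(protect({"amount": 100, "to": "mint"})))))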

December 04, 2006 in Deperimeterization, Enterprise Architecture, Federation, REST, Security, Software Architecture, Web Services | Permalink | Comments (0)

Decentralization and "Good Enough" security Part 3

I blogged parts 1 and 2 about issues relating to decentralizing security models to support horizontal integration. Parts 1 and 2 mainly dealt with the problems, which should be familiar to anyone who has tried to scale security in a distributed system. Section 3.1 of the paper lays out the goals, including:

2. It should be risk-based, and measurably so.

3. It should be agile and extensible, accommodating variable risk acceptance and shifting coalition partners.

4. It should enable incentives for information sharing.

7. Its decisions should be auditable.

8. It should recognize qualitative distinctions between different kinds of secrets (e.g., tactical vs. strategic)

9. It should be capable of being implemented incrementally.

These are all themes that enterprise information security should embrace, because they align well with business goals and are necessary in most distributed systems. Risk management is the best lens through which to view infosec. Many security personnel continue to view security architecture as analogous to jails, but you know what? Not much commerce happens in prison. The kind that does is black market, and it specifically routes around security controls. Sound familiar? As they say in Minnesota: ya sure, ya betcha.

Incremental adoption, and the ability to layer on security iteratively instead of in big bang style projects, is likewise a necessary condition for successful deployment. Again, this aligns with how software development is done; the trend from RUP to XP to Agile is towards lighter and lighter weight processes with more iterations. The more the security architecture dictates massive access management suites, the less it is going to fit into that kind of mode, and the less it will be deployed. As Duke Ellington said, "It don't mean a thing, if it ain't in the build dist".

The paper offers three guiding principles for building a new system.

1. Measure risk. "If you can't measure it, you can't manage it." When risk factors can't be measured directly, they can often be plausibly estimated ("guessed"); and subsequent experience can then be used to derive increasingly better estimates. This can be formalized in various ways, for example as an updatable Bayesian belief net.

2. Establish an acceptable risk level. As a nation we can afford to lose X secret and Y top secret documents per year. We can afford a Z probability that a particular technical capability or HUMINT source is compromised. If we set X, Y, Z, ... all to exactly zero, then all operations stop, because all operations entail some nonzero risk. What, then, are acceptable ranges of values for X, Y, Z and a long list of similar parameters? Even better, roughly what values would optimize our operational effectiveness in the long run, when today's security breaches show up not simply as harmful in the abstract, but as actually imperiling future operational effectiveness?

3. Ensure that information is distributed all the way up to the acceptable risk level. This is a very fundamental point. We have been living with systems that try to minimize risk. That is the wrong metric! We actually want to maximize information flow, subject to the overall constraint of not exceeding the acceptable risk levels set according to principle number 2, above. This means that instead of minimizing risk, we actually want to ensure that it is increased to its tolerable maximum (but no higher).

Regarding points one and two, I have blogged many times about a risk and metrics-centric approach to your information security program; see the MetriCon Digest for some specific ideas on how to accomplish this. The point I love in this paper is #3, "Ensure that information is distributed all the way up to the acceptable risk level" - what a great concept. If you do your risk management properly, this should be a great enabler; everything else is security by obscurity in some sense. It does turn least privilege on its head.
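
A toy sketch of principle 3 under stated assumptions: each requested access carries an estimated risk (which, per principle 1, a Bayesian model would keep re-estimating) and an information value, and access is granted greedily until the budget set under principle 2 is spent - maximizing flow, not minimizing risk.

    def distribute_to_risk_level(requests, risk_budget):
        """Grant the most valuable accesses whose total risk stays within budget."""
        granted, spent = [], 0.0
        # Greedy knapsack heuristic: best value-per-unit-risk first.
        for req in sorted(requests, key=lambda r: r["value"] / r["risk"], reverse=True):
            if spent + req["risk"] <= risk_budget:
                granted.append(req["doc"])
                spent += req["risk"]
        return granted  # risk pushed up to, but not past, the tolerable maximum

    requests = [
        {"doc": "tactical-brief", "risk": 0.02, "value": 9},
        {"doc": "strategic-plan", "risk": 0.30, "value": 10},
        {"doc": "logistics-memo", "risk": 0.01, "value": 3},
    ]
    print(distribute_to_risk_level(requests, risk_budget=0.10))
    # ['tactical-brief', 'logistics-memo'] -- the strategic plan exceeds the budget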

Section 4.2.4 discusses NetTop

NetTop, an NSA project, is a technology for virtualizing machines and networks at different security levels onto a single hardware platform. The NetTop user has a single desktop or laptop machine, and has the ability to open windows on “virtual machines” at different security levels. Each virtual machine has its own operating system (e.g., Windows XP or Linux), and runs as if it were on a physically separate machine, connected only to the appropriate level of classified network.

If you combine this approach with SOA and Web Services and where they are going, the virtualization approach should be inherent in how services are bound together - for example, a Security Pipeline Interface such as an XML Security Gateway in front of your web service. Otherwise, as Richard Thieme says, we will be functioning according to an increasingly obsolete trust model.

NetTop instantiates and extends the concept of Multiple Single Levels (MSL), rather than Multi-Level Security (MLS). MSL is the concept, fully accepted by the information security community, that different security levels can share the same physical hardware infrastructure, as long as high-grade cryptographic techniques are used to keep the differing levels completely separated, even when they are in the "same wire." Indeed, MSL is today's way of doing business in long-haul network transport. The encrypted text output of an NSA-approved crypto device is (generally) unclassified. It can, as a rule, be mixed with ordinary unclassified traffic, and sent over unclassified military or civilian channels.

...NetTop pushes the MSL solution out of the wiring and onto the desktop. In a NetTop solution there are still (in this hypothetical example) five separate classification levels, all kept separate; but there is only one physical infrastructure, and one machine on the desktop...In NetTop, SE Linux runs in a particularly secure mode, without an IP stack (the point of entry for remotely hacking ordinary computers).

The virtualization concept is important across the board in an SOA: from the user's desktop, where Microsoft is positioning CardSpace as a way to support multiple identities, to enterprise service buses and orchestration services that aggregate services, transform data, and route content based on widely varying policy requirements, the ability to virtualize endpoints in separate policy domains is a fundamental element.

Section 4.3 gets to the bits on the wire - risk tokens. In Phase 1, a risk token allows, for example:

1 token = risk associated with one-day, soft-copy-only access to one document by the average Secret-cleared individual.

This example associates risk with a token predicated on length of time, type of access, and access rights of the individual. I am quite fond of the notion of a risk token. Risk tokens would be represented in an SOA by SAML assertions (SOAP or browser-based federation) and WS-Security tokens (attached to XML documents). The WS-* stack takes this one step further with WS-Trust, which provides a framework for moving tokens around on a network and exchanging them for other risk tokens. This integration - which allows, say, a Windows client authenticating off of AD to exchange risk tokens and make authentication and authorization SAML assertions to a Java based web services provider - allows the security model to truly scale across technical and organizational domains.
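
The paper's Phase 1 unit invites a toy pricing function. Only the base unit (1 token = one-day, soft-copy-only access to one document by the average Secret-cleared individual) comes from the paper; the other multipliers below are made up for illustration.

    # Assumed multipliers; only the 1-token base unit is from the paper.
    ACCESS_TYPE = {"soft-copy": 1.0, "hard-copy": 2.0}
    INDIVIDUAL = {"secret-average": 1.0, "secret-new-hire": 1.5, "top-secret-vetted": 0.8}

    def token_cost(days, access, individual):
        """1 token = one-day, soft-copy access by the average Secret-cleared individual."""
        return days * ACCESS_TYPE[access] * INDIVIDUAL[individual]

    print(token_cost(1, "soft-copy", "secret-average"))    # 1.0 -- the unit itself
    print(token_cost(5, "hard-copy", "secret-new-hire"))   # 15.0 tokens

In a WS-Trust deployment the cost could be carried as a claim in the issued token, so the relying party can debit the requesting unit's budget at enforcement time.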

Next, with Phase 2 tokens, there is a subtle but important change: "In Phase 1, tokens were denominated by 'risk'; in Phase 2, they must be denominated by 'harm.'"

Several major espionage cases have shown a systemic weakness in the present security system, namely the fact that individuals are most often treated as either “fully trusted” (cleared) or “full untrusted” (uncleared). That is, trust is treated as a discrete, not a continuous, variable. A major reason for this is that a down-transition between these two states — revoking someone’s clearance — is so drastic an action that line managers, and even security managers, try to avoid it at almost any cost.

The Aldrich H. Ames case is a particularly famous, and perhaps egregious, example of this phenomenon. Not wanting to rock the boat, managers at multiple levels dismissed, or explained away, warning signs that should have accumulated to quite a damning profile. In effect, each "explanation" reset Ames' risk odometer back to zero.

This trusted vs. untrusted anti-pattern is immediately recognizable in enterprise security.

The risk model that we have posited for a tokenized system (both Phase 1 and Phase 2) provides, in effect, a continuously variable level of trust. The individual manager, therefore, is never faced with an all-or-nothing decision about whether to seek suspension of an employee’s security access. Instead, the manager has clearly defined, and separable, responsibilities in functional (“getting the job done”) and security (“work securely”) roles.

The manager’s security responsibility is simply to ensure that relevant facts known to him about an employee are also known to the risk model. The manager is not the only source of these facts, and he or she is not a “cop.” But it is reasonable to require that managers report, according to objective criteria, security-related factual information about their employees.

They will be more likely than now to do this knowing that the effect of such information will be incremental only, and not likely to trigger anything as drastic as the revocation of an employee’s clearance (as it might today). The manager’s functional responsibility is to manage his or her unit’s tokens wisely. If the risk model assigns a high risk to one particular employee, then the cost of transactions (access to classified information) for that employee will become high. The manager may make the purely economic decision to change that employee’s job assignment, so that it requires less access to secret information and is less costly to the token budget. If an employee’s transaction cost becomes prohibitively high, then the manager may counsel the employee either (i) to take actions that would lower that cost, or (ii) to seek other employment.

"it is reasonable to require that managers report, according to objective criteria, security-related factual information about their employees." Assertion-based access control anyone? What differentiates phase 1 and phase 2 tokens is that phase 2 tokens map back to the object. In this case we get to the point where SAML and WS-Security reach their limit. At this point we need to look at XACML to deal with mapping decisions back to the object resource. James McGovern has been beating the XACML drum, and in the area of phase 2 token concept XACML really pays off. The XACML mapping of the subject to authorized resources, conditions, and actions allows for the security policy to be applied to the subject as well as the representation of the object. One example of this in the real world at railroad crossings, normal cars pass straight over the tracks as long as the gate is open and the light is not flashing. School buses, because of their cargo, stop and perform additional visual checks before proceeding

October 12, 2006 in Deperimeterization, Enterprise Architecture, Federation, Identity, SAML, Security, Security Architecture, Software Architecture, Web Services | Permalink | Comments (1)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.
