1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

The Real Migration Problem

Preview of Tom Friedman's thinking for his new book, Hot, Flat, and Crowded. Killer quote (emphasis added):


FP: And what about drilling? Republican presidential candidate Sen. John McCain, his running mate Gov. Sarah Palin, and President George W. Bush are implying that lifting environmental restrictions on drilling is the way to promote energy independence.


TF: Well, I think it’s patent nonsense. No one believes that somehow offshore, there’s enough oil in any near term and even the long term to provide us oil independence. It’s the wrong approach because in a world that’s hot, flat, and crowded, fossil fuels—and particularly crude oil—are going to be expensive and exhausting. Therefore the focus should be on the next great global industry: clean energy technology. When I hear McCain pounding the table for “drill, drill, drill,” it reminds me of someone pounding the table for IBM Selectric typewriters on the eve of the IT revolution.


I’m not against offshore drilling, by the way, because I believe the technology and the safety has improved far beyond where it was back in the 70s, 80s, and 90s, even. What I’m against is making it the centerpiece of our energy policy. If all McCain said was, “Let’s drill, but let’s also throw everything into innovating the next generation of clean-energy technologies,” I’d say, “You’ve got it exactly right, pal.”


It's funny, because as someone who has done a half dozen legacy migration projects (with the mental and emotional scars to prove it), I was thinking the same thing. The entrenched mindset: "If we just dig our trench deeper (in this case literally), then we will be ok"...at least until the person in question retires...

On one of the legacy migration projects I worked on, I was the third consultant who had tried to get the company off of the mainframe and onto distributed systems (which are no panacea, but this company really did need to make the move). The core developers of the mainframe were actively hostile to change, as opposed to simply passive-aggressive, which is what we expect. For example, if you asked how a piece of functionality worked, say a report writer, the developer would not answer; he would stand up, walk out of the room, come back with an 800-page "data model", slam it on the table, and walk out again. Good times.

A chief objection, beyond fear of the unknown, was the perceived lack of elegance in the distributed systems compared to the control of, say, JCL. Anyway, what progress I made was due to an analogy: we were leaving Greece, which has a rich culture, history, and philosophy, and moving to Rome, which maybe was not as elegant as Greece, but people still like circuses, roads, and aqueducts. So when, several times a day, a perceived go/no-go issue arose, I would gently remind the developers that "we are now in Rome and things work differently here."

Intransigently digging the trench deeper is not the way; instead we need to understand the energy problem in a larger context and find deployable technologies to help address it. If you think drill, drill, drill is the answer, then I think the answer for you is the same as for someone who knows COBOL and flat-out refuses to learn modern languages even when that is required: a nice retirement house on a golf course somewhere.

September 09, 2008 in Enterprise Architecture, Politics | Permalink | Comments (0)

Neal Stephenson on Transport Level Security Versus Message Level Security

Update: see this post on REST Threat Models and Attack surface for more ideas


As a security architect, do not assume that you know all the intermediaries and endpoints your message will traverse. Let's go back to 1670 -- from Quicksilver:

The heat was too much. He was out in the street with Uncle Thomas, bathing in cool air.

"They are still warm!" he exclaimed.

Uncle Thomas nodded.

"From the Mint?"

"Yes."

"You mean to tell me that the coins being stamped out at the Mint are, the very same night, melted down into bullion on Threadneedle Street?"

Daniel was noticing, now, that the chimney of Apthorp's shop, two doors up the street, was also smoking, and the same was true of diverse other goldsmiths up and down the length of Threadneedle.

Uncle Thomas raised his eyebrows piously.

"Where does it go then?" Daniel demanded.

"Only a Royal Society man would ask," said Sterling Waterhouse, who had slipped out to join them.

"What do you mean by that, brother?" Daniel asked.

Sterling was walking slowly towards him. Instead of stopping, he flung his arms out wide and collided with Daniel, embraced him and kissed him on the cheek. Not a trace of liquor on his breath. "No one knows where it goes--that is not the point. The point is that it goes--it moves--the movement ne'er stops--it is the blood in the veins of Commerce."

"But you must do something with the bullion--"

"We tender it to gentlemen who give us something in return" said Uncle Thomas. "It's like selling fish at Billingsgate--do the fish wives ask where the fish go?"

"It's generally known that silver percolates slowly eastwards, and stops in the Orient, in the vaults of the Great Mogul and the Emperor of China," Sterling said. "Along the way it might change hands hundreds of times. Does that answer your question?"

Transport level security secures a point-to-point hop; it assumes good security at both endpoints and everywhere beyond those endpoints within the transaction's span. Message level security lets the message itself traverse numerous business, organizational, and technical boundaries with a modicum of security intact.
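To make the contrast concrete, here is a minimal sketch (plain Python with a hypothetical pre-shared key between the true endpoints; real deployments would use WS-Security or XML Signature) of integrity protection that travels with the message rather than with the channel:

```python
import hmac, hashlib, json

SHARED_KEY = b"demo-key"  # hypothetical key shared out of band by the true endpoints

def sign_message(payload: dict) -> dict:
    """Attach an integrity token that travels WITH the message,
    surviving any number of intermediary hops (message-level security)."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "sig": tag}

def verify_message(envelope: dict) -> bool:
    """The ultimate receiver verifies integrity regardless of how many
    brokers, queues, or proxies the envelope passed through."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

# Transport-level security (TLS) would protect only each hop's channel;
# once an intermediary terminates the connection, that protection is gone.
msg = sign_message({"amount": 100, "to": "Threadneedle St"})
assert verify_message(msg)  # still true after any number of hops
```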

December 04, 2006 in Deperimeterization, Enterprise Architecture, Federation, REST, Security, Software Architecture, Web Services | Permalink | Comments (0)

New article - Defining Misuse within the Development Process

The latest IEEE Security & Privacy Journal has an article that I co-wrote with John Steven on misuse cases. The goal of the series and this article is to get beyond the hand-wavy "security needs to get involved in the development lifecycle" and instead provide the where, what, and how of that involvement, with specific ways of doing it. Gary McGraw's latest book has excellent methods for this, as does Mike Howard's. One idea Mike Howard talks about is particularly important: everything that security attempts to impose on developers is like a tax on developers' time. As such, security's requests must be targeted at the areas of strategic importance, and in my view part of what this means is that security should be involved early enough in the lifecycle that it can work in parallel with developers on building more secure code. It is hard to get involved much earlier than use case modeling, which is what this article is about -- Defining Misuse within the Development Process.

In the article we explore how misuse cases are related to use cases, what to model, how misuse cases relate to architecture, and related considerations. We will continue on this theme over the next several issues of S&P.
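As a rough illustration of the relationship, and emphatically my sketch rather than the article's notation, a misuse case can be captured as a structured counterpart to a use case, with a "threatens" link and the mitigations that become security requirements:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    actor: str

@dataclass
class MisuseCase:
    name: str
    mis_actor: str
    threatens: UseCase  # the use case this misuse subverts
    mitigations: list = field(default_factory=list)  # derived security requirements

login = UseCase("Authenticate user", actor="Customer")
brute = MisuseCase(
    "Brute-force credentials",
    mis_actor="External attacker",
    threatens=login,
    mitigations=["Account lockout policy", "Rate limiting", "Audit failed logins"],
)

# Each mitigation feeds back into the use case model as a security requirement,
# so security work proceeds in parallel with development rather than after it.
for m in brute.mitigations:
    print(f"{brute.threatens.name} <- requires -> {m}")
```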

November 30, 2006 in Enterprise Architecture, Security, Security Architecture, Software Architecture, Use Cases | Permalink | Comments (0)

Decentralization and "Good Enough" security Part 3

I blogged parts 1 and 2 about the issues in decentralizing security models to support horizontal integration. Parts 1 and 2 mainly dealt with the problems, which should be familiar to anyone who has tried to scale security in a distributed system. Section 3.1 lays out the goals, including:

2. It should be risk-based, and measurably so.

3. It should be agile and extensible, accommodating variable risk acceptance and shifting coalition partners.

4. It should enable incentives for information sharing.

7. Its decisions should be auditable.

8. It should recognize qualitative distinctions between different kinds of secrets (e.g., tactical vs. strategic).

9. It should be capable of being implemented incrementally.

These are all themes that enterprise information security should embrace, because they align well with business goals and are necessary in most distributed systems. Risk management is the best lens through which to view infosec. Many security personnel continue to view security architecture as analogous to jails, but you know what? Not much commerce happens in prison. The kind that does is black market, and it specifically routes around security controls. Sound familiar? As they say in Minnesota: ya sure, ya betcha.

Incremental adoption, the ability to layer on security iteratively instead of in big-bang projects, is likewise a necessary condition for successful deployment. Again this aligns with how software development is done: the trend from RUP to XP to Agile is toward lighter and lighter weight processes with more iterations. The more the security architecture dictates massive access management suites, the less it is going to fit into that kind of mode, and the less it will be deployed. As Duke Ellington said, "It don't mean a thing, if it ain't in the build dist".

The paper offers three guiding principles for building a new system.

1. Measure risk. "If you can't measure it, you can't manage it."

When risk factors can’t be measured directly, they can often be plausibly estimated (“guessed”); and subsequent experience can then be used to derive increasingly better estimates. This can be formalized in various ways, for example as an updatable Bayesian belief net.

2. Establish an acceptable risk level. As a nation, we can afford to lose X secret and Y top secret documents per year. We can afford a Z probability that a particular technical capability or HUMINT source is compromised. If we set X, Y, Z, . . . all to exactly zero, then all operations stop, because all operations entail some nonzero risk. What, then, are acceptable ranges of values for X, Y, Z and a long list of similar parameters? Even better, roughly what values would optimize our operational effectiveness in the long run, when today's security breaches show up not simply as harmful in the abstract, but as actually imperiling future operational effectiveness?

3. Ensure that information is distributed all the way up to the acceptable risk level. This is a very fundamental point. We have been living with systems that try to minimize risk. That is the wrong metric! We actually want to maximize information flow, subject to the overall constraint of not exceeding the acceptable risk levels set according to principle number 2, above. This means that instead of minimizing risk, we actually want to ensure that it is increased to its tolerable maximum (but no higher).

Regarding points one and two, I have blogged many times about a risk- and metrics-centric approach to your information security program; see the MetriCon Digest for some specific ideas on how to accomplish this. The point I love in this paper is #3, "Ensure that information is distributed all the way up to the acceptable risk level" -- what a great concept. If you do your risk management properly, this should be a great enabler; everything else is security by obscurity in some sense. It does turn least privilege on its head.
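To make principles 1 through 3 concrete, here is a minimal sketch with invented numbers (the paper formalizes this far more carefully): a Bayesian-updated risk estimate, an acceptable risk budget, and distribution of access up to, but not past, that budget:

```python
# Principle 1: a plausible initial guess, refined by experience (Beta prior).
# Principle 2: an acceptable risk budget (the "X documents per year").
# Principle 3: distribute access until the budget is consumed, but no further.

RISK_BUDGET = 5.0  # hypothetical: expected document-compromises we can absorb per year

prior_compromises, prior_clean = 1, 99  # guessed prior: ~1% chance of compromise per access

def update_estimate(compromises_observed: int, clean_observed: int) -> float:
    """Beta-Binomial update: subsequent experience sharpens the initial guess."""
    a = prior_compromises + compromises_observed
    b = prior_clean + clean_observed
    return a / (a + b)  # posterior mean probability of compromise per access

def grant_until_budget(request_risks: list[float]) -> int:
    """Approve access requests, cheapest risk first, until the cumulative
    expected loss would exceed the acceptable level; then stop."""
    granted, spent = 0, 0.0
    for risk in sorted(request_risks):
        if spent + risk > RISK_BUDGET:
            break  # budget exhausted: cap the risk, don't minimize it to zero
        spent += risk
        granted += 1
    return granted

p = update_estimate(compromises_observed=2, clean_observed=500)
print(f"posterior compromise estimate per access: {p:.4f}")
print(f"requests granted: {grant_until_budget([p] * 10_000)}")
```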

Section 4.2.4 discusses NetTop

NetTop, an NSA project, is a technology for virtualizing machines and networks at different security levels onto a single hardware platform. The NetTop user has a single desktop or laptop machine, and has the ability to open windows on “virtual machines” at different security levels. Each virtual machine has its own operating system (e.g., Windows XP or Linux), and runs as if it were on a physically separate machine, connected only to the appropriate level of classified network.

If you combine this approach with where SOA and Web Services are going, virtualization should be inherent in how services are bound together -- for example, a Security Pipeline Interface such as an XML Security Gateway in front of your web service. Otherwise, as Richard Thieme says, we will be functioning according to an increasingly obsolete trust model.

NetTop instantiates and extends the concept of Multiple Single Levels (MSL), rather than Multi-Level Security (MLS). MSL is the concept, fully accepted by the information security community, that different security levels can share the same physical hardware infrastructure, as long as high-grade cryptographic techniques are used to keep the differing levels completely separated, even when they are in the "same wire." Indeed, MSL is today's way of doing business in long-haul network transport. The encrypted text output of an NSA-approved crypto device is (generally) unclassified. It can, as a rule, be mixed with ordinary unclassified traffic, and sent over unclassified military or civilian channels.

...NetTop pushes the MSL solution out of the wiring and onto the desktop. In a NetTop solution there are still (in this hypothetical example) five separate classification levels, all kept separate; but there is only one physical infrastructure, and one machine on the desktop...In NetTop, SE Linux runs in a particularly secure mode, without an IP stack (the point of entry for remotely hacking ordinary computers).

The virtualization concept is important across the board in an SOA. From the user's desktop, where Microsoft is positioning CardSpace as a way to support multiple identities, to enterprise service buses and orchestration services that aggregate services, transform data, and route content based on widely varying policy requirements, the ability to virtualize endpoints in separate policy domains is a fundamental element.
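As a toy rendering of the MSL idea, and not NetTop's actual design, the sketch below uses per-level HMAC keys to keep traffic in separate policy domains over one shared wire; real MSL separation relies on high-grade encryption, but the shape is the same:

```python
import hmac, hashlib

# One physical channel, multiple single levels: each level gets its own key,
# and a message is only acceptable within its own level's policy domain.
LEVEL_KEYS = {  # hypothetical per-level keys, provisioned out of band
    "UNCLASSIFIED": b"key-u",
    "SECRET": b"key-s",
    "TOP SECRET": b"key-ts",
}

def send(level: str, body: bytes) -> dict:
    tag = hmac.new(LEVEL_KEYS[level], body, hashlib.sha256).hexdigest()
    return {"level": level, "body": body, "tag": tag}  # all levels share one wire

def receive(envelope: dict, my_level: str) -> bytes:
    """A virtual endpoint at my_level only accepts traffic keyed to its level."""
    if envelope["level"] != my_level:
        raise PermissionError("wrong policy domain")
    expected = hmac.new(LEVEL_KEYS[my_level], envelope["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["tag"]):
        raise ValueError("cross-level tampering detected")
    return envelope["body"]

msg = send("SECRET", b"troop movements")
print(receive(msg, "SECRET"))  # ok within the SECRET domain
# receive(msg, "UNCLASSIFIED")  # raises: levels stay separated on shared infrastructure
```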

Section 4.3 gets to the bits on the wire: risk tokens. A phase 1 risk token allows, for example:

1 token = risk associated with one-day, soft-copy-only access to one document by the average Secret-cleared individual.

This example associates risk with a token predicated on length of time, type of access, and the access rights of the individual. I am quite fond of the notion of a risk token. Risk tokens would be represented in an SOA by SAML assertions (SOAP or browser-based federation) and WS-Security tokens (attached to XML documents). The WS-* stack takes this one step further with WS-Trust, which provides a framework for moving tokens around on a network and exchanging them for other risk tokens. This integration, which allows, say, a Windows client authenticating off of AD to exchange risk tokens and make authentication and authorization SAML assertions to a Java-based web services provider, lets the security model truly scale across technical and organizational domains.
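A sketch of the phase 1 idea (my rendering, with made-up multipliers; in an SOA these fields would travel as attributes in a SAML assertion or WS-Security token):

```python
from dataclasses import dataclass

@dataclass
class RiskToken:
    clearance: str    # access rights of the individual
    access_type: str  # e.g. "soft-copy", "hard-copy"
    days: int         # length of access

# Hypothetical multipliers: the baseline "1 token" case from the paper is
# one day of soft-copy access by an average Secret-cleared individual.
CLEARANCE_FACTOR = {"Top Secret": 0.5, "Secret": 1.0, "Uncleared": 10.0}
ACCESS_FACTOR = {"soft-copy": 1.0, "hard-copy": 3.0}

def token_cost(t: RiskToken) -> float:
    """Price an access in risk tokens; riskier combinations cost more."""
    return t.days * CLEARANCE_FACTOR[t.clearance] * ACCESS_FACTOR[t.access_type]

baseline = RiskToken("Secret", "soft-copy", days=1)
assert token_cost(baseline) == 1.0  # the paper's unit token
print(token_cost(RiskToken("Secret", "hard-copy", days=5)))  # costlier access
```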

Next, in phase 2 tokens, there is a subtle but important change: "In Phase 1, tokens were denominated by 'risk'; in Phase 2, they must be denominated by 'harm.'"

Several major espionage cases have shown a systemic weakness in the present security system, namely the fact that individuals are most often treated as either "fully trusted" (cleared) or "fully untrusted" (uncleared). That is, trust is treated as a discrete, not a continuous, variable. A major reason for this is that a down-transition between these two states — revoking someone's clearance — is so drastic an action that line managers, and even security managers, try to avoid it at almost any cost.

The Aldrich H. Ames case is a particularly famous, and perhaps egregious, example of this phenomenon.

Not wanting to rock the boat, managers at multiple levels dismissed, or explained away, warning signs that should have accumulated to quite a damning profile. In effect, each "explanation" reset Ames' risk odometer back to zero.

This trusted vs. untrusted anti-pattern is immediately recognizable in enterprise security.

The risk model that we have posited for a tokenized system (both Phase 1 and Phase 2) provides, in effect, a continuously variable level of trust. The individual manager, therefore, is never faced with an all-or-nothing decision about whether to seek suspension of an employee’s security access. Instead, the manager has clearly defined, and separable, responsibilities in functional (“getting the job done”) and security (“work securely”) roles.

The manager’s security responsibility is simply to ensure that relevant facts known to him about an employee are also known to the risk model. The manager is not the only source of these facts, and he or she is not a “cop.” But it is reasonable to require that managers report, according to objective criteria, security-related factual information about their employees.

They will be more likely than now to do this knowing that the effect of such information will be incremental only, and not likely to trigger anything as drastic as the revocation of an employee’s clearance (as it might today). The manager’s functional responsibility is to manage his or her unit’s tokens wisely. If the risk model assigns a high risk to one particular employee, then the cost of transactions (access to classified information) for that employee will become high. The manager may make the purely economic decision to change that employee’s job assignment, so that it requires less access to secret information and is less costly to the token budget. If an employee’s transaction cost becomes prohibitively high, then the manager may counsel the employee either (i) to take actions that would lower that cost, or (ii) to seek other employment.
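Here is a toy sketch of that continuously variable trust level (invented numbers, my own rendering of the paper's idea): reported facts accumulate, nothing resets the odometer to zero, and the cost of access rises smoothly instead of flipping between trusted and untrusted:

```python
class RiskOdometer:
    """Accumulates security-relevant facts; unlike the Ames case, an
    'explanation' never resets the reading back to zero."""
    def __init__(self):
        self.reading = 0.0

    def report(self, severity: float):
        self.reading += severity  # incremental only, never all-or-nothing

    def transaction_cost(self, base_cost: float) -> float:
        """Access gets pricier as accumulated risk grows; the manager sees
        an economic signal, not a drastic revoke-or-retain decision."""
        return base_cost * (1.0 + self.reading)

employee = RiskOdometer()
for warning_sign in (0.2, 0.5, 0.1):  # hypothetical reported facts
    employee.report(warning_sign)

print(employee.transaction_cost(base_cost=1.0))  # 1.8 tokens instead of 1.0
```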

"it is reasonable to require that managers report, according to objective criteria, security-related factual information about their employees." Assertion-based access control anyone? What differentiates phase 1 and phase 2 tokens is that phase 2 tokens map back to the object. In this case we get to the point where SAML and WS-Security reach their limit. At this point we need to look at XACML to deal with mapping decisions back to the object resource. James McGovern has been beating the XACML drum, and in the area of phase 2 token concept XACML really pays off. The XACML mapping of the subject to authorized resources, conditions, and actions allows for the security policy to be applied to the subject as well as the representation of the object. One example of this in the real world at railroad crossings, normal cars pass straight over the tracks as long as the gate is open and the light is not flashing. School buses, because of their cargo, stop and perform additional visual checks before proceeding

October 12, 2006 in Deperimeterization, Enterprise Architecture, Federation, Identity, SAML, Security, Security Architecture, Software Architecture, Web Services | Permalink | Comments (1)

Diagramming Tool: Enterprise Integration Patterns Omnigraffle Stencil

If you like Macs and asynchronous middleware messaging systems, then you are in luck. Here is an OmniGraffle stencil for the integration patterns described in Enterprise Integration Patterns. The stencil contains notation based on the authors' descriptions of the patterns. The book is one of the best resources available on messaging systems.

[Screenshot: the Enterprise Integration Patterns stencil in OmniGraffle]

The Stencil, like the book, has Message Endpoints, Message Construction, Message Routing, Message Transformation, Messaging Channels, and Systems Management patterns included.

October 03, 2006 in Enterprise Architecture, Software Architecture | Permalink | Comments (0)

WAF and XSG Risk and Effectiveness at 20,000 feet

I already blogged about one of my favorite presentations at Metricon, in which Bryan Ware showed a model combining risk and effectiveness to identify areas to focus on.

[Figure: controls plotted by risk versus effectiveness, in four quadrants]

Of course, the upper right quadrant, highly effective controls for high risk items, makes sense to fund, but the harder choices lie in quadrants 2 and 3. Would you rather really make a dent in a relatively lower risk problem, or partially solve a higher risk item? Bryan's guidance was to incentivize high effectiveness (quadrant 3) over lower effectiveness (quadrant 2). This is a bold move, because it may be seen as ignoring higher risk items; however, in the long run it may drive organizations to work towards higher effectiveness.

One area that organizations struggle with is how to get started in application security. There are a lot of types of controls to consider, like static analysis tools, SDLC improvements, threat modeling, strong authentication, federated identity, and so on. I have seen several enterprises conduct tradeoff analysis on Web App Firewalls (WAF), which offer web applications some protection from SQL injection and the like, and XML Security Gateways (XSG), which protect XML messages and Web Services. So I used Bryan Ware's quadrant to analyze where it would make sense to focus efforts if an organization could choose only one. This is at the 20,000 foot level, and both risk and effectiveness are subjective, so YTMMV (Your Threat Model May Vary).

If we assume that risk is higher in the apps WAFs generally protect (Internet-facing portals) than in what XSGs generally protect (say, B2B apps on VPNs), then the WAF goes in a higher risk quadrant than the XSG.

If we further assume that WAFs involve a lot of educated guesswork, due to the performance constraints of running signature and anomaly detection on unstructured data at web app speed, then their relative effectiveness is downgraded versus XSGs, which may operate on asynchronous systems and structured data; and, of course, XSGs can leverage interoperable security standards like WS-Security, whereas WAFs rely on pattern recognition. The relative effectiveness may therefore prove higher for XSGs -- again, YTMMV. Assuming the above, the decision looks this way:

[Figure: WAF and XSG plotted on the risk/effectiveness quadrants]

Depending on architecture, threat model, and other factors, an XML Security Gateway may be the logical choice if you have to pick one, since it may prove to have higher effectiveness than a WAF. Of course, these need to be mapped to your particular organization, and there are a lot of other controls to consider besides these two, but in my view this model works really well as a decision support tool for hard security architecture choices.
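For what it's worth, the quadrant analysis reduces to a small decision-support calculation; the scores below are hypothetical placeholders, so YTMMV applies here too:

```python
# Hypothetical 1-10 scores; plug in your own threat model's numbers.
controls = {
    "WAF": {"risk_addressed": 8, "effectiveness": 4},  # Internet-facing portals
    "XSG": {"risk_addressed": 6, "effectiveness": 8},  # structured B2B/XML traffic
}

def quadrant(c: dict) -> str:
    """Bucket a control into Bryan Ware's four quadrants."""
    hi_risk = c["risk_addressed"] >= 5
    hi_eff = c["effectiveness"] >= 5
    return {(True, True): "Q1: fund",
            (False, True): "Q3: incentivize high effectiveness",
            (True, False): "Q2: harder call",
            (False, False): "Q4: defer"}[(hi_risk, hi_eff)]

for name, c in controls.items():
    print(f"{name}: {quadrant(c)} (score {c['risk_addressed'] * c['effectiveness']})")
```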

September 12, 2006 in Computer Security, Enterprise Architecture, Security, Security Architecture, SOA, Software Architecture, WAF | Permalink | Comments (0)

SOA, Web Services, and XML Security class in Twin Cities

I will be teaching a public one-day class in the Twin Cities on SOA, Web Services, and XML security. The date is 9/21/06. The training focuses on identity, message, service, deployment, and transaction security in SOA and Web Services systems. The class is designed for people who are building SOA and Web Services and want to build security into the system from the design and development stages. The focus is on the real risks in SOA and Web Services, which security standards and protocols are there to help, how to use them, and finally where you still need to code and configure to address security issues. I have also given a version of this class at OWASP AppSec Europe and on-site for a variety of clients. The content is aimed at security practitioners, developers, and architects.

July 10, 2006 in Enterprise Architecture, Security, Security Architecture, SOA, Software Architecture, Web Services | Permalink | Comments (0)

Pragmatic Speculation

Arctec's own Patrick Christiansen just released a paper on Pragmatic Speculation (...or, what the @#%# is a Technical Use Case?), which looks at ways to model cross-cutting concerns across a number of use cases.

July 10, 2006 in Enterprise Architecture, Software Architecture, Use Cases | Permalink | Comments (0)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015
