1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

Security Champions Guide to Web Application Security

I have a new eBook available at Akamai; it's called Security Champions Guide to Web Application Security. Why Security Champion? Well, AppSec is an area that often falls betwixt and between different groups - it blurs traditional lines. Basically it comes down to who cares enough to dig in and try to solve the company's WebApp security problems; they may come from the Dev team, the Security team, the Network team, or any number of places. There is usually not a role called security champion, but there is a need for someone to champion the cause of WebAppSec: to craft the security plan, to get designs right, to implement the code, and to deploy. Doing all of this takes a broad mix of skills.

The book is broken down into the following chapters:

Chapter 1. Behavioral Perimeter - explores how the traditional structural perimeter needs to factor in a behavioral component to deliver security where it's needed

Chapter 2. Security at Scale - simply put, scale is table stakes. If your security doesn't scale, you do not even get invited to the party

Chapter 3. Intelligent Security - security cannot just be passive and static; co-evolution is required now

Chapter 4. Integration - an effective boundary requires thinking through From-To integration layers at both the technology and the process level

Chapter 5. SecDevOps - security test instrumentation

Chapter 6. Security Architecture Process - fostering a living, breathing boundary

November 17, 2015 | Permalink | Comments (0)

Security > 140 Conversation with Pamela Dingle on Identity

For this Security > 140, I discuss identity with Pam Dingle (@pamelarosiedee). Pam is Principal Technical Architect at Ping Identity, a veteran of building, innovating and riding the many waves of the identity ecosystem. We discuss an appropriately wide range of topics from how developers should approach identity to unempowered frogs:

Gunnar Peterson:
Consultants always talk about people, process, and technology. However, there is an old consulting truism - it's never a process problem, it's never a tech problem, it's always a people problem. So when I go to a security conference and I talk about SAML or OAuth, I frequently get the thousand yard stare, and when I go to an identity conference and talk about CSRF or DOR I also get blank looks. Thinking back, there are very smart and capable people at these conferences, and yet, in the main, the security people know little about identity and vice versa. It struck me that I do not even see the same people from one place to the next; about the only people I see at both types of conferences are Bob Blakley and you. These two domains, identity and security, are totally intertwined - so how do we get better cross-pollination?

Pam Dingle:
I see a number of things we can do to get that cross-pollination going. I think proximity breeds curiosity, and the best thing that can happen is for conferences to start creating cross-discipline tracks. A lot of professionals have limited travel budgets - so it needs to be possible for identity folks to be introduced to security concepts (and vice versa) without the barrier of cost. I also think that there are some interesting thought experiments that we could curate to merge what often feels like extremely separate paradigms. CSRF is in fact a really good example, because it has a direct bearing on the identity world (receiving a form submission and making assumptions about where it came from is not materially different from receiving an authentication or authorization response and making assumptions about where it came from). Putting security concepts into identity framing and vice versa could be a good way to start to entice people into common conversations, pulling us each out of our respective echo chambers.

GP:
It seems like companies’ approaches to mobile identity really cover a wide range. For better and worse, there is not a lot of consistency in how people try to solve mobile identity. You see all manner of different home-grown solutions and a hodge podge of different standards (passwords, certs, SAML, FIDO, OAuth, MDM). Are there different challenges driving people to different architectures? Or is this just the normal early days of a new technology?

PD:
A hodge podge is exactly the right way to characterize it. The use cases are distinct and sometimes only tangentially related, and yet somehow they all still fall under this useless umbrella classification of "mobile identity". I would say that the three most common mobile challenges we see today in Enterprises are: 1) How to secure native mobile applications that rely on REST APIs without caching user credentials on the device, 2) How to use the unique location, hardware, and proximity of a device to a user to enrich the security of an authentication ceremony, and 3) Is the device involved in either Challenge #1 or Challenge #2 a known and/or trusted and/or uncompromised device?

Standards like OAuth and OpenID Connect are primarily centered on challenge #1. Standards like FIDO are centered on challenge #2, and can play a fascinating role in combining device-local authenticators with central systems in a standardized way. Product sets like MDM and EMM are centered on challenge #3. Passwords and certs could be part of the implementation of any of those challenges, although the question of securing a private key or other secret on a mobile device tends to push back into challenge #3 territory.

I do think this is early days for mobile identity, and I have to say, it's fascinating to watch the space evolve. If you are developing consumer-facing mobile apps for example, your risk assessment is radically different than an Enterprise use case - you don't necessarily have the luxury that many Enterprises have, of refusing to run on a poorly secured device. As such, the last thing you might choose to do is put a big heavy trusted credential like a private key or a password on the device. Your best bet there might be to create a lightweight, short-lived, very tightly scoped token there, so that the attacker you assume is living on the device with your app can only steal limited stuff for a little while. Compare that to the way Enterprises think of devices, where they want to *believe* that their corporate devices are trustworthy, and are willing to pay for the promise of a safe space, so that they can build a solid foundation for a chain of trust. Is it real? Well that takes us back to identity-security cross-pollination...
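Pam's point about a "lightweight, short-lived, very tightly scoped token" can be sketched concretely. The snippet below is only an illustration of the idea, not anything from the interview - the claim names, scope, and use of the PyJWT library are my own assumptions:

```python
# Illustrative sketch: mint a short-lived, narrowly scoped token for a mobile
# client so a compromised device only yields limited authority for a short time.
# The signing key, scope, and claim names are placeholders.
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-secret"

def mint_mobile_token(user_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": user_id,
        "scope": "orders:read",                        # one narrow scope, not blanket access
        "iat": now,
        "exp": now + datetime.timedelta(minutes=10),   # short lifetime limits what theft is worth
        "aud": "mobile-api",
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")
```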

GP:
For a long time, “identity” has been a grand topic with many facets and edge cases, but once you reduce it to implementation it ends up primarily being about SSO and Provisioning, with a side of MFA. What are you seeing in terms of new use cases that will drive identity going forward on the different platforms in play today? Is it pushing towards stronger auth, more attribute granularity, portability, bidirectional communications? What do you see as the next killer use cases for identity?

PD:
I think we're already neck deep in the next killer use case for identity, and the most critical thing we have to do is to realize it. In my opinion the next killer use case is not (as you might think) granting consent, but un-granting it.

The problem we've always had with delegation & consent is that the consequences of giving consent are rarely clear. We balance our desire for a thing we want right now in the moment, against a hazy set of possible future outcomes, and most of us choose the instant gratification, even those of us who are suspicious or afraid - because there are almost no alternatives. We are accustomed to granting things -- and the services we use are accustomed to asking for anything they want, but there is a tipping point coming. Our own devices have been consumerized, and the line between security and surveillance on those devices is blurring. Part of it is for our own good, and we SHOULD be allowing it. The rest is dangerous and exploitative. If, with SAML and OpenID Connect, we have a layer of plumbing that could become the lingua franca of identity and access, then we could now have a solid basis upon which to specify policy. And those policies don't have to only belong to corporations.

In case you think I've got a terminal case of "the world should be a better place", no, I don't think this type of thing will come about by general consensus. Somebody will build a thing people want, and will give them innovative tools to intuitively start and stop the flow of identity data, device context, and other identifying information. When that innovation happens, I believe the response will be enormous. If that doesn't happen, we'll continue as a population of uncomfortable yet unempowered frogs, turning up the water in our own saucepan.

GP:
So the inverse of SSO is Selective Service Revocation, then - interesting. OK, for traditional identity protocols like SAML, OpenID Connect, OAuth, and so on - you’ve been working with these for a while, since the beginning basically. When you work with developers on integrating these protocols into applications, what are a couple of things you wish you had known at the beginning, things to do or avoid on implementation and integration?

PD:
For me, the #1 hard-won life lesson is: fear the hello world identity integration. You know, that thing that you make as a developer where you just get the thing working in the simplest case, and THEN you'll go back and make it compliant, check your inputs, and so on. Except sometimes you never get back to it, and you end up with something that seems like it works but is in fact exceptionally dangerous, because the true measure of a successful integration should be what it rejects, not what it accepts. Mitigation of risk around hello world identity integrations should be a built-in part of any running identity deployment, but when every integration is bespoke, the cost is tough to swallow: this is why I'm such a huge fan of OpenID Connect. As a 'last mile' integration strategy for applications, OpenID Connect allows non-experts in identity to easily create integrations, but more importantly, if you use OpenID Connect you can also run conformance tests built by the community. The OpenID Foundation has invested in official, public conformance tests for productized software, but they have also made the conformance software available as open source for use in private testing scenarios. This combination of "easy to write a hello world integration" and "easy to make sure the hello world integration never goes into production" must be the industry's standard for the future, if we want adoption and security to go hand in hand.
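To make the "measure it by what it rejects" point concrete, here is a minimal sketch of the checks a hello-world integration tends to skip. It is illustrative only - the issuer, audience, and use of PyJWT are assumptions, and a real OpenID Connect relying party would also fetch the provider's keys and run the conformance suite Pam mentions:

```python
# Illustrative sketch: validate an OpenID Connect ID token instead of trusting it.
# Issuer, audience, and key handling are assumptions for the example.
import jwt  # PyJWT

EXPECTED_ISSUER = "https://idp.example.com"
EXPECTED_AUDIENCE = "my-client-id"

def validate_id_token(raw_token: str, provider_public_key, expected_nonce: str) -> dict:
    # Rejects bad signatures, wrong issuer or audience, and expired tokens.
    claims = jwt.decode(
        raw_token,
        provider_public_key,
        algorithms=["RS256"],              # pin the algorithm; never accept "none"
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
        options={"require": ["exp", "iat", "sub"]},
    )
    if claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch - possible replay")
    return claims
```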

November 11, 2015 in Security > 140 | Permalink | Comments (0)

6 Things I Learned from Robert Garigue

Robert Garigue was the CISO of Bell Canada and Bank of Montreal. I only met him and heard him speak once, and that was over a decade ago, but I learned a tremendous amount from him. Garigue's insights continue to resonate, and that is an impressive accomplishment, because back then Infosec simply did not have the traction it does today; yet he could see where things were moving and, better still, had great ideas on how to organize security practices to deal with unfolding events.

Here are some of the main things I learned from him.

Finding a way to be effective

Garigue talked about the two most prevalent CISO models - the good cop (jester) and the bad cop. The jester CISO:

  • Sees a lot
  • Can tell the king he has no clothes
  • Can tell the king he really is ugly
  • Does not get killed by the king

Nice to have around, but how much security improvement comes from this? The jester has happy customers! At least for a while.

We have all seen the bad cop CISO:

  • Changes happened faster than he was able to move
  • He did not read the signs
  • His good intentions went unfulfilled
  • A brutal way to end a promising career

Sad to have around, but how much security improvement comes from this?

Obviously these models of CISOs are not solving our information security problems. Instead, Dr. Garigue points us to Charlemagne as a better model:

King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages.

He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.

He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.

He relied on Counts, Margraves and Missi Domini to help him.

Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.

Missi Domini - Messengers of the King.

This is the way forward! Find and grow security champions in the dev, test, architecture and development groups, and help them understand the real security issues. They will find solutions you have not thought of. Same for DBAs, same for business analysts even. Better still, these people will know ways to integrate security into the architecture, and security is all about integration. It's all about beating the bushes, education, and decentralizing security services.

Decentralization

Back in 2004, security did not have the mandate it has today. It was mostly losing 9 of 10 battles. For the one security did win, the most common response was to centralize security services and delivery. In practice, this meant cramming one service after another into a network DMZ and assuming this Helm's Deep approach would somehow help hold Rohan and Middle Earth together. Never mind that Helm's Deep was supposed to be the last resort for the final stand, not the first and only option. Anyway, Garigue saw early on that centralization was not going to scale, and more to the point did not map to how business or technology actually works. Instead, security must establish ways to decentralize security services. In Garigue's thinking, the goal is to find the "locus of control", which could be policy (rules), infostructure (content) and/or infrastructure (technology).

Importance of application, data, and identity in security

When I met Robert Garigue in 2004, the identity and AppSec world was very small - I think you could fit everyone in the space into a mid-size suburban Starbucks. Garigue was way out in front in recognizing that information security was not a separate discipline; rather, its organization should be "the result of the knowledge transfer process."

[Figure: Knowledge]

And so really our job is not just to move along the curve above; it's to map and deliver those security services to knowledge networks.

[Figure: Knowledgenets]

What I really appreciate about this view is that it's not security as a static topology; instead, Garigue shows security working at different levels in a living, breathing system.

Garigue compared Infosec to dentistry: you need a group that specializes and keeps up with the latest developments in the field, but you do not go to the dentist to brush your teeth. Way back in 2004, Garigue said that security teams should eschew running the firewall team - if you have a network, you have to have a firewall. The security team should have a policy and run tests, but setting firewall rules is the digital equivalent of teeth brushing.

The relationship of business and security

Garigue said security and business are not enemies. Security is a business enabler. Because we have brakes in a car, we can drive 60 mph on the highway and get home safely. Anyone who has been in an accident at 25 mph knows that it hurts, so without brakes we could not even drive 25 safely, but with brakes we can move much faster and more safely.

Information Security is fundamentally about integration

This one is my favorite

[Figure: Oxymoron]

Information is messy, unpredictable, and powerful. Security is the illusion that we can partition our systems a priori into known secure and insecure states. The combination of these two opposing forces will always be a debate. Our job in security is to make sure it's a constructive debate and not a destructive one, as it so often becomes.

Coping with risk

Garigue:

"Knowledge of risky things is of strategic value

How to know today tomorrow’s unknown ?

How to structure information security processes in an organization so as to identify and address the NEXT categories of risks ?

This is the mandate of Information Security"

 

 

November 03, 2015 | Permalink | Comments (0)

The Curious Case of API Security

Join Mark O'Neill and me for a webinar this Tuesday. We have two actually, on EMEA and North American time. Register here. The webinar goes through the topics in my forthcoming whitepaper (whitepaper is here), The Curious Case of API Security.

In this project we channel the spirit of Father Brown, Hercule Poirot, and Sherlock Holmes. We approach the top security issues in APIs the same way a detective would. Our basic process is as follows:  

  • Understand the context in which our APIs exist
  • Look for clues of possible vulnerabilities
  • Catalog what tools we use to identify and track vulnerabilities
  • Identify countermeasures that we can use to close out vulnerabilities
  • Provide evidence that can measure the efficacy of the countermeasures

I will briefly summarize the top issues here, but join the webinar and read the paper for the full version. To help crystallize each issue, I have included a representative Sherlock Holmes quote.

1. Unprotected APIs

Most enterprise cores are as soft and chewy as the center of a candy bar. That means that once inside, an attacker has free rein. Therefore, the API layer is a table-pounding, must-have security priority.

2. Weak Authentication

“There is nothing more deceptive than an obvious fact” -Sherlock Holmes

Just because your APIs have authentication does not mean they will withstand scrutiny from an opponent. It's on security engineers to build stronger forms of authentication.
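As one hedged illustration of hardening the most basic piece - credential verification that resists offline cracking - consider storing secrets with a slow, salted hash. The bcrypt parameters here are illustrative, not a recommendation from the paper:

```python
# Illustrative sketch: store and verify API credentials with a slow, salted hash
# rather than comparing plaintext or fast hashes. The cost factor is an example value.
import bcrypt

def hash_secret(secret: str) -> bytes:
    return bcrypt.hashpw(secret.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_secret(candidate: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(candidate.encode("utf-8"), stored_hash)
```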

3. Brute Force

“The world is full of obvious things which nobody by any chance ever observes” -Sherlock Holmes, Hound of the Baskervilles

APIs are particularly vulnerable here because they are always on. Yet most APIs lack basic scrutiny of the type and velocity of inbound requests. These are deemed uninteresting, but as Marcus Ranum says, the number of times an uninteresting thing happens is interesting.
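A minimal sketch of the kind of velocity check most APIs skip - counting requests per client in a sliding window - might look like the following. The threshold, window, and in-memory store are assumptions for illustration:

```python
# Illustrative sketch: naive in-memory sliding-window rate limiter for an API.
# Production systems would use a shared store and tuned thresholds.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_history = defaultdict(deque)  # client_id -> timestamps of recent requests

def allow_request(client_id: str) -> bool:
    now = time.monotonic()
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False   # worth logging: the "uninteresting" event count is itself interesting
    window.append(now)
    return True
```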

4. Injection

“I never guess. It is a shocking habit - destructive to the logical faculty” -Sherlock Holmes, The Sign of Four

Assuming they have access control, many APIs then simply guess that the input is OK and pass it along to the backend. Guessing that input is OK may be habit-forming, and it can destroy your backend systems' integrity.
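A small sketch of the alternative to guessing: validate the input's shape and bind it as a query parameter so the backend never interprets it as code. The table, column, and pattern are made up for the example:

```python
# Illustrative sketch: validate input and parameterize the query instead of
# concatenating strings. Table and column names are placeholders.
import re
import sqlite3

ACCOUNT_ID_PATTERN = re.compile(r"^[A-Z0-9]{8,12}$")

def fetch_account(conn: sqlite3.Connection, account_id: str):
    if not ACCOUNT_ID_PATTERN.fullmatch(account_id):
        raise ValueError("account_id failed input validation")
    # Placeholder binding keeps the value out of the SQL text entirely.
    cur = conn.execute("SELECT id, balance FROM accounts WHERE id = ?", (account_id,))
    return cur.fetchone()
```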

5. Lateral Movement

“As a rule, the more bizarre a thing is the less mysterious it proves to be.” -Sherlock Holmes, The Red-Headed League

Who would have guessed a priori that HVAC systems and payment systems would be linked? But a better question to ask may have been, what is in place to stop any linkage between the systems? Once an attacker gains an entry point what is to stop or even detect follow on actions?

6. Session Promiscuousness

"I had," he said, "come to an entirely erroneous conclusion, my dear Watson, how dangerous it always is to reason from insufficient data"” -Sherlock Holmes, The Adventure of the Speckled Band

For Web apps, session cookies have all sorts of protections that engineers can use to try to provide a margin of safety around them. After all, these cookies are used to mint and convey authority to use the system. For APIs, though, these basic protections against attacks like clickjacking, replay, and other impersonation vectors are usually missing.
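One hedged illustration of giving API calls some of the protection cookies get is to sign each request with a timestamp so replays and tampering can be rejected. The layout and freshness window below are assumptions, not a prescribed scheme:

```python
# Illustrative sketch: reject replayed or tampered API requests by signing the
# method, path, body, and a timestamp. The freshness window is an example value.
import hashlib
import hmac
import time

FRESHNESS_SECONDS = 300

def sign_request(secret: bytes, method: str, path: str, body: bytes, ts: int) -> str:
    message = f"{method}\n{path}\n{ts}\n".encode() + body
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   ts: int, signature: str) -> bool:
    if abs(time.time() - ts) > FRESHNESS_SECONDS:
        return False  # stale timestamp: likely a replay
    expected = sign_request(secret, method, path, body, ts)
    return hmac.compare_digest(expected, signature)
```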

7. Invisible Attacker

“You know my method. It is founded upon the observation of trifles.” -Sherlock Holmes, Boscombe Valley Mystery

Securing APIs does not stop at access control; the API security layer must also try to detect malice. This means logging and monitoring activity to develop the pattern recognition that tells friend from foe.
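A bare-bones sketch of that "observation of trifles": emit a structured event per API call so that failed-auth spikes and odd clients stand out in later analysis. The field names are illustrative:

```python
# Illustrative sketch: structured API access logging so friend-vs-foe patterns
# can be mined later. Field names are placeholders.
import json
import logging

logger = logging.getLogger("api.access")

def log_api_event(client_id: str, endpoint: str, status: int, auth_ok: bool) -> None:
    logger.info(json.dumps({
        "client_id": client_id,
        "endpoint": endpoint,
        "status": status,
        "auth_ok": auth_ok,
    }))
```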

8. Broken TLS/SSL

“Crime is common. Logic is rare. Therefore, it is upon logic rather than upon the crime that you should dwell.” -Sherlock Holmes, The Adventure of the Copper Beeches

While threats get all the attention, vulnerabilities and countermeasures are in fact where security engineers can be proactive. Threats are common, countermeasures are rare. We should spend our time on countermeasures. TLS/SSL has been taken for granted for a long time, but now its efficacy is challenged by BEAST, POODLE, et al. It goes to show how much engineering effort must be sustained to deliver even the basic protections we have today. For APIs, there is the additional burden of certificate management, which is another long-run area of struggle. If the level of attention paid to the talks at hacker cons were given to focused certificate management, we might wind up in a different place as an industry.
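On the client side of an API call, not taking TLS for granted can be as simple as pinning a minimum protocol version and leaving certificate verification on. A small sketch, with a placeholder endpoint:

```python
# Illustrative sketch: an API client that refuses old TLS versions and verifies
# the server certificate. The URL is a placeholder.
import ssl
import urllib.request

context = ssl.create_default_context()            # verifies certs and hostnames by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/early-TLS era protocols

with urllib.request.urlopen("https://api.example.com/health", context=context) as resp:
    print(resp.status)
```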

9. Inversion of Control

“When you follow two separate chains of thought, Watson, you will find some point of intersection which should approximate to the truth” -Sherlock Holmes, The Disappearance of Lady Carfax

The big picture of API security must include the client. Unfortunately, this is an area with few good practices. The server has the DMZ as a pattern; for all its warts, it's a pattern that people know how to use, and we have tools there. The client side is different: it now has data, and whether it is a mobile, cloud, or IoT client, it is doing stuff with that data. How do we protect it? How do we deal with server pushes, where the client in effect becomes the server over bidirectional communication channels?

10. Order of Operations

“The principal difficulty in your case,” remarked Holmes in his didactic fashion, “lay in the fact of there being too much evidence. What was vital was overlaid and hidden by what was irrelevant. Of all the facts which were presented to us we had to pick just those which we deemed to be essential, and then piece them together in their order, so as to reconstruct this very remarkable chain of events.” -Sherlock Holmes, The Naval Treaty

APIs can appear to be a static set of getters and setters, but once they are built into other applications the combinations and permutations can drive unexpected behavior on the enterprise back end. 

11. Trusted !=Trustworthy

“Always approach a case with an absolutely blank mind. It is always an advantage. Form no theories, just simply observe and draw inferences from your observations.” -Sherlock Holmes, The Adventure of the Cardboard Box

The security architect’s most difficult opponent is probably not a malicious attacker. The security architect’s most difficult opponent is probably themselves. Microsoft’s John Lambert says it well: “Defenders think in lists, attackers think in graphs. As long as that is true, attackers win.” The security mindset is most effectively deployed as mistake avoidance, which in practice means not buying into silver bullets, but instead engineering capabilities with margins of safety.

October 25, 2015 | Permalink | Comments (0)

Security Capability Engineering

One of the areas where security can improve, in my view, is to be less audit-centric ("I am just going through the motions because auditors told me to do this and someone else is paying") and more architecture-centric. Architecture means understanding tradeoffs and making choices about what to build.

Because of the infosec industry's audit-heavy past, we have a lot of laundry lists of things to do. That's how auditors operate. Note, I am not against audit as a function - it's very valuable, and we need checkpoints. But it should be a function, not the only or dominant function. Laundry lists do not shed light on priorities, dependencies, or other engineering questions.

Laundry lists tend to be static; they do not map to engineering processes or to how security is operationalized. Gary McGraw, Sammy Migues, and Jacob West recently published BSIMM version 6, which addresses some of these critical gaps. BSIMM goes beyond the audit mindset and highlights key activities and the touchpoints that link those activities into a living, breathing organization. These are the activities that help you make security real.

I see three areas where security engineering can be improved in many organizations and they all have their root in capability engineering. The three properties are efficacy, efficiency, and adaptability.

First is efficacy. It's surprising, but most of the least secure systems in any enterprise are the security systems themselves. Security people often say (rightly) that complexity is the enemy of security, yet when we try to solve security problems we introduce things like Kerberos and X.509. There is nothing in your enterprise anywhere that is as complex as X.509 or Kerberos. It's simply taken for granted that a system purchased or built for security purposes adds security. Security teams are very good at testing apps and systems for vulnerabilities. The same should be done for security mechanisms themselves. In fact, these security mechanisms should be tested more rigorously, because they are a foundational element and they are usually reused. Efficacy can be measured in part using threat models.
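One small illustration of pointing the test effort at a security mechanism itself is a pair of negative tests: the mechanism must reject what it should reject, not just accept the happy path. The token checks below use PyJWT and pytest purely as an example; the keys and names are placeholders:

```python
# Illustrative sketch: negative tests aimed at a security mechanism itself.
# The mechanism under test is a JWT check; keys and names are placeholders.
import datetime
import jwt      # PyJWT
import pytest

KEY = "test-key"

def test_expired_token_is_rejected():
    expired = jwt.encode(
        {"sub": "alice",
         "exp": datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(minutes=5)},
        KEY, algorithm="HS256")
    with pytest.raises(jwt.ExpiredSignatureError):
        jwt.decode(expired, KEY, algorithms=["HS256"])

def test_tampered_signature_is_rejected():
    token = jwt.encode({"sub": "alice"}, KEY, algorithm="HS256")
    with pytest.raises(jwt.InvalidSignatureError):
        jwt.decode(token, "wrong-key", algorithms=["HS256"])
```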

Making security efficient is a challenge. Security teams should ensure that there is an ongoing effort to make security capabilities easy to use and cost effective. This means drilling down on any barriers in security processes and optimizing scarce resources. As Fred Brooks said:

The critical thing about the design process is to identify your scarcest resource. Despite what you may think, that very often is not money. For example, in a NASA moon shot, money is abundant but lightness is scarce; every ounce of weight requires tons of material below. On the design of a beach vacation home, the limitation may be your ocean-front footage. You have to make sure your whole team understands what scarce resource you’re optimizing.

The third property is adaptability. Having a capability is not the same as using it and deploying it widely. The width and breadth of adoption is an important metric to use. A significant percent of the security team's effort and resources should be devoted to making sure that the capabilities are drop dead simple to use in applications. The effort here is often on packaging and building interfaces that can offer security capabilities to a broad set of platforms and users.

 

 

October 20, 2015 | Permalink | Comments (0)

Ought implies can

Information security is a field filled with concepts that are easy to say and very hard to do. I have never read your security policy, but I bet that it contains the principle of least privilege. Conceptually, it's pretty hard to argue with the principle of least privilege. After all, why would you ever provision a user or a function with more privileges than it needs? But while it's easy to say, the only logical next question for an engineer is - OK, how do I deliver that?

Try this sometime, when you say "principle of least privilege" watch all the heads nodding and knowing looks. Of course, of course, yes that is what must be done. Then ask "ok, how do we do that?" Mostly what you get next is umms, ahhs, long pauses, people staring at their hands.

People think security's biggest problem is attackers and that we need better defenses. That's probably true, but it has been true for a while and may stay true a while longer. In the meantime, Infosec teams should consider how they can function better. I think the answers here are not coming from Silicon Valley whiz kids; they are more likely to be found in Immanuel Kant:

"The action to which the "ought" applies must indeed be possible under natural conditions."

If our security policy says we "ought" to do something - principle of least privilege, strong auth, or what have you - then it's on the security architects and engineers to deliver capabilities that can do exactly that.
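As one concrete, hedged illustration of turning "least privilege" from a slogan into something an engineer can deliver, a service account can carry only the permissions it was explicitly provisioned with. The permission model below is entirely made up for the example:

```python
# Illustrative sketch: least privilege as an enforceable capability - each
# service account carries only the permissions it was provisioned with.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ServiceAccount:
    name: str
    permissions: frozenset = field(default_factory=frozenset)

def require(account: ServiceAccount, permission: str) -> None:
    if permission not in account.permissions:
        raise PermissionError(f"{account.name} lacks '{permission}'")

# Provision narrowly: the reporting job can read orders and nothing else.
reporting_job = ServiceAccount("reporting-job", frozenset({"orders:read"}))
require(reporting_job, "orders:read")       # allowed
# require(reporting_job, "orders:delete")   # would raise PermissionError
```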

There is a critical link missing in most Infosec programs: reconciling the security policy and architecture standards with actual architecture and runtime capabilities. Put simply, if something mentioned in policy and standards does not exist in your current capabilities, either you need a project working to close that gap or you should take it out of the policy and standards. It's wasting people's time.

People are exhausted from being on the other end of security assessments. I do not have a solution for that. Security does need to be more involved in architecture, design, and testing. But the nature of our involvement should be dramatically scaled back and focused whenever and wherever possible. Do not drag teams through 37 different policy requirements when there is no way they can address them. It is much better to invest resources and time (yours and theirs) on capabilities that have a chance of seeing the light of day in production.

Then there is assurance:

[Image: Cormac]

 

Feynman integrity dictates that it's on the asserter to first try to disprove their own ideas. Too often security is a dictate, but not one where security teams have done the engineering work to check the capability to deploy, much less the efficacy of the recommendation.

T.S. Eliot wrote "in my beginning is my end"; Infosec regimes often begin with a list of policy and standard thou-shalts. Instead, the beginning point should be a sober review of the existing capabilities. First make sure you know what you can do with those capabilities, take advantage of them, and then go on to filling the gaps that lie between the current state of play and the desiderata.

 

October 07, 2015 | Permalink | Comments (0)

Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security


Some security topics require more than a tweet, so to that end, today on Security > 140 we talk with T. Rob Wyatt (@tdotrob), an independent consultant specializing in security of the IBM MQ messaging family of products. He enjoys being a father and grandfather now and then. In his spare time he does pretty much exactly the same things he does on the job, which is a windfall for his clients but somewhat distressing for his family, who occasionally bust down his office door to stage an intervention. He blogs about MQ security at https://t-rob.net where you can also find links to his presentations and articles.

Gunnar: You’ve spent a lot of years on middleware security with things like MQ Series. We always hear that security is supposed to be risk-focused, but the majority of security budgets go to things like network controls and endpoints. These are some of the least valuable parts of the enterprise. Meanwhile, messaging systems like MQ literally run the most critical transactions, the most critical data and applications. Yet these middleware systems get short shrift - no way do they even get 10% of the focus, budget, and effort of perimeters and endpoints, right? Why is that, and how do you cope with it?

 T. Rob: There are some concrete steps to cope with it that have been effective.  The "why" part is a bit speculative but I'll take a shot.

People who work on Internet-facing servers, web admins, firewall admins, all the folks who deal with the perimeter, know all too well their technologies live in a hostile environment. Their public-facing servers are under constant attack and the technologies to deal with that are well known and mature.  When we decide the internal network is also hostile, applying the familiar controls is not difficult.  The delta in administrative overhead can be so small that in some cases it is easier to apply strong controls everywhere than to manage multiple sets of baseline configurations.

But middleware administrators ran their networks wide open for many years while web technologies were maturing.  In the case of IBM MQ (formerly WebSphere MQ, formerly MQSeries) the product has had TLS connections for many years but only recently have users been implementing it in earnest.  With little demand and even less actual implementation, there was almost no feedback to IBM as to what worked well, what needed fixing, and what was missing.

The perimeter admins can show logs and other evidence of constant threat.  Meanwhile, there have not been any publicly confirmed breaches of MQ.  We've had confirmed interceptions of data in transit on the intranet but even so it is harder to convince anyone of the threat to middleware on the intranet than to the perimeter.

The "what do you do" bit is a bit easier and more optimistic.  When I joined IBM in 2006 they were gracious enough to let me write articles and speak at conferences with a message that wasn't entirely flattering: MQ is delivered wide open by default, it's tough to lock down, and you (the users) aren't doing it right.  I also have written and delivered education sessions for auditors and PCI assessors to let them know just how vulnerable many of these middleware networks are and how to properly assess them.  

I've been beating that drum for almost 10 years now and it's moved the needle.  Enough MQ customers began enabling security that many of the latent potholes were mapped and repaired.  For example, I used to tell customers to stop making everyone an MQ admin and also to enable and monitor authorization events.  The first customer to actually do so immediately thought they were under attack when they started seeing tens of thousands of authorization errors a day.  It turned out that one MQ system queue could only be displayed by an admin.  When any low-privileged user asked MQ to enumerate the queue names - in other words they opened MQ Explorer to the Queues screen - it threw the auths error.  

In a network with hundreds of users and MQ nodes that result is obvious in hindsight once you know how it works, but literally nobody had ever done that before.  Thankfully we seem to be well past that stage now and it is much more common when I go to a new client to see the network running TLS channels and the connections authenticated.  One of my conference presentations from MQTC last week reflects that progress.  It is called "Beyond Intrusion Prevention" and asks MQ admins to consider intrusion detection, mitigation, recovery, and forensic analysis. (http://t-rob.net/links) 

Now that more customers are attempting to use MQ's security features for routine intranet links and IBM has lots of real-world feedback, momentum has been building.  The last three releases of MQ have featured major security enhancements, including that it is now shipped secure by default.  The MQ admin must explicitly provision *any* remote access, and a higher level of provisioning is required if that access is to be administrative.  As you noted, much of the world's critical infrastructure runs on MQ so this is a major win for, well, everyone.

GP: A big challenge for organizations is something I call the “it's perfect or it's broken” mindset. If we think about the externally facing DMZ, that usually represents about the best an organization can do in terms of security. But many places have a hard time pivoting to “OK, what does my ‘internal’ security model look like?” In other words, if I cannot have all the dials turned up to 11 like I do on the DMZ, do I even have a security model? As you say, it used to be a total afterthought, but it has started to get better. I think one trap to avoid is that perfect-or-broken binary mindset. I noted that one of the goals in some of your design patterns is ‘containing the blast radius’ - can you describe that a bit and how it applies to middleware specifically?

 T. Rob: I don't run into the perfect-or-broken mindset quite so much but again it may be because the middleware crowd haven't considered the intranet a hostile environment.  A client once said to me with a straight face "It's the trusted internal network. That's why we call it that."  Normally I have the opposite problem and have to overcome a belief that security ramps up as a function of the amount of effort expended on it rather than as a function of having addressed a specific set of threats. Case in point - MQ networks today commonly have TLS configured on their channels but do not authenticate the certificates. If the customer uses the same CA that I do, sometimes I demo the problem by connecting to their MQ node using my own personal certificate.  Whoops.

Because of this you may hear me from time to time telling audiences that having a little security is like being a little pregnant.  If people compared my advice with yours they might conclude we were saying opposite things.  I think we are actually nudging our respective communities toward the same goal, but working with communities of practice who come at the problem from very different starting points.

The blast radius analogy is useful because it is memorable and perfect for the principle I use it to illustrate.  I included one slide showing instances where we use controlled explosions for specific useful purposes - nail gun, internal combustion engine, fireworks - and another slide with an X-ray of my own hand after an explosion took a finger off of it.  If we think of a breach as having a "blast radius" then enabling the controls already in the product that might contain the blast seems obvious.

That slide deck makes the argument to the middleware community that perimeter security, which many shops have now spent some time on, is not the end of our journey.  I want to set the expectation that there's more work to be done so I ask the audience to consider blast containment and several other topics posed as questions.

If your MQ network was breached what is the probability that:

  • The incident would be detected by the monitoring in place today?
  • The controls in place today would prevent the breach from compromising adjacent nodes?
  • There is an effective security incident plan in place and the key players are prepared to execute it?
  • The scope and impact of the breach could be reliably, accurately and quickly identified?
  • Affected business applications could be resumed quickly and safely?
  • You would have sufficient forensic data to perform post-breach analysis to successfully identify the source?

Many shops now routinely use TLS to protect the MQ network from unauthorized connections but continue to allow administrative access from all adjacent legitimate nodes. Compromising one node on such a network grants administrative access to the entire network.  Configuring an MQ queue manager to refuse administrative commands from an adjacent node is quite easy to do and is an effective means of "containing the blast radius." Rather than treat this as an afterthought, for which they need to spin up a project and perhaps get funding, I want MQ admins to consider it early and include it in their security baseline configuration for MQ.  
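For readers who want to see roughly what such a baseline rule looks like, the sketch below pipes MQSC into runmqsc from Python. The queue manager and channel names are placeholders, and the CHLAUTH rule shown follows IBM MQ's documented pattern for blocking administrative users on inbound channels - verify the exact syntax against the MQ version you run before relying on anything like this:

```python
# Illustrative sketch: apply a "no remote admin" CHLAUTH rule by piping MQSC
# into runmqsc. Queue manager and channel names are placeholders; check the
# rule against your MQ version's documentation before use.
import subprocess

QMGR = "QM1"
MQSC = """
ALTER QMGR CHLAUTH(ENABLED)
SET CHLAUTH('APP.SVRCONN') TYPE(BLOCKUSER) USERLIST('*MQADMIN') ACTION(REPLACE)
"""

result = subprocess.run(["runmqsc", QMGR], input=MQSC, text=True,
                        capture_output=True, check=True)
print(result.stdout)
```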

I spend the most time in the presentation on breach mitigation because it is closely related to perimeter security MQ admins are familiar with and an area where we are likely to be able to make a lot of progress quickly.  But I also queue up several other security topics I hope will stay on the radar.

Obviously, some of the questions I pose describe goals that would be challenging for administrators of any technology and I don't pretend we have mature solutions for them in MQ. My goal here is simply to get people thinking about how they might accomplish some of these goals and which of the existing controls might be used to do that.  Once we start that conversation as a community and start playing with the tools we have today we'll be able to see better where the gaps are and work with IBM to address them.  

As an example, the more the community discussed archiving error logs over the years, the more functionality IBM provided.  We've gone from patchy support for tuning log sizes to formally defined, consistent controls over the last few releases.  Enough people now use this functionality that there's demand for more fully realized log management.  At last week's conference IBM's product architect mentioned they are already working on more log enhancements for the next release. Momentum is great.

GP: Middleware systems are in place because they need to bridge environments. When you try to implement access control in them, you typically have to deal with a minimum of 2-3 different security domains and credential types. This is one factor that’s led people to punt to a lowest common denominator - “Well, the mainframe supports 6-character passwords, so I guess that’s what I’ll use on the front end web server” kind of thing. What’s a good way to find the highest common denominator for the service requesters and service providers?

 T. Rob: Try to contain your laughter to a dull roar when I tell you that I have no freaking idea.  I have been working with IBM MQ and its family of related middleware products exclusively for twenty years.  Almost everything I know about security was learned in that context.  

So what is that context?  The base product has no message hashing or integrity assurance features.  The message context, which includes a user ID and a few other fields, is used to pass identity around, but the thing consuming the message has to trust the provenance.  There's an add-on product called Advanced Message Security or AMS for short that provides signing and encryption of messages, but even this product doesn't provide assurance of identity carried in the message headers.  The identity assured by AMS is carried in the payload's AMS signature which has no relation whatsoever to the value of the ID in the message header.

So you see, it isn't just MQ users who have concentrated on the perimeter.  The security model of the product itself is that of a hard crunchy perimeter with a soft gooey center - what I like to call the Catbox Crispy Security Model.  We trust the message headers because we have to, and we trust the perimeter security to keep the headers trustworthy.  If the assumption of the strong MQ perimeter is bad, then all bets are off.  That's the soft gooey center.

From a pure MQ standpoint the community is very pleased with the progress in the product version-to-version.  It really does keep getting better, and new security features arrive generally at a pace faster than the MQ community is prepared to adopt.  Within our little island, MQ security is the best it's ever been and it's all very well received.  However from the outside looking in, and especially when someone who administers an Internet-facing technology is doing the looking, this all seems very rudimentary.

It's not just MQ, by the way.  The JMS specification makes provision for identity fields but none for controls to assure the integrity of those fields.  There is no JMS-defined class of service, for example, that signs a message's headers.  From a middleware design standpoint, identity and policy are related to the connection and not the message.  There is some attempt to connect the two but not with assurance or auditability.

There are of course several IBM middleware technologies.  MQ Extended Reach implements the MQTT protocol.  MQ Light implements AMQP.  WebSphere App Server has an all-Java implementation of JMS that interoperates with MQ.  Integration Bus is the Universal Translator of the bunch and can communicate in a huge number of protocols.  But there is no common identity or policy repository across these platforms.  If you want to propagate identity through the middleware, you will need to do it yourself and most likely do it in the message payload.

To be fair, as of v8.0 which was released in June of last year, MQ will validate a user ID and password provided to it in a connection request, and can do that against the local operating system, LDAP or Active Directory.  If MQ authenticates against the same identity repository as, say, the web application server, then we at least have the same identity repository.  We still do not have assurance that the value in the message header identifies the entity who put the message, or that it hasn't been changed, and there is no formal mechanism for reliable identity propagation and no common repository for authorization and policy.  But ID and password validation is progress and I'll take it happily.

All of which makes your question absurdly easy to answer in an MQ context, although perhaps not terribly comforting.  The highest common denominator for identity propagation is that the user's ID and password can be validated against Active Directory or LDAP.  Assuming that's where the Enterprise accounts are held, and that the perimeter security is sufficient to assure integrity of the ID in the message header end to end, then we have a winner.  

Meanwhile, the highest common denominator for authorization policy is zero.  MQ access control entries are managed by MQ's Object Authority Manager.  These are not even common across an MQ network.  The administrator is obliged to define access control entries on a per-queue manager basis, even when those objects share a common namespace in the MQ cluster.

As you can imagine, I stay quite busy trying to show my clients how to manage this at scale.  I suppose I could stop trying so hard to get new features implemented and just enjoy the work, but I'm the one who has to explain all this and then I'm the one who gets beat up over it.  The steady work is great but I'd like it a lot more without the beatings.

GP: For the companies running TLS, do they usually have an internal CA setup, or do they do those custom for the MQ project? Do you run into any certificate lifecycle management challenges when you go operational? 

T. Rob: OK, now you are intentionally provoking me, right?  Cert management is the biggest single issue in MQ-land that I see.  My recent conference presentations are on certificate management in one way or another and I wrote them as a response to this specific issue.

The latest one is about the use of commercially signed certificates with MQ. Most CAs have tutorials for a plethora of different types of servers but nothing specifically about MQ.  Even an admin who manages certs for other technologies won't find much useful guidance for MQ.  That little validation step browsers do to check the URL against the CN and SAN?  MQ doesn't do it.  So what *are* the requirements?

My "Managing CA Certs for MQ" presentation explains the different types of validation (any domain validated cert will do) and granularity options (don't bother with wildcards or UCC for MQ).  At the end I also provide a step-by-step implementation procedure to cut down drastically on setup issues.

The most popular presentation though is "How To Build Your Own Certificate Management Center of Mediocrity."  The title is a play on the too common approach to security of implementing it gradually across the environment rather than implementing comprehensive security models against smaller subsets of the estate.  Rather than aspire to "Center of Excellence" level of security management, there's an assumption that it can be ramped up gradually, and that the effective level of security ramps up proportionally.

Unfortunately, the level of effective security achieved doesn't ramp up in proportion to effort, for a variety of reasons.  Obviously, leaving out critical controls is one issue, but "anti-lock brake syndrome" plays a big part too.  When anti-lock brakes were first introduced they did not result in the crash reduction that had been forecast, because people who had them felt safe enough to reduce their driving safety margins and engaged in riskier behavior.

Something similar can happen in shops after the MQ security implementation is complete.  At that point it is assumed to be secure enough for any data, including PII, PHI, PCI, etc.  But on inspection it is common to find applications and users running with full admin rights, little or no application isolation, the ability for legitimate users and apps to spoof one another's IDs and even anonymous remote administration.

A strong indicator of trouble is when that event is referred to as "*the* security implementation" as in "oh, we addressed that during the MQ security project."  The wording reflects an understanding that this was a once-and-done event.  

 

The How To Build Your Own Certificate Management Center of Mediocrity presentation is basically a retelling of some of the worst horror stories. The one I always tell is the large credit union where I asked whether they use CA signed or self-signed certs.  The admin told me self-signed so I started explaining how it works.  When he interrupted me to explain he needed signing requests to send to their internal CA we paused while I explained what self-signed means.  

 

When I asked how long it takes to turn around a signing request I was told it just takes a few minutes but we'd have to wait until "the guy" came back from lunch.  

"The guy," I asked?  "This is the internal CA that supplies ALL certificates used by databases, app servers, backup jobs, Active Directory credentials, and that the business is literally betting its continued existence on and it all depends on one person?"

"No, there are about 5 people who can run the internal CA.  But he has the laptop with the root cert with him."

 Great. No doubt it's in the food court at the mall right about now and logged onto the open Wi-Fi. 

All my presentations are linked from the same index page, by the way:

https://t-rob.net/links/ 

Over the years I've pitched the idea of lightweight certificate management to IBM, including during my stint as an MQ Product Manager, and to various vendors in the MQ 3rd party ecosystem.  None have taken me up on it so now I have a business partner writing the code, I'm doing specification and design and we are using Avada's IR-360 as the integration platform.  We do not intend to provide a complete PKI, but we will manage the public certificates centrally, manage the personal certs in situ, and provide expiry reports and lifecycle management.  While we are at it we will go a bit outside the cert management space and monitor MQ configuration settings using IR-360's native MQ management and reporting capabilities.
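To give a flavor of the expiry-report half of that, here is a small sketch that reads a PEM certificate and flags approaching expiry using the cryptography package. The path and warning threshold are illustrative; a real report would walk every keystore in the estate:

```python
# Illustrative sketch: flag certificates that are close to expiry.
# Paths and the warning threshold are placeholders.
import datetime
from pathlib import Path
from cryptography import x509

WARN_WITHIN_DAYS = 30

def check_cert(pem_path: str) -> None:
    cert = x509.load_pem_x509_certificate(Path(pem_path).read_bytes())
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    if remaining.days < WARN_WITHIN_DAYS:
        print(f"{pem_path}: expires in {remaining.days} days "
              f"({cert.subject.rfc4514_string()})")

check_cert("certs/queue_manager.pem")
```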

So, yeah, I run into a LOT of cert lifecycle management issues.  The tools available are either Enterprise-grade PKI or stone knives and bear skins.  In between is a barren wasteland right now, but I believe that with a decent product in that space we would see a lot more MQ security deployed in the field.  My goal is to bring the cost down so low that people will secure the non-Prod environments.


GP: You say your goal is to see security applied in non-Prod environments.  How do you see this integrating into the Software Development Lifecycle?

T. Rob: Let me put it in perspective of how we develop business software applications today.  Typically the environments include some version of Development, Integration Test, User Acceptance Testing, QA, Performance Test, and eventually Production.  The idea is to systematically identify and resolve software defects through a well-defined process that minimizes risk of a variety of negative impacts.  Business application software is revenue producing and customer-facing so this rigorous and disciplined process is a no-brainer.

Now consider how we approach the infrastructure management, monitoring and security tools.  There is no question that these are required in Production so we put them there.  But they are not revenue producing and not customer facing so the non-Production environments have less risk and we tend to deploy fewer tools the further we get from Prod.  Many shops deploy infrastructure and security tools only in Prod.  To the extent we do deploy into non-Prod, it tends to look like an extremely shallow inverted pyramid.  There are many consequences of this but there are two in particular with which I am concerned.

The first of these, and we've all seen it, is that the highly visible project deployment fails due to security errors.  Rather than back it out, we disable the security controls over which it is tripping and deploy.  There are many things we are willing to miss a deployment date over but security isn't one of them.  This is never more true than with the highly visible, mission-critical projects that ironically are the ones we really are betting the business on.  

If we currently have no certificates in Dev then adding them reduces our risk, even if we automate the management and use self-signed certificates in that environment.  I believe that if the incremental cost of a single certificate at a single queue manager approached zero, it would be hard to argue for not using them.  I want security on in all environments, beginning with Dev.

The second consequence of this Prod-down approach to security is a little more abstract but a lot more significant.  The infrastructure management and security tools are not revenue-producing applications, but they *are* applications.  Because they ensure the integrity and availability of the environments that run the business applications, they are just as critical as the most critical of the business applications.  So if the first time the MQ admin turns on TLS channels, CHLAUTH rules, cluster security, or granular authorization is in Production then there has been no opportunity to test these things beforehand.  

The same business case that says we can't afford monitoring agents, certificates and granular authorization in non-Prod environments also guarantees that any defects with our implementation or configuration of these things occurs in the Prod environments, and they are not trivial to administer.

The Prod-down strategy is a deliberate choice to bypass the rigor and discipline of the software development lifecycle and bear all the risk of infrastructure and security defects in the one environment where any defect is customer-facing, reputation-risking, cash hemorrhaging, and potentially business-ending.  

Described in these terms, the average person wonders why security isn't turned on in all environments all the time.  It takes a special kind of logic to justify not doing so, and when we write that logic down we call it a business case.  Without a revenue stream, there is nothing against which to offset infrastructure and security expenditures.  They are perceived almost entirely as "stuff that reduces profit" and deployed with great reluctance.  My goal of getting the incremental cost of a single cert on a single queue manager down to near zero is my attempt to address this problem head-on.

 "I want to move the risk of infrastructure and security defects out of the one environment where any defect is customer-facing, reputation-risking, cash hemorrhaging, and potentially business-ending."

 "We do that today?  Seriously?  Never mind.  What's that going to cost?"

 "With the degree of automation I want to deploy, there's a year-over-year operational cost per certificate of bit more than zero."

 "I'm sorry, how many zeros?"

 "Just the one."

 Hopefully that business case is a bit easier to make than the one for what we do today.  Of course, automation doesn't relieve anyone of the responsibilities of a competent security architect.  But if we offload the bulk of the grunt work to automation, that person is freed up to design and update the security model, manage the policies, and validate that everything is working the way it should.  For example, if you need to update a signer certificate across a few thousand endpoints, doing it by hand is time consuming, error prone, and expensive.  I propose not only to automate that, but to do so in Dev so the business case to deploy certificates there becomes a slam dunk.
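
As a rough illustration of that automation, assuming nothing about your actual tooling, the rollout-and-verify loop might look like the sketch below. push_cert() is a deliberate placeholder for whatever distribution mechanism you use, and the endpoint names, port, and file path are invented.

```python
# Hedged sketch: roll a new signer (CA) cert to many endpoints and verify.
import socket
import ssl

NEW_SIGNER = "new-signer-ca.pem"                                 # illustrative path
ENDPOINTS = ["qm-dev-01.example.com", "qm-dev-02.example.com"]   # illustrative hosts

def push_cert(host: str, pem_path: str) -> None:
    """Placeholder: distribute the signer cert via your config management tooling."""
    print(f"would push {pem_path} to {host}")

def handshake_ok(host: str, port: int = 443) -> bool:
    """Confirm the endpoint presents a chain that validates against the new signer."""
    ctx = ssl.create_default_context(cafile=NEW_SIGNER)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for host in ENDPOINTS:
    push_cert(host, NEW_SIGNER)
    print(host, "OK" if handshake_ok(host) else "FAILED")
```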

October 05, 2015 in Security > 140 | Permalink | Comments (0)

Privilege User Management Bubble?

About five years back I might get one question every month or two on tackling privileged users. Nowadays, it's more like one or two every week. I imagine that ten years ago or so the whole product space was about five guys in a garage in Israel. Why the change in uptake and urgency? This is not a new problem; it has been a large scale problem for at least two decades. Why is it now top of the agenda? I assume that the CEO of CyberArk (which recently did an IPO) has a picture of Snowden above the bed that they bow to and thank each night.

In 2011, CyberArk had $36M in revenue; in the last 12 months it was $134M. That is serious growth for an area that was an IT Security backwater for its entire existence.

The knock on effects are quite important for companies to understand. On the demand side, 5-10 years ago the only companies that invested in Privileged user stuff were the usual suspects - large financials and the like. Nowadays, the demand is widespread; bog standard Fortune 500 companies with a tiny fraction of the IT Security staff and budget are barreling down the Privileged user security path.

That brings us to the first problem: the tools were originally developed for deep pocketed institutions with a small set of use cases. Now these tools are aiming to address the Fortune 500 writ large and purport to take on a whole new set of use cases. Can they? Technically, sure. In reality, maybe not.

The next problem is people. Sure, we can see the revenue tripling in a short amount of time, but does anyone think there are 3x as many Privileged User pros in the market? Probably not. Five years back, the main complaint about Privileged User management systems was that they worked for a small set of use cases but were expensive. It was rare to hear of failed deployment projects. Now they are being pushed into different kinds of companies, with smaller IT security teams, trying to handle a wider range of use cases, and we are starting to hear about failed deployments.

So how should people tackle this? It's a longstanding problem, so I am very glad to see companies finally getting around to taking a run at solving privileged user security problems. And in fact the tools in the space are quite capable, with an important caveat. The caveat is, as Charlie Munger says, "you have to stay within your circle of competence. It's not a competency if you don't know the edge of it."

Too many companies look at privileged user management as a quick win. They pull up ye ol' Magick Quadrant, pick the one that scores the best, and assume that in 2 months their decades of security holes - poof - vanish! Privileged user management is an important problem to address and there are good solutions out there, but this is not a quick win space. It's much more a three yards and a cloud of dust, incremental win space.

People to deliver are at a premium. Internal ownership can be fuzzy - is it a security thing? An ops thing? An identity thing? Do you even have an identity team? Also, who watches the watchers?

Rolling it out is cumbersome. I recommend small chunks of use cases and platforms, probably one at a time. The users you are impacting are very busy and critical resources. There is the potential for emergent chaos because many users have IDs and passwords hard coded into shell scripts and configs. The Old Testament security folks will say "well, those deserve to break." Maybe they do, but stepping into the void and taking unknown risks on critical systems is not a recipe for handling the Availability A of CIA.
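
One low-tech way to take some of the surprise out of that risk is to inventory hard-coded credentials before the cutover. A crude sketch follows, with invented paths and deliberately simple patterns that would need tuning for your environment:

```python
# Hedged sketch: flag likely hard-coded credentials in scripts and configs
# before a privileged-user tool cutover. Paths and patterns are illustrative.
import re
from pathlib import Path

SCRIPT_DIRS = [Path("/opt/batch/scripts"), Path("/etc/app/conf")]  # illustrative
PATTERNS = [
    re.compile(r"(?i)\bpassword\s*[=:]\s*\S+"),
    re.compile(r"(?i)\b(user(name)?|uid)\s*[=:]\s*\S+"),
    re.compile(r"-p\s*\S+"),   # crude catch for CLI password flags
]

def scan(path: Path):
    """Yield (path, line number, snippet) for lines matching any pattern."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in PATTERNS):
            yield (path, lineno, line.strip()[:80])

findings = [hit
            for d in SCRIPT_DIRS if d.exists()
            for f in d.rglob("*") if f.is_file()
            for hit in scan(f)]
for path, lineno, snippet in findings:
    print(f"{path}:{lineno}: {snippet}")
```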

Misconfiguration is a problem; the tools have to speak all manner of arcane protocols. Plus, the Privileged user tool itself is a major target. It's not clear that the end deployments are as hardened as they could be.

Some goals - first, do not tackle all systems and use cases out of the gate. Be conservative in what you take on. Start with a basic jump host, then password management, then session monitoring, then strong auth. This is just an example, but the key point is that each should be a mini project and not all attempted at once. Each project should be very tightly scoped and focused, with detailed requirements, ownership, and solid test cases.

It's a long-standing problem, but proceed carefully so that you are reducing rather than introducing (financial, infosec, or career) risk.

October 02, 2015 | Permalink | Comments (1)

The part where security products solve the problem

There is a burgeoning issue, a fatal flaw, in Infosec processes that has the potential to increase downside risk to companies, and that issue is found at the intersection of assumptions and security products.

Robin Roberts, former head of R&D at NSA, said "the problem is you are dealing with systems that have all sorts of assumptions built in, but you cannot query the system about its assumptions, it does not know what they are."

Andy Steingruebl and I tackled some of the security aspects here in "Software Assumptions Lead to Preventable Errors," where we discuss making implicit assumptions about expectations explicit at Design Time and Run Time.

What happens in most Infosec teams is that there is basically this:

Compliance Spreadsheet -> Security Product -> Check Box

That is fine as far as it goes; everyone needs to work to pass their audits and I get that you have to measure processes. But there is a fatal flaw here: the quality of the security product itself. Veracode's analysis showed that security products are more likely to have vulnerabilities than any other kind of software. Worse than travel apps, worse than HR.

The fact that we are putting out fires with gasoline does not get nearly the attention it deserves; it should be a top of mind issue. It's not just that these are vulns, or that we are marinating in vulns. It's that the vulns in a security product impact the very foundation of the security architecture.

Simply put, security products should be rigorously tested, more so than applications and systems. Why? Because it's their job to defend them, and they are frequently leveraged across applications. Leverage drives down cost but increases correlated downside risk.

Pretty much everyone does some code and host scanning. Basic security testing is table stakes for applications. In reality, not only do most organizations not scrutinize security products to the same degree as webapps, they do not even scan them at all! 

This has huge knock on effects, because it attacks the security architecture itself. As Matt Bishop says "The structure of security is built upon assumptions." 

Instead of only finding holes in others' code, security teams should work like Charles Darwin. Darwin had an average IQ, but he became a world-changing scientist because he wore an intellectual hair shirt. He was always trying to disprove his own ideas. He rigorously challenged his own thinking.

How can we put such an idea into practice in Infosec? We spend so much time on Infosec's problems, but Infosec has some strengths too. One of them is testing. Security teams can excel at testing just as much as any other area of software engineering.

Gauntlt and Security Monkey style tests are a great way to verify each and every assumption in your standards. I used to say that if you do not have a readily available implementation to solve something, then do not put it in a standard or policy. Now I say that if you do not have a readily available test to verify that it works, remove it as well.

Take POODLE and BEAST and the litany of TLS/SSL issues. Could infosec teams have predicted them all? I doubt it. But that is not a reason not to test the protocols themselves.  There are excellent, freely available tools like SSLyze and SSL Labs to uncover these. Companies should have a test suite that checks for known issues - poor cert validation, hostname mismatch, and so on.
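
As one hedged illustration of what such a suite might contain, the sketch below uses nothing but Python's standard ssl module to verify the chain and hostname and to probe whether a server still accepts legacy TLS versions. The host name is a placeholder, and local OpenSSL policy may already refuse the old protocols, so treat it as a starting point rather than a replacement for SSLyze or SSL Labs.

```python
# Hedged sketch of an in-house TLS check: validate chain + hostname, then
# probe for legacy protocol support. HOSTS is illustrative.
import socket
import ssl

HOSTS = ["www.example.com"]  # illustrative

def check_modern_handshake(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()          # validates chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(host, "negotiated", tls.version())

def accepts_legacy_tls(host: str, port: int = 443) -> bool:
    # Probe whether the server will still complete a TLS 1.0 handshake,
    # the kind of finding the POODLE/BEAST era turned on. Note that the
    # local OpenSSL build may refuse TLS 1.0 regardless of the server.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for h in HOSTS:
    check_modern_handshake(h)
    print(h, "accepts TLS 1.0:", accepts_legacy_tls(h))
```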

This exact same approach should be applied across the security foundation. Does the RBAC work as expected? Do you have separation of duty test cases? Does strong auth work, or can it be bypassed?  Does the logging and monitoring system actually recognize attacks? The way out is to extend assurance techniques to the security foundation. As Brian Snow said, the worst position is not to be insecure, it's to think we are secure and act accordingly when in fact we are not secure.
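
To show what an assurance test for the security foundation itself can look like, here is a self-contained toy example of a separation of duty check over an RBAC model. The roles and permissions are invented; in a real suite the same assertions would run against your actual authorization service.

```python
# Hedged, self-contained sketch: does the RBAC model enforce separation of duty?
ROLE_PERMISSIONS = {
    "payment_clerk":    {"payment:submit"},
    "payment_approver": {"payment:approve"},
    "admin":            {"payment:submit", "payment:approve"},  # SoD conflict
}

def permissions_for(roles):
    return set().union(*(ROLE_PERMISSIONS[r] for r in roles))

def violates_sod(roles):
    """A single principal must not hold both submit and approve."""
    return {"payment:submit", "payment:approve"} <= permissions_for(roles)

def test_clerk_cannot_approve():
    assert not violates_sod(["payment_clerk"])

def test_admin_role_flags_sod_conflict():
    assert violates_sod(["admin"])

if __name__ == "__main__":
    test_clerk_cannot_approve()
    test_admin_role_flags_sod_conflict()
    print("separation-of-duty checks passed")
```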

September 16, 2015 | Permalink | Comments (0)

Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Information Security is not a proposition where you come in on a Monday morning brimming with confidence that you have solved, or will solve, all the problems at hand. It's true that Infosec has a lot more visibility up and down the enterprise and we win a lot more arguments than we did in the past. But even though 2015 feels like the year the security dog caught the car (my friend Carlos says that we did not so much catch the car as get strapped to the roof), there is a lot of catch up to play. Like a decade's worth.

At the same time, it's important to avoid the "it's perfect or it's broken" trap that derails so many security efforts. There have been so many security decisions over the last decade, bets that have been made, that now have to be revisited in light of the current, shared understanding of the threat landscape. So our job in part is not only dealing with new threats, it's also deciding which of the older decisions we now need to overturn or fix, and at what cost. Is it time to unflatten the network? Take a real run at AppSec assurance processes? Threat modeling?

For example, Privileged user management has been a problem in infosec for at least three decades; now, post-Snowden, people are scrambling to tool up. Five years back, you would see CyberArk deployed in financials for a tiny handful of high risk use cases. Now every company is not only trying to deploy, they are also attempting to scale privileged user management widely across their organizations. In the process they are dusting off use cases (and risks) that have been resident in the organization for a very long time. Now we have companies with a tiny fraction of the security budget and team trying to tackle complex use cases and deployments beyond anything they have attempted in the past.

Bottom line is that we're in a new world of Infosec, with companies trying to push the envelope on what security tools can accomplish. Four factors play a role in navigating this new world in a practical way that yields forward progress - Decentralization, Testing as Input, Rational Budgeting, and Integration.

Decentralization

Governance can be used to mean many things. I like to keep things simple; I think of governance as who does what. Security is a chain of responsibility, so it's critical to understand who is responsible to design the event message format, write the log sensors, integrate the log sensors, test the log sensors, manage the repository, write the reports, tune the log sensors, operate the log system, and respond to the events. Even for something seemingly simple like logging, there's a wide range of activities to wrangle into a cohesive structure. It's a lot more than "who owns SIEM?"

Decentralization is a major factor going forward. We have been trying to raise awareness for so many years in this business that now that we actually have some executive mindshare, the knee jerk reaction is to build the security department of your dreams and centralize everything. The problem is that centralization doesn't scale, and in terms of cleaning up years of previous infosec decisions, scale is a major factor.

Effective decentralization is not abdication or giving people enough rope to hang themselves. Instead it's about clear technical guidance, specific security checklists for mistake avoidance, training, and testing to close the process loop. Don't try to do too much; do less and help others do it better.

Testing as Input - Design Like an Attacker

John Lambert wisely counseled that defenders think in lists and attackers think in graphs, and as long as that holds, the attacker wins. If you think about the DMZ, it's sort of the one area of the enterprise that security was allowed to design. The normal rules don't apply to the DMZ; at the same time, pretty much nothing functional gets deployed there. You would think that would give security architects pause, but mostly it hasn't.

A normal defender has a list of, say, 15 things they do security-wise in the DMZ - firewall, WAF, IPS, bla bla - and if you ask them their goal for the next year they say, well, I want to do 17 or 18 things in the DMZ. Now think about getting to the chewy center from an attacker point of view. All that time and money goes into getting from 15 to 18 capabilities in the DMZ; the defender list gets longer. What about the attacker? One hop to the DMZ plus one hop to the chewy center. It's still a two hop graph.

One problem in the defender list analysis is that the input is the defender's mental model of the system, the distilled essence of the whiteboards, Visios, PowerPoints, repositories, and such. Instead the input should be based on scans, tests, and an attacker graph based view of the system. This is not about having your opponent determine your security goals, but it is about having the available attack vectors determine the security focus.
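
To make the lists-versus-graphs point concrete, here is a tiny sketch that models reachability as a graph and counts the attacker's hops to the crown jewels. The nodes and edges are invented for the example; adding a 16th control to the DMZ list does not change the hop count, while removing or hardening an edge does.

```python
# Hedged sketch: model reachability as a graph and measure the attacker's path.
from collections import deque

EDGES = {
    "internet": ["dmz_web"],
    "dmz_web": ["crown_jewels_db"],   # one exploitable trust path inward
    "crown_jewels_db": [],
}

def shortest_path(graph, start, goal):
    """Plain BFS over the reachability graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = shortest_path(EDGES, "internet", "crown_jewels_db")
print(" -> ".join(path), f"({len(path) - 1} hops)")   # still a two hop graph
```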

Rational Budgeting

Simply put, while the numbers have gotten larger, the relative percentage allocation of security budgets is mostly stuck in a network security era. The main threats have to do with data, identity, and applications, yet the majority of spend still goes to infrastructure security. Infrastructure security obviously matters, but it does not trump the value of assets like data, identity, and application functionality. Many of the long standing problems in security are in these layers - how long have we tried to get rid of passwords, or strengthen authentication, or share identity in a secure way, or have a fine grained access control system, or even just classify data? Tackling problems like these (and they won't just go away on their own) requires more oomph.

Integration

For the most part, we do not have security problems. We can look at an asset, look at the threats and vulnerabilities, and figure out the countermeasure(s). Where we really struggle is - how in the world do we integrate the countermeasure? Will we break the app? Will the CSRF nonce break the stateless model of the app? Will we break the user experience?

On top of that, the security countermeasures themselves do not play all that well together. Strong auth is an island unto itself, so is regular auth, so is VPN, so is authorization (well, actually that's embedded in the spaghetti code), so are roles, so is logging. Security capabilities have to do way better at first mile (user) and last mile (resource) integration, and we have to improve how the security capabilities integrate with each other, otherwise we don't get the scale and coverage.
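
Coming back to the CSRF nonce question above, one way to answer it without forcing server-side state onto the app is a token the server can verify statelessly, for example an HMAC over the user id and a timestamp. This is an illustrative sketch of that idea, not a recommendation for any particular framework, and the key handling is deliberately simplified.

```python
# Hedged sketch: a stateless CSRF token (HMAC over user id + timestamp),
# so the countermeasure does not require server-side session storage.
import hashlib
import hmac
import time

SECRET = b"rotate-me-out-of-band"   # illustrative; keep real keys in a secrets manager
MAX_AGE_SECONDS = 3600

def issue_token(user_id: str) -> str:
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{mac}"

def verify_token(user_id: str, token: str) -> bool:
    try:
        ts, mac = token.split(":", 1)
        age = time.time() - int(ts)
    except ValueError:
        return False
    if age < 0 or age > MAX_AGE_SECONDS:
        return False
    expected = hmac.new(SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

print(verify_token("alice", issue_token("alice")))    # True
print(verify_token("mallory", issue_token("alice")))  # False
```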

These four factors are not a complete set of things sufficient to win, but they are frequently overlooked, and they need to be present to give a security team a fighting chance against the challenges their systems face.

August 18, 2015 | Permalink | Comments (0)
