Today's Security > 140 is with Tenable CSO Marcus Ranum, who needs no introduction to readers of this blog. (Note: Marcus and I recently did a chat for SearchSecurity.com, which is available here.)
GP: During the financial crisis, a lot of the failures in financial firms were a direct result of financial instruments that were supposed to make things safer, not less safe. In 2008 it was derivatives that supposedly made it easier to manage risk; that bred confidence in the big banks, which leveraged up 30-to-1 and 40-to-1 without downside protection in place, so it was a false confidence.
And it just happened again with the "rogue" trader at UBS, who used Exchange Traded Funds (ETFs), which were supposed to manage risk, but ended up losing $2.3 billion of UBS's money because the supposedly safe ETF hedge was not actually in place.
Coming back to Information Security, it strikes me that we might be in a similar scenario, where we have umbrellas that work until it begins to rain. There is a lot of confidence in security mechanisms, whether it's access control, firewalls, monitoring, or pick your favorite. But is that confidence deserved? If the security mechanisms fail, what's the impact on incident response?
MR: I think you're right - unfortunately. Sometimes, it seems to me that the intent of the security mechanism is not necessarily to produce a more reliable system or positive outcome: it's to cover the butts of a few people who are about to do something more risky, and to sell something repeatable. To me, it's the repeatability of a sale that's often the red flag because that indicates that you're buying something that's a "cookie cutter" application, when what we really need are often custom-tailored solutions that take into account business practices. I won't go so far as to call it a general rule but whenever you see something being sold as a catch-all solution for X, for any given X, it either means that X is really easy or that the catch-all solution really isn't. One example of the kind of thing I'm talking about is audit doctrines like PCI or FISMA or whatever: the premise is that if you do these particular incantations you'll be safe. But for that to make sense, every target organization would have to be pretty much the same, which they aren't. So, you look around and realize that maybe the real benefit of the program is not security at all - it's that it looks good on someone else's bottom line.
A number of years ago I was thinking about security from the "normal accidents" perspective and realized that if connectedness and complexity are what cause security disasters, then many of the security mechanisms we put in place actually increase connectedness and complexity. Oops! We're putting out fire with gasoline. This is one place where I think you and I disagree: you often point out that firewalls are one long-standing security solution that is just an endless "more of the same" - but they're actually one of our simplest tools, and if they are used correctly, they really do work. "Drop all" is a good firewall policy; you can protect the hell out of a network with a single "drop all" rule. Where the problem comes in is that the firewall gets installed with a relatively permissive "allow all outbound" and malware has a field day. So, back to your question: I think organizations adopt this "we have a firewall" stance that represents a misunderstanding of their actual risks. The risks are not along the "have a firewall/don't have a firewall" axis; they're along the "have a drop-all rule/have a permit-all rule" axis.
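To make that axis concrete, here is a minimal sketch in Python. The packet fields, addresses, and rule predicates are all invented for the example (a real firewall would be configured in its own rule language); the only point being modeled is the default verdict at the end of the rule chain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    port: int

def make_policy(rules, default):
    """A policy is an ordered list of (predicate, verdict) rules plus a default."""
    def evaluate(pkt):
        for predicate, verdict in rules:
            if predicate(pkt):
                return verdict
        return default  # this line is the whole argument
    return evaluate

# Default-deny: enumerate the handful of things you *mean* to allow.
drop_all_policy = make_policy(
    rules=[(lambda p: p.dst == "10.0.0.5" and p.port == 443, "ALLOW")],
    default="DROP",
)

# Default-allow: enumerate badness and hope you thought of everything.
permit_all_policy = make_policy(
    rules=[(lambda p: p.port == 23, "DROP")],  # block telnet, say
    default="ALLOW",
)

pkt = Packet(src="10.0.0.9", dst="198.51.100.7", port=6667)  # outbound C2 traffic
print(drop_all_policy(pkt))    # DROP  - malware gets nothing for free
print(permit_all_policy(pkt))  # ALLOW - malware has a field day
```

Both policies are "a firewall"; only the last line of each differs, and that last line is the risk axis.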
Misunderstandings like this appear to be human nature - people think "well, I have a parachute, it's OK to jump out of the plane," and they might even be especially clever and have a reserve parachute - but that still doesn't mean that jumping out of a plane isn't a fundamentally stupid form of entertainment.
GP: In terms of putting out fire with gasoline, most people think that complexity is the enemy of security, but if you look at it, many of the most complex things in IT architecture are security mechanisms! Kerberos, PKI, crypto, and others are much more complicated than, say, a typical Web server. There are only a few people in any given organization who understand how the security tools work.
As much as the "we're OK, we have a firewall" mindset drives me bats, I think the victory of the firewall is its relative simplicity compared to virtually every other security mechanism you might care to name. Logging is another one that comes to mind. Are there other security tools that organizations can use to meet some security goals without creating major complexity issues?
MR: I think the reason many of these security technologies are so complicated is that the problems they are aimed at solving are complicated, too. Actually, there are solutions that are often not complicated, but they're seen as not being cost-effective or competitive, so the result is massive feature-creep. You mentioned PKI - that's a great example: why does it need to be so complicated? Why not just have a single federated ID system based on a trusted authenticator? Well, that wouldn't have fit the desired business model, because then some organization would have been in a position to make a lot of money that everyone else had their eye on. The evolution of the SSL certificate world had virtually nothing to do with a goal of producing a simple, stable, reliable authentication and integrity system and everything to do with monetizing public key patents and creating a whole new vertical business opportunity for organizations that wanted to sell certificates. If you look at it from that perspective, you realize that a certificate is a "tax stamp" that has nothing to do with security at all. The reason we get this kind of contradictory complexity is that a single solution is really trying to solve a myriad of problems - badly - at once. I think our general tendency as humans, when we don't know what to do about a problem, is to move up to higher and higher abstraction layers. And that means we're almost always building something overly powerful and complex (therefore: buggy and full of latent flaws) because we're not sure how our problems will grow from there, if our current implementation succeeds. It seems to me that our approach to system design is that successful systems will always be expected to scale forever and unsuccessful systems will die. So if you're planning to succeed (who tries to fail?) you're implicitly forced to design at the edge or beyond the edge of your design competence.
It's always seemed to me that there are plenty of techniques that are simple and effective, but they're not popular because they're just not shiny enough. Those little time- or event-based authenticators, or hard disk encryption? They really work. They're really simple. They're maybe too simple, because highly paid executives apparently can't figure them out. The other thing that is apparently too simple is "enumerating goodness" - instead of trying to list all the pieces of badness that you have to worry about, try to list what people are supposed to be doing, then subtract that from what appears to actually be happening. That general principle covers execution control, antivirus, policy-based intrusion detection, malware blocking, and a whole bunch of other issues. The framework I use for thinking about this problem is that organizations don't WANT to enumerate what they're supposed to be doing because they think that'd be hard - so it's easier to "outsource" enumerating badness to an antivirus company or whatever. I'm endlessly puzzled by that, because it ought to be pretty easy to tell what you should be doing: after all, you're doing it. If a company hired me to be CSO, presumably they have an idea of what CSOs do, and from that we ought to be able to boil down far enough to determine that my job pretty much consists of running Word, PowerPoint, Thunderbird, and Firefox to security-related sites and blogs. It seems that would be valuable information that could be used all over the place to adjust security policy as it relates to Marcus Ranum. But very, very few organizations even try that.
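The mechanics of "enumerating goodness" really are just set subtraction. Here's a toy sketch: the role name, the expected application list, and the observed snapshot are all made up for the example; a real deployment would pull the expected set from policy and the observed set from an endpoint agent.

```python
# Enumerating goodness: list what a role is *supposed* to run,
# then subtract that from what is actually observed running.
EXPECTED = {
    "cso": {"word.exe", "powerpoint.exe", "thunderbird.exe", "firefox.exe"},
}

def unexpected_activity(role: str, observed: set[str]) -> set[str]:
    """Everything observed that the role's policy doesn't account for."""
    return observed - EXPECTED.get(role, set())

# Hypothetical snapshot from an endpoint agent:
running = {"word.exe", "firefox.exe", "nc.exe", "powershell.exe"}
print(unexpected_activity("cso", running))
# -> {'nc.exe', 'powershell.exe'}: investigate; everything else is known-good
```

Note that the hard part is maintaining EXPECTED, not the subtraction - which is exactly the work organizations would rather outsource to a badness-enumerating vendor.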
GP: It's interesting that you mention Federated Identity as a response to the complexity of PKI. It seems like the Enterprise took the long way round, but now we're finally getting back to where we should have been to begin with. In my work I see most enterprises with some type of federated identity; some are in initial phases, but others have federation baked into the core architecture of how they integrate with systems and customers/partners. Interestingly enough, in many cases they backed into this design because the other ways proved to be unmanageable, so they had to refactor. If my customer has 500,000 employees, and they all need to access my system, who is in a better position to have current information on whether they are actively employed and in what capacity - me or my customer's system?
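The core move in that design is accepting a fresh, signed claim from the party that actually knows, instead of maintaining a stale local copy. A minimal sketch of that idea, assuming an HMAC shared secret for simplicity (a real federation such as SAML or OIDC would verify against the identity provider's public key, and the claim fields here are invented):

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret with the customer's identity provider.
IDP_KEY = b"demo-shared-secret"

def verify_assertion(payload: bytes, signature: str) -> dict | None:
    """Accept a claim only if the customer's IdP signed it and it's fresh."""
    expected = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None  # forged or corrupted assertion
    claim = json.loads(payload)
    if claim["expires"] < time.time():
        return None  # stale: employment status may have changed since issuance
    return claim

# The IdP, not us, asserts that the user is currently employed:
payload = json.dumps({"sub": "jdoe", "employed": True,
                      "expires": time.time() + 300}).encode()
sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
print(verify_assertion(payload, sig))  # valid, fresh claim -> grant access
```

The relying party never stores the 500,000-row employee table; it only needs the key material to check the authoritative source's signature.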
So I view this as a positive trend in terms of getting it right... eventually... it never goes as fast as one might hope. Do you see any other slivers of light out there? What about PCI DSS's emphasis on audit logging? It took a pretty sleepy market (how big was the security log market in 2001? Two guys in west Texas?) and made it into a bona fide industry with a lot of vendors and choice out there. Is that a net positive?
MR: It seems that whenever there's a trend toward standardization, it only happens once all the economic play in that particular technology has already been milked out of it. Then there's no "need" for other entrants into the field or for established players to divide the market in order to protect it. That's also a huge impediment to security because it means that we only tend to get standards once the problem has been around long enough that it's no longer desirable. How's that for a depressing thought?
I see any emphasis on audit logging as a good thing - it's good to see that we're making a great leap forward to 1970s-style computing maturity in a few areas. The last decade, for me, has been horrible whenever I talk to someone about an incident it has virtually always been "the firewall logs were turned off 'for performance reasons' and so were the web server logs" Of course it's not really performance, it's that people feel silly collecting logs they don't look at. Until they are able to use the logs to transactionally reconstruct a corrupted database, or quickly identify what files were altered during a malware attack, or determine that it was only a handful of customer files that were exposed and there's no need for a gigantic and expensive breach notification. I've been disappointed in the customer community's interest in big pre-packaged SIM solutions, simply because it shows (once again!) that they don't understand the problem: you want the logs for when you need the data, not to feeda system that produces pie charts you never look at. And, when you need the data,there's no way of knowing in advance what data it'll be that you need, but what you need will 100% certainly not be the pie chart that you generated. Extensive logs are the closest thing computing has to a reserve parachute, if you'll forgive the analogy.
Logging has now become a big industry indeed, and I suppose that's mostly positive. The downside, now, is that I know a few "do it yourself" sysloggers who have been told to take out extremely good in-house logging systems (good because they know exactly what is in the data and how to use it) and to replace them with "supported commercial offerings" in order to be in line with "industry best practices," yadda yadda. There's an awareness, at least, in the logging world that log data is highly site-specific and purpose-centric, and the turnkey systems still require configuration and programming in order to return useful results. In 2001 the system logging market had some pretty heavy players in it, but they were generally priced out of the market because the market was small, so they were selling logging systems for half a million dollars. Now that "everyone is doing it" there is downward price pressure, and the products have improved to the point where they mostly work OK. Some of them just waited for Moore's Law to make them usable. In general I think that scene is going in the right direction, though I doubt I will ever experience a year in which I don't get dragged into an incident and discover that "the logs were turned off for performance reasons." Incompetence like that is too ingrained to root out in a single life-span.
(This conversation continues in Part 2)
Secure Coding Training Class: Mobile AppSec Triathlon
Do you have what it takes to complete a triathlon on three vital topics in the mobile world: Mobile application security, web services security, and mobile identity management?
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for the first Mobile App Security Triathlon, in San Jose, California, on November 2-4, 2011.