This is part 2 of a Security > 140 Conversation with Marcus Ranum.
GP: You recently published a series on the Fabius Maximus blog, and I want to drill down on a number of the points you raised. In one post you speculated, "Perhaps, in cyberspace, the best defense is a strong defense." One of the examples you gave was from IT: "in IT you run into a problem, which is that something can be deeply broken but still appear functional until a critical time. It’s hard, in the real world, to build a supply chain that appears to work, but in cyberspace you can easily build a network that appears to be secure – but isn’t." Coming full circle on this, one of the main security challenges is in fact in the supply chain; the business model and culture of outsourcing make this a particularly gnarly problem for infosec to deal with. What's your take on building a strong defense for companies with global, outsourced supply chains?
MR: I could go on and on trying to answer this question because, in a sense, it's a question I've already spent my life trying to answer. The relationship between accomplishing things and relying on other people's work is, ultimately, a question of whether human society is valuable or not. Does that sound extreme? I started to figure some of that out when I first started writing software professionally and had some of my laboriously crafted code fail because of a bug in a library that I had depended on. At first I thought "this is not my problem," but my customers didn't agree - they'd gotten the code from me, and it broke, and "I relied on this other library, that's where the bug was" didn't carry any weight at all. When I started writing security-critical code in the early 1990s, I started vetting pieces of other libraries that I used, and even wrote a few of my own implementations. It didn't take long before I realized that if I wanted to pursue that path to its end, I would need to write my own operating system, design my own processor, and so on. And, of course, supply chain problems extend in the other direction as well - every single component that you use in a macro-scale architecture can potentially be trap-doored with greater or lesser difficulty. When it was initially published, I read a bit of the Unabomber's "manifesto" and felt that I recognized, within it, the end-game for that kind of thinking: complete asocial self-reliance. I won't say that worked particularly well for him, but it absolutely doesn't work for a company - if you have a product, your customers will expect it to integrate its inputs/outputs with something else, eventually, and *poof* there goes your whole defensive chain. I think it's because of that reasoning that I got so fixated on firewalls - the standalone device that sits on your network and tries not to trust anything but itself.
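To make the library-vetting idea above concrete, here is a minimal sketch, not anything Marcus describes using, of one common approach: pin each vendored dependency to a known-good SHA-256 digest and refuse to build if a file doesn't match. The path and digest below are hypothetical placeholders.

```python
# Minimal sketch of dependency vetting: pin each vendored library to a
# known-good SHA-256 digest and fail the build if the file doesn't match.
# The path and digest below are hypothetical placeholders, not real data.
import hashlib
import sys

PINNED = {
    "vendor/libfoo-1.2.3.tar.gz":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> int:
    failures = 0
    for path, expected in PINNED.items():
        actual = sha256_of(path)
        if actual != expected:
            print(f"MISMATCH: {path} (expected {expected[:12]}..., got {actual[:12]}...)")
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

A check like this only tells you the bits haven't changed since you vetted them; it says nothing about whether the vetted code was trustworthy in the first place, which is exactly the regress Marcus describes.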
If you're a participant in a society, you wind up being susceptible to that society's ills as well as its benefits. We don't really have a choice about society, but in computing, while McNealy's "the network is the computer" remains a marketing slogan, it may be getting closer to the truth. Nowadays it's getting hard to have a computer work at all without it phoning home for updates (or license checks), or occasionally checking to make sure it knows how to bring you important marketing information from someplace-or-other. Businesses make a "risk assessment" and decide whether that's a big enough problem to justify worrying about, or not. Or, rather, in principle they do - but in fact, they simply accept the risk automatically because there is no real alternative. That's one reason I'm a bit skeptical about risk assessment: you're trying to reason about something you don't understand, as if you could control it. For businesses, it's probably not that big a deal because there's really not that much motivation for someone to misbehave. For military purposes, I'm not so sure. If you want to think in terms of the big cyberwar scenario, why would any country on earth use Windows? Or anything with a licensing scheme that you don't completely control? If I were building command/control systems for something truly important, I'd say the starting point would be to develop a new operating system, on the basis that you'd have a chance of stumbling across or accidentally breaking any trapdoors in the hardware. But I actually don't favor that kind of over-the-top paranoia, because I want the governments of the world to be so interdependent that they can't pull that kind of nonsense on each other without hurting themselves. I actually think it's to global society's benefit that computing has become such a house of cards: it makes it harder to militarize cyberspace.
GP: And this "house of cards" makes it easier for countries to monitor citizens, right?
MR: I don't really think that's an objective - as we've seen in the last decade, governments will not hesitate to simply acquire information by main force. Unconstitutional? Pshaw! We'll just do it, and when we get caught we'll indemnify everybody involved, and keep doing it! In the cases of extranational technologies (think: Research In Motion in India) the governments will just demand a backdoor or threaten to blockade the company. I don't think it takes a whole lot of persuasion to get a capitalist business to sell out their customers: generally all you have to do is ask nicely and if that doesn't work threaten the revenue stream and then ask nicely again.
This whole topic fascinates me: when people are asked "don't you care about your privacy?" or are told that they should be worried about Facebook's privacy policy, they get concerned, briefly. Not because they actually care but because they've been told that they should care - so you get a brief thrashing around of "ooh! I care!" and then everyone seems to forget about it. After all, if anyone really cared about their privacy, why would they stick sensitive information about themselves in the hands of a business that makes all of its money selling marketing and marketing data about them? When I see a contradiction like that, I assume someone's just flat-out lying about something. And, usually, it's the person who "cares." They "care," but not enough to get off the couch. That uncaring also explains what I see as a tremendous level of tolerance for hypocrisy. You've got US Secretary of State Clinton finger-wagging at the Chinese about how they are monitoring the activities of dissidents at the same time as the US Government is frantically trying to shut down Wikileaks. It's as if people go, "Hm. Wow. Look at that hypocrisy. But, I don't care - time to play a round of Farmville..."
GP: In your posts, you enumerate a number of problems with the current policy on "cybersecurity", but what would a practical and useful policy look like?
MR: For starters, it would include something about an attribution process. I.e.: before we decide we're going to start bombing anyone, we're going to establish an investigation commission that will gather evidence, assess damage, and collate whatever facts we can put together that indicate who launched the attack. That commission might (probably always should!) include international experts as well, and would include ample opportunity for a "minority report" component if the results of the commission were not unanimous. I personally think the U.N. is pretty obviously a joke at this point, so it's a question of where the report would be submitted. Ideally, the International Criminal Court or some other organization with suitable stature might eventually be involved. There's a good reason for that - namely, that since most "cyberwar" attacks would involve civilian infrastructure and networks in the target, they would be practically by definition violations of the Geneva Convention(s) - under Protocol 1, indiscriminate attacks on civilians are a war crime. In fact, under Protocol 1, attacks on nuclear generating stations (like the Stuxnet attack on Bushehr in Iran) would be war crimes if performed by a state. I know that the US has not ratified Protocols 1 or 2, and that should also be a matter of concern - we'd be in a better position to complain if our civilian infrastructure were attacked if we weren't possibly busy positioning ourselves to carry total war into cyberspace.
The second piece of the problem is dealing with the flaws that do exist in civilian infrastructure. Whenever the discussion of how to fix vulnerable systems comes up, there's always someone beating the "omg! it'll cost too much!" drum. Well, the problem is that the organizations that built these vulnerabilities into the infrastructure did it by factoring security out as an aspect of reliability. That allowed them to build systems that were unsafe and unreliable while concealing that reality from shareholders, the public at large, and the government. I think it's crucial for the government to address this at a policy level, by making it clear that if a power company, for example, built SCADA systems that have no passwords or default passwords and are fundamentally vulnerable, then they built a flawed system, their cost-savings models were apparently inaccurate, and if it blows up in their face they won't get indemnified against liability. That won't happen, of course, because for it to happen the government would have to be serving the interests of the people, not the interests of the corporations; but that would be the right way to do it.
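As a concrete illustration of the "no passwords or default passwords" problem, here is a minimal sketch, my addition rather than anything Marcus proposes, of auditing a device inventory for blank or factory-default credentials. The inventory entries and default list are hypothetical examples.

```python
# Minimal sketch: flag devices in an inventory that still use blank or
# factory-default credentials. The inventory and default list below are
# hypothetical examples, not real device data.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", ""),
    ("root", "root"),
    ("operator", "password"),
}

inventory = [
    {"device": "pump-station-3", "user": "admin", "password": "admin"},
    {"device": "substation-7",   "user": "ops",   "password": "long-unique-passphrase"},
]

for entry in inventory:
    credential = (entry["user"], entry["password"])
    if credential in DEFAULT_CREDENTIALS or not entry["password"]:
        print(f"VULNERABLE: {entry['device']} uses blank or factory-default credentials")
```

Even a trivial audit like this makes the point: the vulnerability is a known, checkable property of the system as built, not an unforeseeable act of nature.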
It's always a bit unfair to ask "what's your practical and useful solution to this screwed-up situation?" because the screwed-up situation happened the way that it did for deep reasons and deep interests - any solution would have to either refute those reasons and overturn those interests, or somehow wave them away. We're stuck with what we're stuck with, in all likelihood. About the only thing we can hope for is to do whatever we can to prevent the militarists from turning cyberwar into the same kind of open-ended justification that the war on drugs and the war on terror have turned into. It's hard to look at the consequences of those two "wars on an abstraction" and imagine that the "war on cyberterror" would be anything but an extremely ugly and expensive boondoggle.
GP: I am also fascinated by the "but it would cost too much" argument. I think the real enemy is short-term thinking. It was much faster to build Chicago out of wood, and that worked fine until Mrs. O'Leary's cow came along. Building out of brick and steel has a higher up-front cost, but you get to ride that investment for a long time. Hey, I like visiting Chicago even more when I am not super worried about dying in a fiery blaze!
I am with you on avoiding open-ended mandates that can be used for any means deemed necessary, but what about something a little more practical, like building codes? If you drive by your fire station, it's pretty likely that the firefighters are practicing, or working on their equipment, or watching TV. Why? Well, buildings are built to certain codes, the laws require it, and there are inspectors who can check off the boxes. The end result is that firefighters do NOT resemble infosec people running around with their hair on fire; they have a critically important job, but for the most part they are NOT the ones running around "fighting fires." (Maybe in the future people will describe a person madly scrambling around not as fighting fires but as trying to secure data?) Anyhow, building codes seem to me to have 1) not put builders out of business, 2) delivered quality products to consumers, and 3) generated a lot of common good through safety. In your view, would something like this be doable on the web? A software developer "pulls a permit" and has to meet certain criteria (access control, logging, and so on) before they are ready to deliver to the market? Would this work in practice, and would it help?
MR: The idea of building codes is good, but I don't think that, in general, physical building materials map to software very well. The options for how you might use a brick in a construction are probably few and far between compared to how you might use an "if ( ) { ..." code-block. I think that's one of the problems with software architectures - they're infinitely malleable, which is what makes them so darned cool, and on the other hand they're infinitely malleable, which means that they can also be infinitely bad. I think that if you wanted to search for a building metaphor for software, it'd have to be at a broader level than just asking "are you only correctly employing Underwriters Laboratories-approved windowing functions?" In building construction we've got certified architects who (hopefully!) are keeping track of how much load is being placed on given parts of the building - the big-picture stuff - and then builders who are keeping track of correctly using the standardized and 'rated' components.
The reason we have building codes and architects and whatnot is that they give us a way of quickly assigning blame. If my office building burns down and it's discovered that it wasn't built to code, then we can quickly assign blame based on where things went off the rails. That's why no builder is going to risk a lawsuit by saving a couple of dollars on a case of non-UL-approved outlets, because if they do, and the building burns down because of a problem with an outlet, they own the problem. In a sense, what you're doing is opposing the architect against the builder against the building's owner - by cutting lines through the situation and assigning who's in jeopardy, where, you make it clearer what part of the system each party is going to worry about. Ideally, you also get a sort of check/balance effect - if the architect comes up with something beautiful which can't really be built safely, in principle the builder acts as a sort of check/balance on behalf of the building owner. It's funny you asked about this, because a bunch of years ago some friends of mine and I were thinking there might be a really cool business opportunity in starting a software architect's shop, in which we built apps out of components that we had constructed to our specs - outsource the coding of the components to one shop, have another shop do a QA pass on them, then a third do a test harness and acceptance - while the architectural firm wrote the main routine that called all the components discretely. If code units were treated as valuable intellectual property that was expensive to develop, then we might see internal code re-use happening within a given architectural firm, at least. The way to make that model work would be if you could milk enough time-to-market efficiency out of your established codebase that you could start guaranteeing your customers much more reliable code, much faster. I do think that the field of software development could use a bit of re-inventing, but it's a hard problem because I don't think the industry is mature enough to understand the value proposition of good code tomorrow versus flashy stuff today. Perhaps the invisible hand of the free market will eventually reach down and slap some people around. That would be fun to see.
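To illustrate the "architect writes the main routine, component shops build to spec, a third shop does acceptance" model, here is a minimal sketch. It is only an illustration of the idea, not the business Marcus describes, and the component names and the acceptance criterion are hypothetical.

```python
# Minimal sketch of the "software architect" model: the architect defines the
# component contract and the main routine; a component shop supplies an
# implementation built to that contract; a separate acceptance harness checks
# the implementation against the spec. Names and criteria are hypothetical.
from abc import ABC, abstractmethod

class SpellChecker(ABC):
    """The contract the architect hands to the component shop."""
    @abstractmethod
    def misspelled(self, text: str) -> list[str]: ...

class VendorSpellChecker(SpellChecker):
    """One shop's implementation, built to the contract."""
    KNOWN = {"the", "quick", "brown", "fox"}

    def misspelled(self, text: str) -> list[str]:
        return [w for w in text.lower().split() if w not in self.KNOWN]

def acceptance_test(component: SpellChecker) -> bool:
    """A third shop's acceptance harness: tests against the spec, not the code."""
    return component.misspelled("the quikc fox") == ["quikc"]

def main(component: SpellChecker) -> None:
    """The architect's main routine, calling the component only via its contract."""
    if not acceptance_test(component):
        raise SystemExit("component failed acceptance")
    print(component.misspelled("the quick brwn fox"))

if __name__ == "__main__":
    main(VendorSpellChecker())
```

The point of the design is that the main routine never depends on how a component shop built its piece, only on the contract, which is what makes swapping shops, re-using components, and assigning blame tractable.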
**
Secure Coding Training Class: Mobile AppSec Triathlon
Do you have what it takes to complete a triathlon on three vital topics in the mobile world: Mobile application security, web services security, and mobile identity management?
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for the first Mobile App Security Triathlon, in San Jose, California, on November 2-4, 2011.