Gunnar Peterson: We first met a few years back, and I remember this very well: we did a detailed threat model and discussed a very long set of security protocols that could be used to add layers of protection around a web services application to cope with the threats. By the time we were done, the board was covered with dozens of capabilities that some poor developer would have to go and somehow implement (and implement correctly).
Do developers get enough support from platforms and tools to do things right? Even if they try, it seems like they are facing some heavy headwinds.
John Wilander: I'm tempted to just answer "No, sir" but that would be too simple, right?
To start with, writing software that withstands attacks from skilled adversaries involves both dos and don'ts. This might sound obvious, but it's what I consider a key contribution of my upcoming PhD thesis on secure software. It turns out that we do discuss both, but rarely at the same time. And the models of secure software we use for code and design analysis typically take one side but not the other.
Two examples of what I mean:
1. Input validation is a typical "do". You need to validate input before you let it control your execution or taint sensitive data. For any given input there is typically a limited number of ways to do input validation correctly, but an infinite number of ways to get it wrong, including having no input validation at all. So to be able to support developers on input validation we need models of how to get it right -- a positive model, if you will. We can then use that model to do static analysis, to do manual source code review, or to provide training.
2. Double free is a typical "don't". You must free memory only once. In a large C program you will find a lot of memory allocation and deallocation. If we were to stick with a positive model here -- "this is how you free memory" -- we would not be able to find double free flaws through static analysis or source code review, and we would not be able to train developers in secure memory deallocation. Why? Because insecure deallocation is built up of multiple instances of correct deallocation! To analyze for and convey the meaning of double free we need a negative model -- don't do this.
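The input-validation "do" above lends itself to a positive model in code. A minimal TypeScript sketch, where the pattern and function name are illustrative rather than taken from any particular framework:

```typescript
// Positive model: define exactly what valid input looks like
// and reject everything else. One pattern, one way to get it
// right; everything outside the pattern is refused.
const USERNAME_PATTERN = /^[a-z][a-z0-9_]{2,15}$/;

function validateUsername(input: string): string {
  if (!USERNAME_PATTERN.test(input)) {
    throw new Error("invalid username: " + JSON.stringify(input));
  }
  return input;
}
```

The point of encoding the model this way is that the same pattern can drive static analysis, code review checklists, and training material.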
That summarizes my first answer to your question. Developers are not getting enough support in the dos and don'ts of secure software. They need help in not introducing vulnerabilities into the features and functions the system is required to have. Let's call that non-functional security.
My second answer connects to your recollection of all those protocols, layers, and measures we took in securing a web service back in 2008. Let's say we've given developers appropriate support in the dos and don'ts. On top of that we might have security requirements per se, such as authentication, authorization, and audit logging. Let's call that functional security.
The developer support for functional security is much larger. You'll find all sorts of products here, some open source, some proprietary and expensive. An example is Microsoft's Active Directory, which has helped a lot of developers fulfill the functional security requirement of single sign-on. The problem I see in this space is configuration. It should be easy to get things right and hard to get things wrong. But look at, for instance, OAuth. If Facebook's security team can't get OAuth right, how can we expect the average web app developer to get it right?
Developers have long been requesting comparative tables with security info on frameworks. Which HTML template frameworks do proper character encoding by default? Which framework versions are known to be vulnerable? The security community has yet to produce these tables and support developers in doing the right thing.
My third and final answer to your question goes deeper into the technology stack. Why are programming languages still lacking security primitives and simple ways to "get it right"?
1. Why are programmers encouraged to pass around strings in the year 2013? There's almost nothing useful in a computer system that can be described as just a "string". Meanwhile, strings are killing security. Programming languages should encourage (demand?) that developers create abstract data types that model what they are passing around. If it's a human name, there should be a HumanName type. If it's HTML, there should be an HTMLString type. If it's a comment, there should be a ForumComment type. That's how we can get input validation and semantics right.
2. Why can't developers annotate or type data as "secure" or "sensitive" directly in their source code? That way static analysis could help find information leakage. This whole area is called language-based security and has yet to turn up in any mainstream language.
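Both ideas can already be approximated today in a language with a structural type system. A sketch in TypeScript using "branded" types; all type and function names here are illustrative, not part of any standard library:

```typescript
// Branded types: a plain string cannot flow into a sink that
// expects HtmlString or HumanName without first passing through
// a constructor that validates or encodes it.
type HumanName = string & { readonly __brand: "HumanName" };
type HtmlString = string & { readonly __brand: "HtmlString" };

function humanName(raw: string): HumanName {
  // Validation on construction: only plausible names get the type.
  if (!/^[A-Za-z .'-]{1,100}$/.test(raw)) {
    throw new Error("not a plausible human name");
  }
  return raw as HumanName;
}

function htmlString(raw: string): HtmlString {
  // Encoding on construction: an HtmlString is safe by type.
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;") as HtmlString;
}

// A "sensitive" marker in the same spirit: ordinary log sinks
// typed to accept string will reject a Sensitive<string> value,
// so the type checker flags the information leak.
type Sensitive<T> = { readonly value: T; readonly __sensitive: true };
```

This is a poor man's version of language-based security: the compiler, rather than a separate analysis tool, refuses to let untyped strings reach sensitive sinks.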
The security community needs to cut down on the Bug Parade and start delivering useful models, tools, programming languages, and technology comparison tables to developers. Then, if they don't make use of that support we can complain about insecure software again.
GP: One company I see as an aspirational model for software security is Autoliv, a Swedish company that is a world-leading maker of safety gear for cars. Toyota, Ford, or whoever makes the cars, but they get the safety gear from Autoliv. Autoliv has the expertise in safety design, development, and testing to pioneer advances in things like airbags.
One problem for any component maker, whether it's industry-wide or just within an enterprise, is integration. Autoliv has to be able to embed airbags inside steering wheels from many different manufacturers. So, to your second answer (and great point on OAuth and Facebook, BTW): "Developers have long been requesting comparative tables with security info on frameworks."
What does a useful response from security look like here? What is the right "form factor" to deliver this to development teams, and when? Basically, what I am asking is: how should these teams collaborate better?
JW: There's a difference between developer teams collaborating with their respective security departments (intra collaboration) and developer teams collaborating with the security community (inter collaboration).
For intra collaboration, I'm a strong believer in and practitioner of IT security trojanizing development teams (pun intended). Developers need daily access to a security-conscious developer for security to get built in.
"Should we send this in a URL?"
"What about this security news I just read – does it apply to us?"
"What was this thing about not setting the location and instead setting the pathname?"
Such questions pop up all the time. If there's a developer with security skills on the team the questions get answered. If you instead rely on a separate security department to review the whole shebang you'd better pray you find that DOM-based XSS where they used window.location instead of window.location.pathname.
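That location-versus-pathname flaw is easy to show. In a browser the sink would read window.location; the sketch below models the same structure with the WHATWG URL class (available in modern browsers and Node) so the difference is visible outside a browser. The URL and payload are made up for illustration:

```typescript
// The fragment after "#" is attacker-controlled: a victim can be
// lured to click a link with any payload appended.
const visited = new URL("https://example.com/account#alert(1)-payload");

// Dangerous: the full URL carries the attacker's fragment into markup.
const unsafe = `<a href="${visited.href}">back</a>`;

// Safer: pathname contains neither the query string nor the fragment,
// so the attacker-controlled part never reaches the HTML sink.
const safer = `<a href="${visited.pathname}">back</a>`;
```

A security-savvy teammate spots this in a minute; a once-a-release external review has to hope it stumbles over the right line of code.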
There are three problems with IT security trojanizing dev teams:
1. You have to know software development, not only IT security.
2. You have to leave your comfort zone but still maintain a healthy competence exchange with your security peers.
3. It scales poorly. You need one software-savvy security pro per development team.
So how can the security community help developers produce more secure code? Well, I mentioned comparative tables with security info on frameworks. Security pros need to produce those. How should they look? Just steal from other services popularized among developers such as …
Can I Use: http://caniuse.com/
Grid component feature comparison: http://dojofoundation.org/packages/dgrid/#featurecomparison
A similar, security-focused table would tell developers which web frameworks do context-sensitive encoding, handle JSON in a safe way, have safe defaults for text nodes vs. innerHTML, etc.
Next, we need to deliver security-oriented components in small pieces. Developers are very reluctant to download a huge pile of code just to get an encoding function, and rightfully so. Security pros love to build and promote draconian "enterprise security frameworks", whereas the developer world has moved on to so-called micro frameworks.
Chris Schmidt's jQuery Encoder is a good example of this. Two years old and a bit buggy, but short and sweet!
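In that micro-framework spirit, context-specific encoders really are only a few lines each. A TypeScript sketch (not jQuery Encoder's actual API) showing why the output context, not just the value, determines the encoding:

```typescript
// Micro-framework-sized encoding: one tiny function per output
// context, instead of one giant enterprise framework.
const HTML_ESCAPES: Record<string, string> = {
  "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#x27;",
};

// For untrusted text landing in HTML element content.
function encodeForHtmlContent(untrusted: string): string {
  return untrusted.replace(/[&<>"']/g, (c) => HTML_ESCAPES[c]);
}

// For untrusted text landing in a URL query value -- a different
// context needs a different encoder, not a reused one.
function encodeForUrlParam(untrusted: string): string {
  return encodeURIComponent(untrusted);
}
```

Shipping pieces this small is what makes developers actually adopt them.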
GP: I very much agree with your thoughts on the benefits and challenges of trojanizing app dev teams. I think anyone who has tried software security realizes pretty quickly that it's fundamentally a people problem on multiple levels.
To my way of thinking, an app team should have a security person assigned (though I like your "trojanized" term better) the same way a DBA is assigned. DBA teams are made up of people who know Oracle, DB2, MySQL, what have you, and they get pulled in for some percentage of their time to expedite and improve dev projects. Importantly, DBAs spend their other, non-project time hardening and streamlining their own databases and making sure they work well. I think security has a ways to go both in productive engagement and in optimizing its own tools.
I cannot blame security teams themselves for the whole state of affairs. They have been underfunded and under-mandated, and the security tools are pretty weak as a group. To me, the journey to a DBA-like level of engagement feels like Deng Xiaoping's saying on gradual change: "We will cross the river by feeling the stones under our feet, one by one." We can all see what some better version of an end game looks like, but getting there is painful.
In the meantime, while we are gradually solving the people problem, it seems the tools and technology vendors could step up and address at least some of the other challenges. Your comments on the pervasiveness of strings and the lack of security annotations reflect an overall benign neglect of security issues by language, framework, and tool vendors. Is there any hope for improvement here? Do you see green shoots? I see some promising signs; for example, iOS as a new OS began its life with a decent mix of security built in. However, one problem is that the security design is more about protecting the platform and less about making things easy for developers and security people. It feels like a pretty hard job until the vendors and platforms meet them halfway. What do you think?
JW: You're right about underfunded security teams. It has gotten to the point where many are bitter and/or feisty. Some carry a backlog of things that "should have been done" and bring those things up in every project.
When it comes to security features and primitives in programming languages and frameworks, I feel we're still finding issues quicker than we get generic solutions. The pressure to deliver more and more features drives complexity, for instance in the web stack, and complexity is not good for security. Then complexity increases further when we have to add security on top. Take Content Security Policy (CSP), a new HTTP response header intended to finally take on cross-site scripting (XSS). The sheer complexity of getting a good CSP config off the ground is daunting enough. Then there's the complexity CSP adds to testing, deployment, debugging, and maintenance. If we had never mixed data and code, and had always whitelisted script sources in a header or meta tag, we might have dodged the XSS problem altogether with just a minor increase in complexity up front.
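For concreteness, a minimal policy of the kind John describes, whitelisting script sources in a response header, looks like this (the CDN host name is a placeholder; real policies tend to grow many more directives):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'
```

Even this small example hints at the maintenance burden: every new script host, inline snippet, or third-party widget requires a policy change, test run, and redeploy.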
A major hurdle for security technologies and features is ease of use. It's one thing to avoid vulnerabilities and another to correctly add security features. I doubt even 1% of the world's developers are able to generate a CA root, sign an SSL cert with it, and whitelist it on their local machine. Instead, some buy official SSL certs for their testing environments and others skip the whole thing. Then there's using OS features for secure storage in so-called key rings. Then there's single sign-on with SAML, Kerberos, Active Directory, etc. Then there's cryptography and cryptographic hashing. Bring those things up and most developers frown (some smile, though :).
I think developers of security APIs and technologies really need to test for developer usability. Can a B-level developer get your stuff working in one day? Will he/she start googling for blog posts and answers on Stack Overflow, or will he/she feel confident with your documentation? Do you even provide a step-by-step example?
Apple's so-called "walled garden" has proven very successful in keeping iOS malware-free, and with Gatekeeper Apple is bringing the same thing to OS X. For the casual user a walled garden is not a problem; rather, it's a feature. But it only keeps malicious apps out -- it doesn't make benign apps secure. Providing usable security APIs and integrated static analysis would help developers do better security-wise. I hope that's where OS and browser vendors are going.
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1