Earlier, I blogged about an excellent paper by Brian Snow of the NSA called "We Need Assurance!" Snow explores several techniques we can use to increase assurance in our systems: operating systems, software modules, hardware features, systems engineering, third-party testing, and legal constraints. From a security architecture point of view this breadth is useful, since none of these mechanisms alone is sufficient, but together they may create a greater level of assurance.
Snow on Operating systems:
Even if operating systems are not truly secure, they can remain at least benign (not actively malicious) if they would simply enforce a digital signature check on every critical module prior to each execution. Years ago, NSA's research organization wrote test code for a UNIX system that did exactly that. The performance degraded about three percent. This is something that is doable!
Operating systems should be self-protective and enforce (at a minimum) separation, least-privilege, process-isolation, and type enforcement.
They should be aware of and enforce security policies! Policies drive requirements. Recall that Robert Morris, a prior chief scientist for the National Computer Security Center, once said: "Systems built without requirements cannot fail; they merely offer surprises - usually unpleasant!"
In the section on Operating Systems, Snow goes on to call them the Black Hole of security. The current state of operating system quality is far from where it needs to be, even measured against Snow's pragmatic goal of not achieving security but merely remaining benign. As with all of the techniques Snow discusses in the paper, we have techniques available today to improve the situation. Thinking architecturally, with an actively malicious operating system mediating application, data, and network activity, we are building on a "foundation" made of plywood and termites. RBAC systems, digital signatures, MLS systems, and diversification all have potential to improve what we have out there today.
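Snow's signature-check idea is concrete enough to sketch. Below is a minimal illustration in Python of a loader that refuses to hand a module to the runtime unless its digital signature verifies. It assumes the third-party cryptography package, and the key handling and module image are hypothetical stand-ins for what an OS would do at install and load time:

```python
# Sketch of "check every critical module prior to each execution".
# Assumes the 'cryptography' package; names and bytes are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_module(signing_key: Ed25519PrivateKey, module_bytes: bytes) -> bytes:
    """Build/install time: a trusted authority signs the module image."""
    return signing_key.sign(module_bytes)

def load_if_verified(public_key, module_bytes: bytes, signature: bytes) -> bytes:
    """Load time: verify before execution; a benign OS refuses on failure."""
    try:
        public_key.verify(signature, module_bytes)
    except InvalidSignature:
        raise PermissionError("module failed signature check; refusing to load")
    return module_bytes  # only now hand the image to the real loader

# Illustrative round trip: sign at install time, verify at each load.
key = Ed25519PrivateKey.generate()
module_image = b"\x7fELF...critical module bytes..."
sig = sign_module(key, module_image)
load_if_verified(key.public_key(), module_image, sig)
```

The point is architectural rather than cryptographic: the check is cheap (Snow cites about three percent overhead), and an OS that enforces it everywhere can at least claim to be benign even when it cannot claim to be secure.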
Snow on Software modules:
Software modules should be well documented, written in certified development environments...,and fully stress-tested at their interfaces for boundary-condition behavior, invalid inputs, and proper commands in improper sequences.
In addition to the usual quality control concerns, bounds checking and input scrubbing require special attention.
...
A good security design process requires review teams as well as design teams, and no designer should serve on the review team. They cannot be critical enough of their own work.
BuildSecurityIn has many techniques for improving assurance in software, as do books by Gary McGraw, Mike Howard, Ken van Wyk, and others. Source code analysis and binary analysis tools, again, are tools we can work with *today* to make our code less vulnerable than what is currently in production. Collaboration by security across the separate disciplines of requirements, architecture, design, development, and testing is absolutely critical. The security team should align its participation to how software development is done. In turn, software development teams must build security into their processes. Architects, in particular, bear responsibility here. Architects own the non-functional requirements (of which security is one) and have a horizontal view of the system; they need to collaborate with the security team to generate the appropriate involvement and placement of security system-wide. The paradigm of blaming the business ("they did not give me enough time to write secure code") or blaming the developer ("gee, why didn't they just write it securely?") is a non-starter. Architects need to synthesize concerns in harmony with the risk management decisions from the security team.
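Snow's stress-testing advice above lends itself to a small illustration. The sketch below exercises a hypothetical interface (parse_port is my stand-in, not anything from the paper) at its boundaries and with invalid inputs - the kind of testing that bounds checking and input scrubbing demand:

```python
def parse_port(value: str) -> int:
    """Hypothetical module interface: scrub the input, enforce the bounds."""
    if not isinstance(value, str) or not value.isdigit():
        raise ValueError("port must be a string of decimal digits")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError("port out of range 1-65535")
    return port

# Boundary-condition behavior: test exactly at and around the edges.
for good in ("1", "80", "65535"):
    assert parse_port(good) == int(good)

# Invalid inputs: the interface must reject them, never pass them through.
for bad in ("0", "65536", "-1", "80.0", "", "8O", "' OR 1=1"):
    try:
        parse_port(bad)
    except ValueError:
        pass  # rejection is the correct behavior
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```

None of this requires new tooling; it is the "fully stress-tested at their interfaces" discipline applied one function at a time.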
Snow on Hardware Features:
Consider the use of smartcards, smart badges, or other hardware tokens for critical functions. Although more costly than software, when properly implemented the assurance gain is great. The form factor is not as important as the existence of an isolated processor and address space for assured operations - an "Island of Security" if you will.
I blogged yesterday that smartcards have a ton of promise here. The economics of hardware and advances in software security (like PSTS) are really increasing the practicality of deploying these solutions. Having a VM (JVM or CLR), an XML parser, an STS, some security mechanisms, and an IP stack will make smartcards a primary security tool that is cost-effective to deploy now and/or in the very near future.
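To make the "Island of Security" idea concrete, here is a hedged sketch in Python (standing in for what would really be card-resident code) of the property that matters: the private key lives only inside the isolated boundary, and the host interacts with it purely through challenges and responses. The class and method names are mine, not a real smartcard API, and it assumes the cryptography package:

```python
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

class CardApplet:
    """Stand-in for the isolated processor and address space on the card."""
    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()  # provisioned on-card only

    def public_key_bytes(self) -> bytes:
        # The public half may leave the island; the private half never does.
        return self._key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def sign_challenge(self, challenge: bytes) -> bytes:
        # All assured operations happen inside the boundary.
        return self._key.sign(challenge)

# Host side: authenticate the card without ever holding its secret.
card = CardApplet()
verifier = Ed25519PublicKey.from_public_bytes(card.public_key_bytes())
challenge = os.urandom(16)
verifier.verify(card.sign_challenge(challenge), challenge)  # raises if forged
```

Even if the host is compromised, the worst it can do is ask the island to sign things; it cannot exfiltrate the key, which is exactly the assurance gain Snow is pointing at.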
Snow on Software systems engineering:
How do we get high assurance in commercial gear?
a) How can we trust, or
b) If we cannot trust, how can we safely use, security gear of unknown quality?
Note the difference in the two characterizations above: how we phrase the question may be important.
This is a fundamental software architecture question. In a SOA world it is even more relevant, because we cannot "trust" the system. The smartcard example from above allows us to gain traction on solutions that can help answer the "safely use" challenge. Security in Use Case Modeling can help to show the actual use of the system and what assurance concerns need to be addressed.
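One generic tactic for the "safely use" path is result checking: wrap the component of unknown quality in a guard that independently verifies properties of its output before anything downstream acts on it. A minimal sketch, with untrusted_sort as a deliberately trivial stand-in for opaque security gear:

```python
from collections import Counter
from typing import Callable, List

def untrusted_sort(xs: List[int]) -> List[int]:
    """Pretend this is opaque, unverified third-party code."""
    return sorted(xs)

def guarded(component: Callable[[List[int]], List[int]]):
    """Wrap a component we cannot trust with checks on what it returns."""
    def wrapper(xs: List[int]) -> List[int]:
        out = component(list(xs))  # pass a copy; keep our own data intact
        # Verify output properties directly instead of trusting the code:
        if any(a > b for a, b in zip(out, out[1:])):
            raise RuntimeError("component returned an unordered result")
        if Counter(out) != Counter(xs):
            raise RuntimeError("component added, dropped, or altered elements")
        return out
    return wrapper

safe_sort = guarded(untrusted_sort)
print(safe_sort([3, 1, 2]))  # [1, 2, 3], and verified to be so
```

Sorting makes the checks easy to state; the architectural pattern - constrain inputs, verify outputs, deny the component anything it does not need - carries over to gear whose internals we will never see.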
More on Software systems engineering:
Synergy, where the strength of the whole is greater than the sum of the strength of the parts, is highly desirable but not likely. We must avoid at all costs the all-too-common result where the system strength is less than the strength offered by the strongest component, and in some worst cases less than the weakest component present. Security is fragile under composition; in fact, secure composition of components is a major research area today.
Good system security design today is an art, not a science. Nevertheless, there are good practitioners out there that can do it. For instance, some of your prior distinguished practitioners fit the bill.
This area of "safe use of inadequate" is one of our hardest problems, but an area where I expect some of the greatest payoffs in the future and where I invite you to spend effort.
I have written about Collaboration in Software Development (2, 3). Too often, system design, which is essentially an exercise in tradeoff analysis, is dealt with through dualistic notions of "secure" and "trusted". In many cases the word security creates more harm than good. Specificity of the assurance goals at a granular level, from requirements through architecture, design, coding, and testing, is a big part of the way forward. Again, we do not need to wait for a magical security tool to do this; we can begin today.
The last two areas discussed are third-party testing, which reinforces the integrity of separation of duties at design and construction time, and market/legal/regulatory constraints. These groups and forces can be powerful allies for architects wishing to create more assurance in their systems. One of the challenges for technical people when they deal with these audiences is translating geekspeak into something understandable by the wider audience. In particular, the ability to quantify risk is a very effective way to show what the impact could be for the enterprise. The Ponemon Institute has some excellent data on this, and more is coming all the time; quantifiable risk data says in black-and-white business terms what the business needs to deal with.
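As a worked example of what "quantify risk" can mean at its simplest, the standard annualized loss expectancy arithmetic (ALE = single loss expectancy x annual rate of occurrence) turns a threat into a budget line. Every figure below is invented for illustration, not taken from Ponemon or anyone else:

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss expectancy: expected loss per year from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: breach of a customer database.
records = 100_000          # records exposed per incident (assumed)
cost_per_record = 150.0    # dollars per record (assumed, illustrative)
sle = records * cost_per_record  # $15,000,000 per incident

for aro in (0.05, 0.10, 0.25):   # assumed incident frequencies per year
    print(f"ARO {aro:.2f}: ALE = ${ale(sle, aro):,.0f} per year")
```

A one-line multiplication will not survive an actuary's scrutiny, but it states the exposure in the black-and-white business terms the paragraph above calls for, which is usually enough to start the conversation.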
One last thought from Brian Snow:
It is not adequate to have the techniques; we must use them!
We have our work cut out for us; let's go do it.
"...we must use them!"
No, we must develop successful products. See Epstein's work on why this is, still, painfully often orthogonal to their security properties.
Posted by: Adam S | December 22, 2005 at 11:04 PM