Looks like Richard Bejtlich wants to see WS-Anasazi as much as I do. He posted on "Security Application Instrumentation", specifically on the need to build security into applications:
Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us. In the future (now would be nice, but not practical yet) we'll need data to defend itself. That's a nice idea but the implementation isn't ready yet (or even fully conceptualized, I would argue).
Returning to applications: why is it necessary for an application to detect and prevent attacks against itself? Increasingly it is too difficult for third parties (think network infrastructure) to understand what applications are doing. If it's tough for inspection and prevention systems it's even tougher for humans. The best people to understand what's happening to an application are (presumably) the people who wrote it. (If an application's creator can't even understand what he/she developed, there's a sign not to deploy it!) Developers must share that knowledge via mechanisms that report on the state of the application, but in a security-minded manner that goes beyond the mainly performance and fault monitoring of today.
(Remember monitoring usually develops first for performance, then fault, then security, and finally compliance.)
...
I'll end this post with a plea to developers. Right now you're being taught (hopefully) "secure coding." I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).
...
One final note on adversaries: first they DoS'd us (violating availability). Now they're stealing from us (violating confidentiality). When will they start modifying our data in ways that benefit them in financial and other ways (violating integrity)? We will not be able to stop all of it and we will need our applications and data to help tell us what is happening.
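The kind of instrumentation Bejtlich describes can be sketched in a few lines. This is a hypothetical toy, not anyone's shipping design: the class name, event names, and thresholds are all illustrative. The point is that the application itself logs security events (not just performance and faults) and reduces its own exposure, as he suggests, when failed authentications pile up.

```python
import time
from collections import deque

class SecurityInstrumentedService:
    """Toy service that logs its own security events and tightens its
    posture as authentication failures accumulate (hypothetical design)."""

    def __init__(self, window_secs=60, lockdown_threshold=5):
        self.window_secs = window_secs
        self.lockdown_threshold = lockdown_threshold
        self.failures = deque()   # timestamps of recent auth failures
        self.events = []          # security/compliance log, beyond perf/fault

    def _log(self, severity, event, **detail):
        self.events.append({"ts": time.time(), "severity": severity,
                            "event": event, **detail})

    def in_lockdown(self, now=None):
        now = now if now is not None else time.time()
        # drop failures that have aged out of the sliding window
        while self.failures and now - self.failures[0] > self.window_secs:
            self.failures.popleft()
        return len(self.failures) >= self.lockdown_threshold

    def authenticate(self, user, password_ok, now=None):
        now = now if now is not None else time.time()
        if self.in_lockdown(now):
            # self-defense: refuse even plausible logins while under
            # apparent attack, and say so in the security log
            self._log("WARN", "auth.rejected.lockdown", user=user)
            return False
        if not password_ok:
            self.failures.append(now)
            self._log("WARN", "auth.failure", user=user)
            return False
        self._log("INFO", "auth.success", user=user)
        return True
```

Once enough failures land inside the window, even a correct password is deferred and the refusal itself shows up in the event log, which is exactly the "tell us when abused" behavior the excerpt asks for.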
Which leads us to WS-Anasazi, the missing WS-* spec. We have standards for authentication, authorization, and encryption, but nothing addresses auditing and monitoring. This is a big gap.
John S. Quarterman on Cliff Forts vs. Coordinated Mesas:
Eventually, at the end of the thirteenth century, the Anasazi abandoned their cliff faces and moved to mesa tops to the southeast. At least three mesas, each of which could see at least one of the others. ``It was not difficulty of access that protected the settlements (none of the scrambles we performed here began to compare with the climbs we made in the Utah canyons), but an alliance based on visibility. If one village was under attack, it could send signals to its allies on the other mesas.''
The mesas did have perimeter defenses: they were 500 to 1000 feet tall, and they each had only one way in. But their individual perimeter defenses were not as extreme as back on the cliffs, and perimeters were only part of the new mesa defense system. Their descendants the Hopis still live on mesa tops.
Coordinated detection and response is the logical conclusion of a defense in depth security architecture. I think the reason we have standards for authentication, authorization, and encryption is that these are the things people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards-based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.
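A standards-based security event might be as simple as every application emitting the same structured shape. The schema below is purely illustrative (these field names come from no actual WS-* or logging spec), but it shows the idea: once a web app and a batch job both serialize events the same way, one monitoring pipeline can consume both.

```python
import json
import time
import uuid

def security_event(source, category, action, outcome, subject=None):
    """Serialize a security event in one consistent, parseable shape so
    heterogeneous monitors can consume events from many applications.
    Field names here are hypothetical, for illustration only."""
    event = {
        "id": str(uuid.uuid4()),   # unique event id for correlation
        "timestamp": time.time(),
        "source": source,          # emitting application or service
        "category": category,      # e.g. "authn", "authz", "audit"
        "action": action,          # what was attempted
        "outcome": outcome,        # "success" | "failure" | "denied"
        "subject": subject,        # principal involved, if any
    }
    return json.dumps(event, sort_keys=True)
```

For example, `security_event("billing-svc", "authz", "read_invoice", "denied", subject="bob")` yields a single line any log collector can parse, which is the interoperability gap a WS-Anasazi-style spec would fill.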
Have you seen Samoa?
http://research.microsoft.com/projects/samoa/
Many people rely on web server logs, WAF logs, and some basic tools that parse those looking for specific security-related events. SecureScience InterScout (open-source) and Syhunt's log analyzer (commercial) have both existed for quite some time and provide some application IDS and application forensic abilities.
http://www.securescience.com/home/newsandevents/news/interscout1.0.html
http://www.syhunt.com/section.php?id=log_analyzer
Posted by: Andre Gironda | June 14, 2007 at 09:42 AM
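The log-scanning approach Andre's comment describes can be sketched in a few lines. This is a minimal illustration, not how InterScout or Syhunt actually work, and the three regexes below are toy signatures, far from an exhaustive ruleset:

```python
import re

# Toy attack signatures for scanning web server access logs;
# illustrative only, not a production ruleset.
SUSPICIOUS = {
    "sqli": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "traversal": re.compile(r"\.\./"),
    "xss": re.compile(r"<script", re.I),
}

def scan_access_log(lines):
    """Return (line_number, rule_name, line) for each suspicious request."""
    hits = []
    for n, line in enumerate(lines, start=1):
        for name, pattern in SUSPICIOUS.items():
            if pattern.search(line):
                hits.append((n, name, line.strip()))
    return hits
```

The limitation is also visible here: signatures over generic access logs can only guess at intent, whereas an application emitting its own security events (as the post argues) knows exactly what was attempted.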
It is a kind of dogma that security has to be built in from the very beginning to be worthwhile.
But, empirically, this never seems to happen. It might be a nice idea, but generally we are faced with a lot of historical evidence that security is always a catch up game.
In a series of rants called GP I explore the idea that security should not be built in from the beginning, and instead look for the right time to add it. In a sense, I say that business comes before security, and security is just another of those add-ons that has to be done once the business model is proven; but put it in before the right point, and you'll kill the business.
Sure, a dead business is secure, but what's the point of that?
PS: the blog's built-in security knocks out URLs with https in them ... go figure
Posted by: Iang (GP essays) | June 14, 2007 at 01:16 PM
Ian - I certainly agree that you cannot start by saturating the entire SDL with all things security. You have to phase it in. I also explored some approaches - top down, start at the end, start in the middle here:
http://1raindrop.typepad.com/1_raindrop/2006/01/phasing_securit.html
As to your "dead business" problem - our jobs would be _so_ much easier if we could just do C and I and not have to worry about A.
Posted by: Gunnar | June 15, 2007 at 10:08 AM
Hi Gunnar,
I think that it is important for services to do this in a wider scope and also watch for technical and other problems.
I called this pattern the "blogjecting watchdog": a service monitors itself, tries to heal itself if it can, and notifies the world if it identifies any problem.
Arnon
Posted by: Arnon Rotem-Gal-Oz | June 15, 2007 at 12:40 PM