1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.



Building a Security Architecture Blueprint

This week I spoke at the Secure 360 conference on Building A Security Architecture Blueprint (slides). My thesis is that information is a strategic enterprise asset (in many cases it *is* the business), yet the typical enterprise approach to securing that information, or even to risk management, is rarely strategic. Last year, I wrote a Security Architecture Blueprint paper to describe one framework for putting a strategic context around an information security program. The main idea is that instead of starting with security goals (cue the ritual CIA invocation), we start by considering security in the context of the stakeholders - business, development, operations, customers, and so on.

You can then use the framework to assign priorities and phasing for information security actions. So instead of letting a random auditor, and the ever-present checklist the Final Four assigns you, drive your program, use a framework that incorporates the business and its goals. A number of people commented on my post on GRC:

Rich Mogull

Much of what we call GRC should really be features of your ERP and accounting software. ... It’s an additional, very highly priced, reporting layer. ...A GRC tool provides almost no value at the business unit level, since it doesn’t help them get their day to day jobs done.

Mike Rothman succinctly gets to the point with a one-liner I am sure will become part of my repertoire:

It's about serving the business, NOT THE AUDITORS. If you protect information effectively (which is a key imperative for the business), then the auditors should be kept reasonably happy. And if not, screw them and fight them. Yes, the auditor can make your life a bit harder, but you don't work for them. Keep that in mind.


So my GRC post seemed to tap into a fair amount of GRC blogohostility. Fair enough, but the main point is not to slam GRC, just the overfocus on it, and the substitution of misdirected marketecture for real-world architecture. Hoff got to the heart of what I was saying - it's about assets:

As I think about it, I'm not sure GRC would be something a typical InfoSec function would purchase or use unless forced which is part of the problem. I see internal audit driving the adoption which given today's pressures (especially in public companies) would first start in establishing gaps against regulatory compliance.

If the InfoSec function is considering an approach that drives protecting the things that matter most and managing risk to an acceptable level and one that is not compliance-driven but rather built upon a business and asset-driven approach ...

So I submit that you should not start with a compliance checklist, but instead build a security architecture blueprint that captures your stakeholders' goals. Assess this against your policy and standards, and your security architecture capabilities. Out of this comes risk management decisions. And off we go into actually building and operating something - hopefully making some profits along the way.

So build blueprints; minimize time spent doing checkbox Olympics. The blueprint I worked on is just a generic framework; you may have a different one. I know that the one I designed is in use in many organizations, and in each case I know of, it has been tailored to local purposes. So it's a beginning, not an end, but those two things are more related than you think, as someone from the financial services industry once said:

In my beginning is my end ... in my end is my beginning

Where you start your security architecture and design matters, and directly affects where you end up.

Anyway, the conference was a lot of fun; I rarely get to do conferences in MN. I got to meet Anton Chuvakin for the first time, and went to the presentation by the local OWASP Minnesota chapter - Robert Sullivan, Joe Teff, and Kuai Hinojosa did a great job giving an overview of what OWASP is all about, demoing WebGoat, and so on.

May 16, 2008 in Security, Security Architecture | Permalink | Comments (0)

Security Architecture Blueprint

The security industry has little strategic cohesiveness; instead, the market is composed of vendors selling an aggregation of tactical, one-off point solutions. The problem is that security is of strategic concern to the enterprise, but the market does not reflect this. This security architecture blueprint shows one way to take a strategic approach to security in the enterprise.

[Figure: Security architecture roadmap]

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property, it can be difficult for enterprise security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations, and for organizing architecture and actions toward improving enterprise security.


This blueprint distills what I have learned on a number of enterprise security engagements into a generic framework. Enterprises know they have security problems today, and will not be perfect tomorrow, next month, or next year -- so the question is how to define a pragmatic way forward.

I was fortunate to get great feedback on drafts from a number of folks including James McGovern, Jim Nelson, and Brian Snow.


May 01, 2007 in Assurance, Deperimeterization, Security, Security Architecture, Software Architecture | Permalink | Comments (3)

Integrated Transaction Puzzle

Robert Morris, Sr. posed a very interesting problem at Defcon a few years back. It describes very well why security technologies like network firewalls and SSL are insufficient by themselves, and in my view why we need technologies like federation, SAML, CardSpace, WS-Security, and other tools to help us build what is required to operate in an increasingly malicious environment.

"This is a long term problem. If you work on it and make any progress against it, you'll find yourself much smarter at the far end, than you were at the near end.

When I was in Norway about 5 years ago, I was there very close to the summer solstice. I was wandering around town at 2 o'clock in the morning and there was plenty of light out. You come to a sign that says New Minsk about 60 km and it points south.

And I ask the lady "what country is this?"

She scratched her head for a bit, and said "well I think it's Norway"

I said "well who plows the roads?"

"well Norway does, but he have to pay them."

There is a triple boundary in this town that I was in between Norway, Finland and Russia.

But what I did there, was, I had a card about wallet size, I stuck it into a machine, I punched in four digits, and it gave me about 2,000 krone, whatever the hell that is.

Now there are a lot of participants in that transaction. When I put a card into that machine, punch in a PIN, and it gurgles for a while, and finally gives me a fairly large amount of money. There are a lot of participants in that transaction. The bank that owned the machine that gave me the money, it gave some money away -- that bank wants it back. The PIN is necessary to convince my own bank that I'm me. But I don't want my PIN to be broadcast all over the world. My bank in the US, it hasn't really given out or taken in any money, really. But there are a lot of credits involved here. Somebody needs to charge somebody else for having more money available. Even though there was actually no cash transfer.

And the problem that I have in mind is

- who are all the participants in an ATM transaction?

- what do those participants need to satisfy their problems?

- how is that in fact done?

In a general way, does the ATM system actually work in some reasonable sense? To which the answer is, by the way: yes. The ATM system damn well works. With extremely high reliability and accuracy. It surprises me. It's quite a bit different than voting machines."

One part of the issue is that the ATM network is more of a puzzle, while the web is a mystery. However, the real gap with firewalls and SSL is that they are all-or-nothing propositions. With network firewalls you are supposedly "inside" the firewall, in the DMZ, or "outside" the firewall. Yet we know that gajillions of transactions and attacks plow right through the firewall on a regular basis. With SSL, we are able to create a confidential channel, but once that channel terminates, say at your load balancer or firewall, you have no way to guarantee any integrity or confidentiality for the transaction beyond the termination point. And you have no way to provide a security model that satisfies multiple parties in the context of a given transaction; this is what I find so interesting about what the WS-Trust standard provides. WS-Trust assumes that different token types (SAML, X.509, Kerberos) are desired at different endpoints, and what we really need is a way to move them around in some standard way. It is partly an integration problem, and partly an interoperability problem. Call it SOA, call it REST, call it Web 2.0, call it eCommerce, call it whatever - this is a problem all technologies that engage in valuable transactions with multiple players need to solve. More thoughts on how are here.
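To make the "beyond the termination point" problem concrete, here is a minimal sketch of message-level integrity (the key handling and message format are made up for illustration, this is not WS-Security itself): the signature travels with the document, so a downstream service can still verify it after the SSL channel has terminated at a load balancer or firewall.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch: message-level integrity that rides with the document,
// so it can be verified after the TLS/SSL channel has terminated.
public class MessageLevelIntegrity {

    // Hypothetical shared key for illustration only; WS-Security would key
    // this to an X.509 cert, Kerberos ticket, or SAML assertion instead.
    private static final byte[] SHARED_KEY =
            "demo-key-not-for-production".getBytes(StandardCharsets.UTF_8);

    static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_KEY, "HmacSHA256"));
        byte[] tag = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag);
    }

    static boolean verify(String payload, String signature) throws Exception {
        // Recompute and compare; production code would use a constant-time comparison.
        return sign(payload).equals(signature);
    }

    public static void main(String[] args) throws Exception {
        String doc = "<withdrawal><amount>2000</amount><currency>NOK</currency></withdrawal>";
        String sig = sign(doc);
        // SSL can terminate at the load balancer; the signature still verifies downstream.
        System.out.println("verified downstream: " + verify(doc, sig));
    }
}

In a real WS-Trust deployment the token would be a SAML assertion, X.509 signature, or Kerberos ticket rather than a shared-key HMAC, but the principle is the same: the security constructs ride with the message, not the pipe.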

January 09, 2007 in Assurance, Deperimeterization, Identity, Risk Management, Security, Security Architecture, SOA, Software Architecture | Permalink | Comments (2)

Vulnerability Puzzles and Mysterious Threats

From Dan Geer, we know that digital security is fundamentally about risk management. We further know that risk is composed of threats exercising vulnerabilities against some set of assets, which may or may not be defended by the countermeasures we deploy. This is all well and good... it looks great in the ppt, but it doesn't exactly help your average middle manager or pragmatic CSO fill in their project plans for what to do about security in an organization.

To help focus and find action steps, I advocate that my clients separate their activities into several categories; two important ones are threat management and vulnerability management. How to differentiate the two? Well, first off, it is helpful to understand where you can be proactive (most desirable) and where you must be reactive. I explored the difference between risk and uncertainty in a paper on Identity Management Risk Metrics:

Risk differs from uncertainty in that risk may be measured and managed whereas uncertainty may not. Risk management efforts hinge on this important distinction because it highlights where a team may be more proactive. For instance, many vulnerabilities are known, hence they may be measured and managed, whereas the threats to a system contain a greater degree of uncertainty, in that the threat environment contains numerous elements, such as threat actors, that one's organization cannot directly control.

Malcolm Gladwell explores a related concept - puzzles and mysteries:

" The national-security expert Gregory Treverton has famously made a distinction between puzzles and mysteries. Osama bin Laden’s whereabouts are a puzzle. We can’t find him because we don’t have enough information. The key to the puzzle will probably come from someone close to bin Laden, and until we can find that source bin Laden will remain at large.

The problem of what would happen in Iraq after the toppling of Saddam Hussein was, by contrast, a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much. The C.I.A. had a position on what a post-invasion Iraq would look like, and so did the Pentagon and the State Department and Colin Powell and Dick Cheney and any number of political scientists and journalists and think-tank fellows. For that matter, so did every cabdriver in Baghdad.

The distinction is not trivial. If you consider the motivation and methods behind the attacks of September 11th to be mainly a puzzle, for instance, then the logical response is to increase the collection of intelligence, recruit more spies, add to the volume of information we have about Al Qaeda.

If you consider September 11th a mystery, though, you’d have to wonder whether adding to the volume of information will only make things worse. You’d want to improve the analysis within the intelligence community; you’d want more thoughtful and skeptical people with the skills to look more closely at what we already know about Al Qaeda. You’d want to send the counterterrorism team from the C.I.A. on a golfing trip twice a month with the counterterrorism teams from the F.B.I. and the N.S.A. and the Defense Department, so they could get to know one another and compare notes. If things go wrong with a puzzle, identifying the culprit is easy: it’s the person who withheld information. Mysteries, though, are a lot murkier: sometimes the information we’ve been given is inadequate, and sometimes we aren’t very smart about making sense of what we’ve been given, and sometimes the question itself cannot be answered. Puzzles come to satisfying conclusions. Mysteries often don’t."

What does this have to do with infosec? There is a lot of vulnerability data out there. Many known knowns. Excluding zero days (which are unknown to enterprise security managers), we have many vulnerability puzzles to deal with. These puzzles can be sized, remediated, measured (how many vulns? where? how long to patch?), managed, and so on.
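As a rough illustration of the puzzle side, here is a minimal sketch (the data model and numbers are hypothetical, not from any real tracker) of the kind of counting and measuring that known vulnerabilities lend themselves to:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// Minimal sketch: known vulnerabilities treated as a measurable puzzle.
public class VulnMetrics {

    // Hypothetical record of a finding: where it is, how bad, when found/fixed.
    record Vuln(String asset, String severity, LocalDate found, LocalDate fixed) {
        boolean open() { return fixed == null; }
        long daysToFix() { return ChronoUnit.DAYS.between(found, fixed); }
    }

    public static void main(String[] args) {
        List<Vuln> vulns = List.of(
                new Vuln("web-app", "high", LocalDate.of(2006, 11, 1), LocalDate.of(2006, 11, 20)),
                new Vuln("web-app", "medium", LocalDate.of(2006, 12, 5), null),
                new Vuln("db-server", "high", LocalDate.of(2006, 10, 10), LocalDate.of(2006, 12, 1)));

        long open = vulns.stream().filter(Vuln::open).count();
        double meanDaysToFix = vulns.stream()
                .filter(v -> !v.open())
                .mapToLong(Vuln::daysToFix)
                .average().orElse(0);

        // How many vulns? Where? How long to patch? -- the puzzle questions.
        System.out.println("open vulnerabilities: " + open);
        System.out.println("mean days to remediate: " + meanDaysToFix);
    }
}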

Threats and zero-day vulns are mysteries; they require different tools and techniques, like monitoring and detection processes and tooling. Gladwell on the evolution from the Cold War to the present day:

Then the pressing questions that preoccupied intelligence were puzzles, ones that could, in principle, have been answered definitively if only the information had been available: How big was the Soviet economy? How many missiles did the Soviet Union have? Had it launched a “bolt from the blue” attack? These puzzles were intelligence’s stock-in-trade during the Cold War.

With the collapse of the Eastern bloc, Treverton and others have argued that the situation facing the intelligence community has turned upside down. Now most of the world is open, not closed. Intelligence officers aren’t dependent on scraps from spies. They are inundated with information. Solving puzzles remains critical: we still want to know precisely where Osama bin Laden is hiding, where North Korea’s nuclear-weapons facilities are situated. But mysteries increasingly take center stage. The stable and predictable divisions of East and West have been shattered. Now the task of the intelligence analyst is to help policymakers navigate the disorder.

...

Puzzles are “transmitter-dependent”; they turn on what we are told. Mysteries are “receiver-dependent”; they turn on the skills of the listener.

When an enterprise "Security" organization staffs for only one of these, the other is inevitably shortchanged. When security teams conflate threats and vulnerabilities, the result is confusion. Instead, efforts dealing with threats (tune the listener) and vulnerabilities (tune the transmitter) should be optimized separately; beyond both being part of "security", they don't have that much in common.

January 08, 2007 in Risk Management, Security, Security Architecture | Permalink | Comments (0)

Building Secure Apps in 2007

I am teaching a series of public training sessions on building secure apps with Aspect Security. The classes are open to the public and will be held in Chicago, Boston, NYC, DC, San Jose, and LA. Dates, registration, and more details are here. These public courses are based on the same training sessions that Aspect and I have delivered hundreds of times to different companies around the globe. Usually, the only public classes are at conferences and such.

The courses are on building and testing secure web applications, SOA/Web Services, J2EE, and .NET apps:

Building and Testing Secure Web Applications - Two Day Hands On

Course Overview

Most developers learn what they know about security on the job, usually by making mistakes. Sadly, that’s not working. Most recent data shows that hackers have turned their attention away from operating system and network flaws to web applications as their target of choice. Developers who once could rely on application obscurity are now targeted by hackers who use their programming errors to steal sensitive corporate or government data, make millions of dollars in illicit gains, and embarrass their targets.

Aspect has worked with many government and commercially focused development teams around the country to raise awareness and understanding of security issues. We have developed powerful multi-day versions of this course that focus on the most common application security problems. This course covers the traditional application security issues, including the OWASP Top Ten, plus many other areas that introduce significant additional security risk and are frequently not addressed properly. It also provides a forum for developers to discuss security issues specific to their application and to establish basic security ground rules that will last throughout a project's life cycle.

Aspect has taught this course to thousands of software developers, IT security engineers, and QA team members, including dozens of course offerings to several of the most security-minded defense contractors in the country. With its hands on testing labs, code review scenarios, and group exercises, it works. It is packed with hard-hitting examples and demonstrations of flaws uncovered in real-world code review and penetration testing efforts. SANS has selected and now offers Aspect’s course through its major SANS events all around the country. From SANS: “This is the one course in the country that has been successful in teaching application developers the most common application security problems and how to avoid them.”(More details)

SOA, Web Services, and XML Security - One Day

Course Objectives

Understand the real risks in SOA, Web Services, and XML, not just the hype:

  • What standards there are to help and how to use them
  • Where standards don't help

After completing this class, you should be able to:

  • Architect security services in Web Services and SOA
  • Understand how an attacker looks at Web Services
  • Use best practices
Course Overview

The movement towards Web Services and Service Oriented Architecture (SOA) paradigms requires new security paradigms to deal with the new risks these architectures pose. This session takes a pragmatic approach towards identifying Web Services security risks and selecting and applying countermeasures to the application code, web servers, databases, application and identity servers, and related software.

Many enterprises are currently developing new Web Services and/or adding and acquiring Web Services functionality into existing applications -- now is the time to build security into the system!
(More details)

Secure Coding for Java EE - Three Day Hands On Programming

Although Java is billed as a secure language, Java EE applications have as many vulnerabilities as applications built on other platforms. The Java edition of Secure Coding focuses on enabling Java EE developers to build secure applications using Java, J2EE, and common open source Java security mechanisms. It covers servlets, Struts, JSP, persistence layers, and more. (More details)
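For a flavor of the kind of issue such a course spends time on, here is a minimal sketch (a hypothetical DAO, not taken from the course materials) of the classic SQL injection mistake and its fix:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of the classic injection mistake and its fix.
public class AccountDao {

    private final Connection conn;

    AccountDao(Connection conn) { this.conn = conn; }

    // Vulnerable: attacker-controlled input is pasted into the SQL text,
    // e.g. username = "x' OR '1'='1" returns every row.
    ResultSet findAccountUnsafe(String username) throws Exception {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM accounts WHERE username = '" + username + "'");
    }

    // Safer: the driver binds the value as data, never as SQL syntax.
    ResultSet findAccount(String username) throws Exception {
        PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM accounts WHERE username = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}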


Secure Coding for .NET - Three Day Hands On Programming

Microsoft has made secure coding a key part of their software development process. This course teaches the key best practices for securing and testing .NET web applications with hands-on programming exercises. (More details)


Check Aspect's site for more details if you are interested in registering.

January 03, 2007 in Security, Security Architecture, SOA, Software Architecture | Permalink | Comments (0)

New article - Defining Misuse within the Development Process

The latest IEEE Security & Privacy journal has an article that I co-wrote with John Steven on misuse cases. The goal of the series and this article is to get beyond the hand-wavy "security needs to get involved in the development lifecycle", and instead provide the where, what, and how of security's involvement, with specific ways of doing it. Gary McGraw's latest book has excellent methods for this, as does Mike Howard's. One idea that Mike Howard talks about is particularly important: everything that security attempts to impose on developers is like a tax on developers' time. As such, security's requests must be targeted at the areas of strategic importance, and in my view part of what this means is that security should be involved early enough in the lifecycle that it can work in parallel with developers on building more secure code. It is hard to get involved much earlier than use case modeling, which is what this article is about -- Defining Misuse in the Development Lifecycle.

In the article we explore how misuse cases are related to use cases, what to model, how misuse cases relate to architecture, and related considerations. We will continue on this theme over the next several issues of S&P.

November 30, 2006 in Enterprise Architecture, Security, Security Architecture, Software Architecture, Use Cases | Permalink | Comments (0)

Paul Madsen on Evolving Risk

Paul Madsen looks at mutual funds that start out with a riskier, higher-growth portfolio and gradually move to a more conservative approach as retirement nears, and asks:

Why not the same for identity policy, i.e. privacy rules that automatically become more conservative and risk-averse as the user ages?

This maps pretty well to security in the business world. Erstwhile static analysis luminary Brian Chess showed an example of an evolving risk model for a start-up business: as business risk decreases (the company gains market share), the security risk (which starts small) grows as the company gathers assets. In 1999, Google's business risk for company viability was high and its security risk was low; now they are reversed.

When the business reality is dynamic and the security model is static, then errors creep in. This situation can also be looked at as a Grigg Point.

I blogged a similar issue a while back that was in a JASON document:

"Several major espionage cases have shown a systemic weakness in the present security system, namely the fact that individuals are most often treated as either “fully trusted” (cleared) or “full untrusted” (uncleared). That is, trust is treated as a discrete, not a continuous, variable. A major reason for this is that a down-transition between these two states — revoking someone’s clearance — is so drastic an action that line managers, and even security managers, try to avoid it at almost any cost.

The Aldrich H. Ames case is a particularly famous, and perhaps egregious, example of this phenomenon.

Not wanting to rock the boat, managers at multiple levels dismissed, or explained away, warning signs that should have accumulated to quite a damning profile. In effect, each “explanation” reset Ames’ risk odometer back to zero. ... in effect, a continuously variable level of trust. The individual manager, therefore, is never faced with an all-or-nothing decision about whether to seek suspension of an employee’s security access. Instead, the manager has clearly defined, and separable, responsibilities in functional (“getting the job done”) and security (“work securely”) roles."

As usual, the shades of gray in reality don't map too well to black-and-white models. As John Quarterman shows, risk moves.
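To make the discrete-versus-continuous point concrete, here is a minimal sketch (my own illustration, with made-up weights, not the JASON proposal itself) of trust as a continuously variable score that warning signs erode, rather than a cleared/uncleared boolean:

// Minimal sketch: trust as a continuous variable that warning signs erode,
// rather than a cleared/uncleared boolean that only revocation can change.
public class TrustScore {

    private double score = 1.0;                      // 1.0 = fully trusted, 0.0 = none
    private static final double REVIEW_THRESHOLD = 0.5;

    // Each warning sign lowers trust; nothing "resets the odometer to zero".
    void recordWarningSign(double weight) {
        score = Math.max(0.0, score - weight);
    }

    // Good standing rebuilds trust gradually instead of flipping a switch.
    void recordPeriodOfGoodStanding() {
        score = Math.min(1.0, score + 0.05);
    }

    boolean needsSecurityReview() {
        return score < REVIEW_THRESHOLD;
    }

    public static void main(String[] args) {
        TrustScore employee = new TrustScore();
        employee.recordWarningSign(0.2);   // unexplained affluence
        employee.recordWarningSign(0.2);   // unreported foreign contacts
        employee.recordWarningSign(0.2);   // signs accumulate, they are not explained away
        System.out.println("trigger graded review: " + employee.needsSecurityReview());
    }
}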




November 20, 2006 in Identity, Security, Security Architecture, Software Architecture | Permalink | Comments (0)

Bionic Hornets

SOA? That is soooo 2006. Looks like some folks are already cracking on HOA. Hornet oriented architecture, doncha know? Maybe in an HOA more than 28% of the systems will use security? Israel Developing anti-militant bionic hornet (h/t John Robb):

Israel is using nanotechnology to try to create a robot no bigger than a hornet that would be able to chase, photograph and kill its targets, an Israeli newspaper reported on Friday.

The flying robot, nicknamed the "bionic hornet", would be able to navigate its way down narrow alleyways to target otherwise unreachable enemies such as rocket launchers, the daily Yedioth Ahronoth said.

Now that is what I call decentralized security. This all brings up a question or two for information security. Can you get an XML parser on a hornet, for example? I suppose if you can put one on a smart card today, an XML-capable hornet should not be too far in the future.

November 17, 2006 in Deperimeterization, Security, Security Architecture, Software Architecture | Permalink | Comments (0)

Andre Durand - Firewall This

Andre Durand has two great, short posts on firewalls

So where do I deploy my firewall now?


Andre previously blogged

I was in a management meeting this morning, and we were talking about how a number of clients are asking Ping Identity to do more and more of the management surrounding their federation initiatives and the entire notion that 'security is going cross-company, and what does that really mean' struck me in a way it never had before.

...

The fact is, as enterprises outsource everything possible that's not core or strategic, they're going to need ways to control, manage and secure them. That's where identity and the notions of identity federation come in, and I'm not talking just single sign-on, I'm talking 'internet-scale security'.

In a sense, the entire notion of a "firewall" as defining what's protected from the 'outside' is simply outmoded. There is no 'them' and 'us'. There is only the difference in how you define and control 'us'. Companies need ways to maintain security, but they are drawing the lines around what they need to protect all over the internet. The NEW security company will be the one that helps them do that.

This more or less captures in a few short paragraphs the essence of what I explored in a few posts on Decentralization and Good Enough Security. That series yielded a neat meme:

Thomas Barnett:

All in one must yield to the distributed many

POST: Decentralization and "Good Enough" Security, by Gunnar Peterson

Great post. Centralized security in a federated world: I love it! That, in a nutshell, is why the all-in-one Leviathan must yield to the distributed SysAdmin force.

So it's not so much that firewalls are dead or useless, but they do not deal with most threats, and in general do not represent the best investment for your security dollars.

[Image: Your firewall hard at work]

Phil Windley:

This idea is the premise of my own book on digital identity: the old centralized models that require everything behind a firewall no longer work, and good identity infrastructure is crucial to resolving that dilemma. I’m writing specifically about identity in the computer-system sense, but there are close parallels to the global world as well. Note that if you’ve read my book you’ll understand that this is a far cry from a call to implant RFID chips in everything that moves.

BTW, Phil's book is excellent, and I recommend you buy a copy. I will blog a review shortly.

Mike Rothman:

Disappearing perimeter and new applications - we're screwed This post and resulting discussion from Gunnar Peterson provide a lot of food for thought. He uses a decentralization vs. centralization metaphor to make the point that we are inextricably moving toward distributed data. SOA and web services guarantee that. That means the traditional, centralize and apply draconian policies of many security practitioners are no longer valid. Oh boy. If you aren't following me, maybe this quote from the post will help: "The perimeter in an SOA is the document, not the network. The security model is defined by the security constructs in the document, not the network firewall." That kind of screws everything up, no? Gunner goes into a bit more detail in a follow-up post (here) - but understanding the concept is critical. This is the case for why we need to separate out infrastructure security and data/information security. Right here. Read it and understand it.

If your security model does not mirror where the business and technology are going, then it's broken. Simple as that. The time for pretending that ivory tower security - flaming firewalls on Visio drawings with thousands of open ports and exceptions - is security is pretty much over. Your business is not a castle behind a moat; it is a node in the midst of many-to-many relationships. Don't be afraid of it; design for this reality.
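If the perimeter is the document, then the access decision has to key off the security constructs carried in the document. Here is a minimal sketch of that idea (the claims format, issuer URL, and roles are all hypothetical, and in practice the claims would arrive as a signed SAML assertion or similar token rather than a bare map):

import java.util.Map;
import java.util.Set;

// Minimal sketch: the access decision keys off claims carried in the document,
// not off which side of the network firewall the request came from.
public class DocumentPerimeter {

    // Hypothetical trusted issuer; in practice established via federation metadata.
    private static final Set<String> TRUSTED_ISSUERS = Set.of("https://idp.partner.example");

    static boolean authorize(Map<String, String> claims, String action) {
        if (!TRUSTED_ISSUERS.contains(claims.get("issuer"))) {
            return false;                             // unknown issuer, no trust
        }
        if ("approve-payment".equals(action)) {
            return "treasury".equals(claims.get("role"));
        }
        return "employee".equals(claims.get("role"))
                || "treasury".equals(claims.get("role"));
    }

    public static void main(String[] args) {
        Map<String, String> claims = Map.of(
                "issuer", "https://idp.partner.example",
                "subject", "alice",
                "role", "treasury");
        // Same decision whether the request arrived from "inside" or "outside".
        System.out.println("approve-payment allowed: " + authorize(claims, "approve-payment"));
    }
}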

October 30, 2006 in Deperimeterization, Security, Security Architecture, SOA, Software Architecture, Web Services | Permalink | Comments (1)

Whatever happened to give 'em enough eyes?

This past OWASP App Sec conference was the second in a row where folks from Microsoft gave an overview of what they are doing wrt software security, and the response from the open source folks was, both times, more or less "hey, we aren't as big as Microsoft, how do you expect us to do all this stuff?" I think this response is pretty weak; after all, open source systems have over the years certainly given Microsoft a run for its money in reliability, scalability, and security. Now that Microsoft has turned its own flywheel and by all appearances made some radical moves forward, shouldn't the open source people do the same? I have several friends who write security-focused open source software that is in wide use, and a common complaint of theirs is that it is hard to get people interested in contributing to security projects. Perhaps security is not as interesting to people as other things.

Hey guys, whatever happened to "given enough eyes, all bugs are shallow"? Maybe we can rephrase this as "given enough motivated, security-clueful eyes, all bugs are shallow." Anyhow, let's roll the tape:

It started last spring in Belgium. Johan Peeters moderated a panel on "Should companies be emulating Microsoft’s Security Development Lifecycle?" Johan's intro:

Microsoft is a company we love to hate. In particular, the security of Microsoft products has been the target of fierce criticism. However, in the last few years, Microsoft has made a concerted effort to improve the security of their products. The Windows Security Push was launched in 2002 in the run up to the release of Windows Server 2003. At that time, the seeds of the Security Development Lifecycle (SDL) were sown. This process has since been refined by many more security pushes.

There is a lively debate on Johan's blog entry about all of this. The net result at the conference was that the open source proponents deemed the things the Microsoft SDL proposes, like threat modeling and fuzzing, too hard and time-consuming for open source developers. I am sure there are line managers at software vendors like Microsoft who would say the same thing, but there seems to be near-universal rejection of this from open source.

Deming: "It is not necessary to change. Survival is not mandatory."

The simple fact is that the threats Microsoft software faces in the field are the same threats that open source software faces. The only question is: how do you engineer your software to be resilient? Last week Michael Howard made a number of interesting points in his presentation, "How the SDL Improved Windows Vista". He discussed finding bugs as early as possible as the best bang for the security buck. He also noted that Microsoft has a number of central analysis teams, including a central fuzzing team for fuzzing file formats, network protocols, and so on.
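Fuzzing sounds exotic, but the core loop is simple. Here is a minimal, toy sketch (the seed input and the parser are invented stand-ins, not Microsoft's harness): take valid input, flip random bytes, and see whether the code under test falls over.

import java.util.Random;

// Minimal sketch of file-format fuzzing: mutate valid input, watch the parser.
public class MiniFuzzer {

    public static void main(String[] args) {
        byte[] seed = "NAME=alice;BALANCE=100".getBytes();
        Random rnd = new Random(42);

        for (int i = 0; i < 1000; i++) {
            byte[] mutated = seed.clone();
            // Flip a few random bytes in the otherwise valid input.
            for (int flips = 0; flips < 3; flips++) {
                mutated[rnd.nextInt(mutated.length)] = (byte) rnd.nextInt(256);
            }
            try {
                parse(new String(mutated));           // code under test
            } catch (RuntimeException crash) {
                System.out.println("iteration " + i + " broke the parser: " + crash.getMessage());
            }
        }
    }

    // Stand-in for the real parser being fuzzed.
    static void parse(String record) {
        for (String field : record.split(";")) {
            String[] kv = field.split("=");
            if (kv.length != 2) throw new IllegalArgumentException("malformed field: " + field);
        }
    }
}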

Regarding threat analysis, the trick I have used in the field, which is consistent with Microsoft's findings, is to avoid having people create attack trees on their own and instead give them threat model patterns they can plug in. He goes on to make the point that because security is human vs. human, you can never be done. It would seem that some folks in open source would like to say "hey, we were better than Microsoft at security in 1999, so we are still better." Microsoft has executed an OODA loop or two since then; here is hoping that open source executes one soon.

The last part of the talk concerned how these security improvements manifest themselves in Vista.

x64 systems can leverage PatchGuard, which only loads signed code into the kernel. This sounds very similar to a test Brian Snow referred to, where the NSA enforced digital signature checks on all calls to critical kernel modules. On a Solaris system this resulted in only single-digit percentage performance impacts. That is a nice defense. Where is the uptake of this in open source, or Unix in general for that matter?
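The underlying mechanism is just public-key signature verification before code is trusted. A minimal sketch (keys are generated on the fly only for illustration; a real loader would pull the publisher's public key from a trusted certificate store):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Minimal sketch: verify a publisher's signature over a module before loading it.
public class SignedModuleCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical publisher key pair, generated here only for illustration.
        KeyPair publisher = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] moduleBytes = "pretend this is compiled module code".getBytes(StandardCharsets.UTF_8);

        // Publisher signs the module at build time.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(publisher.getPrivate());
        signer.update(moduleBytes);
        byte[] sig = signer.sign();

        // Loader verifies before admitting the module into the trusted core.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publisher.getPublic());
        verifier.update(moduleBytes);
        boolean ok = verifier.verify(sig);

        System.out.println(ok ? "signature valid, load the module" : "refuse to load");
    }
}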

IE7 runs as a low-integrity process, and Microsoft said they are working with Firefox to get it to run as a low-integrity process as well. This is designed to limit damage to other system components that run at medium integrity, so an attacker should not be able to write up to those medium-integrity processes. Again, something that Firefox sorely needs. Granted, they needed the constructs that Vista provides to do this, so they are not necessarily behind in this case.

This slide summarizes protections against a variety of exploits across systems:


[Slide: exploit mitigations across systems]

So we have seen that Microsoft has taken steps to protect its users, and we have seen some uptake in open source (and of course you are always free to take matters into your own hands with open source), but we have not seen a groundswell of defense-in-depth mechanisms to match what is coming out of Redmond. Where does this leave commercial Unix vendors like Sun, IBM, and Apple? One thing I would say to them is: if they can see past religion, they have a great model with which to begin their security journey.

Update: Matasano adds some interesting related info. Reading the comments, you can see that it is still hard for people to see past religion.

October 23, 2006 in Security, Security Architecture, Software Architecture | Permalink | Comments (0)
