The Economics of Finding and Fixing Vulnerabilities in Distributed Systems
Alexandria, VA
October 27, 2008
Gunnar Peterson
Managing Principal, Arctec Group
Blog: http://1raindrop.typepad.com
When Andy Ozment asked me over the summer to do this talk at QoP, I knew back in August that the topic I wanted to address was security and economics. So to that end I would like to start by thanking all of our friends on Wall Street and here in Washington DC for providing such a rich tapestry of recent events that I can speak to.
Like many people in this industry, my focus on security was fundamentally altered by Dan Geer's speech "Risk Management is Where the Money Is" [1]. There are not many people who can call a ten year shot in the technology business, but Dan Geer did. The talk revolutionized the security industry. Since that speech, the security market, the vendors, the consultants, and everyone else has realized that security is really about risk management.
Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter [2] talking about financial institutions' ability to deal with the subprime mess in the housing market, saying, "You don't know who is swimming naked until the tide goes out." In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.
So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation - "All these people are talking about risk, but they don't have any assets." You can't do risk management if you don't know your assets.
Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against them, and the efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.
Recent events have taught us again that, in the financial world, Warren Buffett has few peers as a risk manager. I would like to spend the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.
Warren Buffett's evolution as an investor can be broken up into two parts. He began his career very much influenced by Ben Graham, who sought to buy "cheap stocks", comparing the price of the stock to the value of the company's assets, and placing many, diversified bets on companies whose share price was below the total assets. Note that the businesses may have been of unremarkable quality, but when the price was right Graham would buy in, wait for the price to rise, and then sell. This was the dawn of value investing.
Buffett's later career departed from Graham's strict, statistical measures; he sought to buy into companies that were selling at a fair price but were also high quality businesses. We will examine high quality in Part 2 of this talk, but first we go to Part 1, which is asset value.
Why does a talk on finding and fixing vulnerabilities start with valuing assets? The reason is that vulnerabilities are everywhere; we are literally marinating in them. Interesting vulnerabilities are attached to high value assets. In a world that quite literally presents us with too much information, we need screens to sift out what is worth paying attention to. You can run your vulnerability assessment tool of choice on your system and come back with hundreds or thousands of vulnerabilities, but which ones should you pay attention to and act on? The first part of answering this question is asset value.
When Warren Buffett was 19 years old, studying at the University of Nebraska, he read Ben Graham's book "The Intelligent Investor". Buffett said he thought it was the best book on investing he had ever read, and he still feels that way today. In The Intelligent Investor, Graham lays out the framework of value investing. Specifically, Graham talks about three concepts: Mr. Market, a stock is a piece of a business, and Margin of Safety.
Mr. Market is a fictional teaching device invented by Graham. You imagine that you have a somewhat manic depressive business partner called Mr. Market. Every day, Mr. Market comes into the office and offers you quotes on companies; some days he is in a good mood and the prices are high, other days he is gloomy and prices are low. The market is a quote machine for quoting prices, not a value assessment machine. Your job is to wait for the right price, and you are free to take as many passes and be as patient as you would like; Mr. Market will just show up the next day and throw out a new price.
Graham used Mr. Market to teach us the separation between the price of a stock and the value of a company. The second big concept from The Intelligent Investor is that buying a stock is buying a small piece of the underlying business. You are not buying a roulette chip, or a number that fluctuates in the newspaper every day; rather, you are buying a piece of the company's existing and future cash flow. What the stock market says General Electric is worth yesterday, today, or tomorrow is separate from GE's actual ability to generate cash flow.
The last big concept in "The Intelligent Investor", and the one seemingly most applicable to information security, is the Margin of Safety. Graham's margin of safety involved calculating the intrinsic value of a business and then buying stock where the market cap of the company is less than its intrinsic value. So if a company has $100 million in assets and a market capitalization of $75 million, then an investor would get a 25% margin of safety. Ideally, Graham wanted to buy stocks that were selling for one half of their book value, i.e. with a 50% margin of safety. Graham said that buying stocks without a margin of safety, above their book value, was speculation, not investing.
So price is readily available, but how do we calculate intrinsic value so that we can ascertain the margin of safety? Graham used quantitative statistical measures, relying heavily on the company's book value, i.e. its hard assets: what would it take for a competitor to reproduce the company's assets - its factories, distribution system, and so on? The difference between the book value of the assets and the market cap is the margin of safety.
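To make Graham's arithmetic concrete, here is a minimal sketch in Python; the asset and market cap figures are just the example numbers from above, not real companies.

def margin_of_safety(intrinsic_value, market_cap):
    # Graham-style margin of safety: how far below intrinsic value the market price sits.
    return (intrinsic_value - market_cap) / intrinsic_value

# The $100 million assets / $75 million market cap example above: a 25% margin of safety.
print(margin_of_safety(100_000_000, 75_000_000))   # 0.25
# Graham's preferred discount, buying at half of book value, gives a 50% margin.
print(margin_of_safety(100_000_000, 50_000_000))   # 0.5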
What can we learn in information security from this quantitative approach? Where price and value are readily ascertainable, we should build countermeasures and eliminate vulnerabilities in a way that gives our assets a wide margin of safety. Since budgets are not unlimited, we should prefer vulnerabilities that are cheap to find and cheap to fix.
First to the asset question: information security budgets, like all IT budgets, are crufty. They are not a reflection of today's top issues and priorities so much as an accumulating snowball of decisions, legacy contracts, and solution attempts at yesteryear's problems. Today the normal Information Security budget is just a legacy artifact from bygone years when the network was the purported greatest vulnerability. If you were around in 1995, you remember the great gnashing of gears as enterprises opened up their networks, connected their back ends to the Web, and began to transact business in that giant virtual space.
The security people huffed and puffed that it was dangerous, but there was simply too much money to be made, so businesses went ahead. The security people would not go down without a fight and insisted on countermeasures. They got two: the network firewall and SSL. The firewall was used to separate the average Fortune 500's network of hundreds of thousands of machines, employees, consultants, and partners from the web at large. SSL was used to protect the network channel between the web server and the client browser. So the network firewall separated the network segments, and SSL in effect encrypted the last mile of many millions of complex transactions and computations.
In 1995, this seemed like a good security architecture. When we built out these security architectures, the eCommerce market was derided as a toy. Amazon famously lost money for years, losing a little on every transaction but making it up in volume. When the market is nascent, a quaint security architecture offers cost effective protection. But what about 2008? Those cute little eCommerce buggers have grown up, and they even make profits now: market caps measured in the tens of billions, large accumulated cash hoards, no debt, and the largest ones are in better financial shape than the financial services players that kicked sand in their faces in the dotcom era.
And it's not just eCommerce; the "real" economy Fortune 500 types are all connected as well. Directly and indirectly, the Web is seeping into all businesses. These are major changes from when the security architecture of the web was built out. But has the security architecture changed to reflect these new business realities? Not a bit of it!
We can use the book value of the IT budget investments and the book value of the Information Security investments to see what kind of Margins of Safety Information Security groups are engineering.
Let's look at some market data. Gary McGraw reviewed the numbers [3] in software security for 2007, breaking down software security sectors like tools and services. Here is a summary of his findings on software security tools:
"One of the most important developments in the software security market can be seen in the tools space which, combined, almost doubled to $150-180 million. Top of list are two major acquisitions that closed in 2007: Watchfire's purchase by IBM (somewhere in the range of $120-150 million on 2006 revenue of $26 million) and SPI Dynamics's purchase by HP (for around $100 million on 2006 revenue of $21.2 million).
...
The black box space was flat in 2007, with IBM/Watchfire checking in at $24.1 million and HP/SPI Dynamics earning $22.3 million. Smaller companies in the space, including Cenzic, Codenomicon, WhiteHat and the like had combined revenues around $12.5 million (a growth of 25%, though Cenzic grew 16% and WhiteHat 52%). Most of the growth "hiccup" in the black box market can be attributed to the serious challenges posed by any acquisition. So far 2008 looks to be back on track from a growth perspective in the black box testing space. The global reach that IBM and HP offer are already making a big difference.
On a more positive note, static analysis tools for code review grew at a healthy clip in 2007 into a $91.9 million dollar market. Fortify was up 83% to $29.2 million. Klocwork grew over 60% to $26 million. Coverity grew over 50% to $27.2 million. Ounce Labs tripled their revenue to $9.5 million."
These are very nice growth numbers; what company doesn't want 83% growth? However, let's look at the total picture and compare the software security countermeasures against other security mechanisms. Gary McGraw's estimate shows the software security space coming in at around $150 million total, yet we see a company like Checkpoint, which won the network security war in 1995, with earnings of around $900 million. One single network security vendor is six times bigger than the entire software security space. In what alternate universe does this make sense?
This is where we begin to see that decisions in the People's Republic of Information Security have no real risk management thinking behind them; they truly are swimming naked and hoping the tide doesn't go out.
Let's look at network assets. Obviously Cisco is the biggest; they earned $39.5 billion last year. Pretty stellar. So spending $900 million (Checkpoint) to defend $39.5 billion seems like a pretty good deal.
Except, let's compare software security spending: last year Microsoft earned $60 billion, SAP $16 billion, and Oracle $22 billion. That is about $98 billion in just three vendors, and you are going to "defend" that by allocating $150 million worth of software security tools?
On the network side we are buying $900 million of security countermeasures (Checkpoint firewalls) to protect $39.5 billion worth of Cisco gear; about 2.3% of the network investment goes to security.
On the software side, we are buying $150 million of security countermeasures (like static analysis and black box scanners) to protect $98 billion of software (you know, the stuff that runs the whole business); roughly 0.2% of the software budget goes to security.
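The back-of-the-envelope math behind those two percentages, as a small sketch using the rough 2007 revenue figures quoted above (these are proxies for the markets, not precise market research):

# Rough 2007 revenue figures quoted above, in millions of dollars.
network_assets = 39_500      # Cisco
network_security = 900       # Checkpoint
software_assets = 98_000     # Microsoft + SAP + Oracle
software_security = 150      # total software security tools market per McGraw

print(f"network security spend:  {network_security / network_assets:.1%} of network assets")    # ~2.3%
print(f"software security spend: {software_security / software_assets:.1%} of software assets")  # ~0.2%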
This is very disturbing. From a prioritization standpoint, the People's Republic of Information Security is misaligned by at least an order of magnitude. Next time you read about a data breach, or see an auditor's report with thousands of findings, you won't have to wonder how it happened. It happened because Information Security doesn't have its eye on the ball. It invests in network security not because those controls have greater efficacy (the whole point of networks is that they are dumb); it invests in network firewalls because it bought a bunch in 1995, some more in 1998, and heck, it just kept buying them. The Checkpoint rep kept showing up and taking CISOs out to play golf, contracts got renewed, and poof, there goes the security budget.
Consider that software security tools could grow 50% a year for five years and still be half of where Checkpoint is today.
The optimistic way of looking at all this data is that there is major room for growth in software security. If you take network security as the benchmark for a mature industry and assume that 2.3% is a reasonable margin of safety, then software security should evolve to around 2% of the software space, meaning it should grow into a roughly $2 billion space, around fifteen times larger than it is today. Unprotected assets will either be protected or will cease to be assets. VCs, get your checkbooks ready.
My friend Brian Chess has a nice way of looking at this. He says 2007 was the turning point: "the first year there was a bigger market for products that help you get code right than there was for products that help you demonstrate a problem exists."
Now, I am not suggesting that Information Security budgets have to be aligned with the IT budget one for one, but I do think that looking at the overall IT budget is the starting point. If Information Security has a more cost effective security mechanism, it should deploy it, but the starting point should be aligned to the business. Businesses spend most of their money on software, and there are very good reasons: competitive advantage, increased revenues, and lower costs. Information Security spends most of its money on network security, and there is no good reason why, except that it was a seemingly good idea in 1995. You really don't have to go beyond the book value of IT investment as a whole versus Information Security investment to see a stunning disparity. Information Security's job is to deliver a Margin of Safety to the business, and it is not doing that.
To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:
- Network: all the resources invested in Cisco, network admins, etc.
- Host: all the resources invested in Unix, Windows, sys admins, etc.
- Applications: all the resources invested in developers, CRM, ERP, etc.
- Data: all the resources invested in databases, DBAs, etc.
Tally up each layer. If you are like most businesses, you will probably find that you spend the most on Applications, then Data, then Host, then Network.
Then do the same exercise for the Information Security budget:
- Network: all the resources invested in network firewalls, firewall admins, etc.
- Host: all the resources invested in Vulnerability management, patching, etc.
- Applications: all the resources invested in static analysis, black box scanning etc.
- Data: all the resources invested in database encryption, database monitoring, etc.
Again, tally up each layer. If you are like most businesses, you will find that you spend the most on Network, then Host, then Applications, then Data. Congratulations, Information Security, you are diametrically opposed to the business!
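If you want to make the comparison concrete, the tally can be as simple as the following sketch; the dollar figures are placeholders, so plug in your own budget lines per layer:

# Placeholder dollar figures purely for illustration; substitute your own budget lines.
it_spend       = {"network": 5_000_000, "host": 8_000_000, "applications": 20_000_000, "data": 12_000_000}
security_spend = {"network":   900_000, "host":   400_000, "applications":    150_000, "data":    100_000}

# Compare where the business invests versus where security invests, layer by layer.
for layer in ("network", "host", "applications", "data"):
    ratio = security_spend[layer] / it_spend[layer]
    print(f"{layer:12} IT ${it_spend[layer]:>12,}   security ${security_spend[layer]:>10,}   ({ratio:.1%} of layer spend)")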
It's not just about alignment for alignment's sake; it's about applying controls so that the Margin of Safety is properly placed, so that when (not if) there is a failure on a higher value asset, you are relatively better positioned to deal with it.
The pure statistical approach can only take us so far. Buffett said he would be a lot poorer if all he did was listen to Ben Graham. Book value is great to see the diametric opposition mentioned above, but it doesn't really tell us much about the efficacy of the security mechanisms.
What we do get out of this statistical approach is a screen. The asset value screen filters out subjective opinion and narrows the field for where we need to dig in to do the high value, time consuming analytical work.
The second part of Warren Buffett's career and the second part of this talk leave behind pure statistical measures. In Warren Buffett's case, he was joined by a guy named Charlie Munger, who talked him out of the pure Ben Graham approach. Charlie Munger has a saying - "a great business at a fair price beats a fair business at a great price." Where Graham was focused on price and margin of safety, Munger wants a fair price but also a high quality business. This led to Warren Buffett's company, Berkshire Hathaway, investing in companies like Coca Cola, Wells Fargo, and American Express, where the prices were far from dirt cheap (as Graham would have wanted), but the long term returns were outstanding.
In our world of Information Security, we start by aligning our priorities with the business using the thumbnail defense in depth approach, but then we would like to invest in high quality, effective controls.
To get at the notion of control quality and effectiveness, I am going to start part 2 of this talk with a brief history of software. The first web software was just static HTML, but web software really got interesting when developers started creating dynamic websites using CGI and Perl.
Once websites were hooked up to company databases and were not just serving static content, the security people realized they needed a security architecture, and they sprang into action. What they came up with was a model that divided the world into the "good stuff", comprised of all their networks, systems, and data, and everything else, the "bad stuff" on the Internet. So job one of the early Internet security architecture was to separate all your good stuff (i.e. your network) from the bad stuff (the Internet). To do this the security people used a sophisticated tool called Visio to draw a flaming brick wall on the network diagram, and this flaming brick wall was supposed to keep the good stuff and the bad stuff separate.
The security people also realized that the data and session tokens that they served up from their Web server would have to traverse the "bad" neighborhood called the Internet, so they added one more security mechanism to secure the last mile of the transaction - SSL between the browser and the Web server.
And this was the state of the art security architecture used circa 1995 to protect the earliest dynamic web applications.
What happened next was the dotcom boom, and businesses realized they could make some real money on the Web. The web apps started to get more sophisticated: more personalization, richer session experiences, and so on. This led the Java people to create JSP, the Microsoft people to create ASP, and of course the Perl people to create even greasier Perl scripts, all in an effort to pool resources and sessions on the Web server. The security people defended this new application programming model with the network firewall and SSL.
Around 1998, developers began building out more distributed N-tier or 3-tier applications that separated the business logic layer, the presentation layer, and the data access layer. Among other things, your web application could seamlessly integrate data from multiple back-end systems. Let's say you have pricing data in Oracle, order data in SAP, and customer data on a mainframe. You write separate data access objects, apply business logic in the middle tier, and then tie it all together in a friendly user interface. At this point web applications were beginning to integrate across departments and geographic boundaries; huge, critical chunks of the business were now connected to the web. How did the security people defend this part of the business? They applied the same 1995 security architecture: network firewall and SSL.
In the 1999-2000 timeframe, businesses relied on web applications for major parts of their revenue, and the apps were built in different technologies like Java and Microsoft technologies, but the customer didn't care (and still doesn't); the customer wanted (and still wants) data access and functionality. So to integrate the disparate technologies, SOAP and XML were deployed so that Microsoft could talk to Java, Websphere could talk to Weblogic, and so on. And, oh yes, SOAP and XML were used to connect B2B networks so partners in a supply chain or business process could exchange data and interoperate. SOAP and XML present a fundamentally new programming model based on message/document style integration, where XML is used to mesh together data and functionality across platforms. SOAP and XML have no default security model for authentication, authorization, or confidentiality. How did the security people deal with this? They kept the security architecture the same as they had in 1995: network firewalls and SSL.
The software world did not stop innovating in 2000, of course. In the last few years we have seen Web services and XML form the basis of baroque and powerful SOAs and simple REST applications. We have seen Web 2.0 come on the scene, and entirely new networked applications built on top of it.
What we have not seen is a single meaningful change in security architecture in 13 years. Developers have evolved, businesses have increasingly bet their entire business models on the web, and they have increased security budgets. But what does the security architecture, as it is deployed in the field, have to show for all of this? More firewalls and more SSL connections.
Since Information Security has proven incapable of evolving, it is time to learn from a discipline that has mastered innovation - software development. And yes, I will step back in case the lightning bolt hits.
What does software development focus on these days? Well, let's look at Service Oriented Architecture (SOA). All hype aside, I look at SOA as a set of technologies that delivers three things:
Virtualization: we want Beijing, Bangalore and Boston to communicate.
Interoperability: we want our .Net stuff to talk to our Java stuff.
Reusability: how many order/claim/pricing/customer systems does one company need?
To build out their SOA, developers separated the application interface from its implementation. So you can host the interface in a variety of locations, but it is separate from the application logic and data.
This is also a useful trick for putting services like SOAP through the firewall. SOAP was designed as a firewall friendly protocol. When SOAP first came out, Bruce Schneier said calling SOAP a firewall friendly protocol is like having a skull friendly bullet. That is a great line, and it explains why his books fly off the shelves; it does not explain why security people think an architecture designed in 1995 is the one we should be using today. Maybe the problem is not that the developers figured out how to go through the firewall to get the data their customers want; maybe the problem is that the firewall is the sum total of the security architecture, and it never adapted.
A big part of this problem is that we have left Newton's world behind and entered Einstein's universe. Mainframes are Newton's world: we have THE computer, THE price, THE record, and so on.
As Pat Helland explained [4,5], mainframes are Newton's world, but distributed computing is Einstein's world. More specifically, in the Einstein world of distributed computing, "Computers don't make decisions, computers try to make decisions." Our computers don't really make a decision; they say you can buy this book from Amazon at this price, we have it in stock, and we will deliver on such and such a date. But the warehouse runs out, the pallet gets dropped in the warehouse, your book is crushed, and the package is stolen off your front step. The computer confirmed your transaction, but the real world intervened.
So we don't have ironclad decisions; instead it's all about Memories (last time I checked, your book was in stock), Guesses (we should be able to ship on this date), and Apologies (sorry, the forklift ran over your book).
Translating this into security: security mechanisms don't make policy-based decisions, security mechanisms try to make policy-based decisions.
Some examples of memories, guesses and apologies in security:
Memories
Security Policies - for example Triple A policy
Triple A policies can memorize a map of subjects, objects, and roles. They can even replicate these memories and play them back at runtime to try to make policy enforcement decisions.
Guesses
Security Policy Enforcement Decision
Unfortunately, while the policy enforcement decisions can be based on memorized logic, the decision itself is still a guess, even in the case of Triple A. Any guesses why? Because the authentication process itself is a guess. It happens to be a guess that you then bind to a principal, so it looks very official once you bind your guess to a Kerberos ticket or SAML assertion, but it is still a guess.
Apologies
Giant Global Bank is sorry your account was compromised!
And this leads to lots and lots of apologies by companies with poor access control models.
Some additional examples of information security memories, guesses and apologies.
Example Memories - Triple A security policies, audit logs, user account information, authorization logic (a concrete mapping of Subject, Resource, Condition, Action)
Example Guesses - security policy enforcement decision points, authentication logic, monitoring, detection, fraud response
Example Apologies - identity management tools (provisioning, deprovisioning), reimbursing customers for fraud losses, compensating transactions - Giant Global Bank is still sorry your account was compromised!
The point of this is that security memories, guesses and apologies utilize different processes, different people, and different capabilities to be effective.
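One way to see that these really are different capabilities is to sketch them as separate pieces of code. This is a toy illustration only; every name and rule in it is hypothetical, not a real product's API.

# Toy sketch of memories, guesses and apologies applied to access control.
MEMORIES = {"alice": {"role": "teller", "resources": ["account-lookup"]}}   # replicated policy store

def authenticate(user, credential):
    # Authentication is a guess: we bind a credential check to a principal and hope it holds.
    return credential == "correct-password"   # stand-in for a Kerberos ticket or SAML assertion

def authorize(user, resource):
    # Enforcement replays memorized policy to try to make a decision.
    entry = MEMORIES.get(user)
    return bool(entry) and resource in entry["resources"]

def apologize(user, incident):
    # The compensating transaction for when the guess turns out to be wrong.
    print(f"Giant Global Bank is sorry, {user}: {incident}. Reimbursing losses and reissuing credentials.")

if authenticate("alice", "correct-password") and authorize("alice", "account-lookup"):
    print("access granted - still a guess, not a certainty")

# Later, when monitoring or fraud detection shows the guess was wrong:
apologize("alice", "your account was compromised")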
What trends can we identify to lead us toward better qualitative analysis, based on the best practices of virtualization, interoperability and reusability?
Virtualization
Finding Vulnerabilities in a Virtualized World is a problem because applications are more configured than coded; runtime behavior and structure are not apparent due to weak typing and inversion of control.
Result - finding bugs becomes harder. Action - use screens to target finding time and resources.
Fixing Vulnerabilities in a Virtualized World is a problem because of control location: how do I locate the controls when interfaces run in Beijing, Bangalore, and Boston?
Result - synchronization and/or replication of security policy is problematic. Action - decentralize policy enforcement points and policy decision points.
Interoperability
Finding interoperable vulnerabilities
XSS - JavaScript is an equal opportunity offender: interoperability for developers and attackers alike.
Fixing interoperable vulnerabilities
App servers, ESBs, and services are the attacker’s red carpet to your enterprise, right into your book of business. Interoperable access control can be leveraged across the enterprise.
Use XML signature for authentication and integrity
<SOAP:Envelope>
  <SOAP:Header>
    <WSSE:Security>
      <ds:Signature>
        …
        <ds:Reference URI='#body'/>
        …
      </ds:Signature>
    </WSSE:Security>
  </SOAP:Header>
  <SOAP:Body wsu:Id='body'>
    …
  </SOAP:Body>
</SOAP:Envelope>
Use XML encryption to protect sensitive data, don't pass sensitive data in the clear
<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns1:echo xmlns:ns1="http://sample01.samples.rampart.apache.org">
      <param0>My Credit Card Number</param0>
    </ns1:echo>
  </soapenv:Body>
</soapenv:Envelope>
Encrypt the data
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
  …
  <xenc:EncryptedKey Id="EncKeyId-3020592">
    <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
    …
    <xenc:CipherData>
      <xenc:CipherValue>
        XNQ0a4legiie5mWFxO6CQkk2hhldYNnKroObue/LXS/VYtvaTgMbCujhGExDi+vlkU//Qc2/T6mx0WVTmBMT3z8rogha8jD+nS9Zr2Bc3CwoTh2lh8wL3D0DEu91iwJT9JByLGXvt7v9lyuxK0ooDOYEClsH974CPmTs3tBC+GQ=
      </xenc:CipherValue>
    </xenc:CipherData>
  </xenc:EncryptedKey>
  …
</wsse:Security>
To ensure that these controls are applied, use automated tools like static analysis to scan for security mechanism use and coverage.
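A coverage check does not have to be fancy to be useful. As a rough illustration (not any particular tool's behavior, and the directory layout is invented), a naive scan over SOAP message templates might look like:

import re
from pathlib import Path

# Naive illustration of a coverage scan: flag SOAP templates that carry a Body
# but no WS-Security signature or encryption. The "soap-templates" directory is hypothetical.
for path in Path("soap-templates").glob("*.xml"):
    text = path.read_text()
    has_body = re.search(r"<\w*:?Body\b", text) is not None
    has_security = "Security" in text and ("Signature" in text or "EncryptedKey" in text)
    if has_body and not has_security:
        print(f"{path}: SOAP body with no WS-Security signature or encryption applied")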
In terms of reusability of findings and fixes, consider two bug findings:
Session management bug: session state is passed around to every component, service, and user. This makes for many high priority findings in the audit report, and the fix is required in virtually every program.
Data validation bug: a data access object (DAO) has a SQL injection hole. One major high priority finding in the report. The DAO is used by many business logic classes, so one fix location serves many classes.
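The economics of the DAO fix are what make it attractive: one change behind a shared interface. Here is a minimal sketch of that single fix point, using Python and sqlite3 purely for illustration; a real DAO would more likely be Java or .NET code against Oracle, SAP, or the mainframe.

import sqlite3

class CustomerDAO:
    # Single data access object shared by many business logic classes.
    def __init__(self, conn):
        self.conn = conn

    def find_by_name(self, name):
        # Vulnerable version (one finding, reachable from every caller):
        #   self.conn.execute(f"SELECT id, name FROM customers WHERE name = '{name}'")
        # Fix it once with a parameterized query, and every caller inherits the fix.
        return self.conn.execute(
            "SELECT id, name FROM customers WHERE name = ?", (name,)
        ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (?, ?)", (1, "Robert'); DROP TABLE customers;--"))
dao = CustomerDAO(conn)
print(dao.find_by_name("Robert'); DROP TABLE customers;--"))   # matched safely, no injection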
To bring these factors together, I generally use a scorecard index [6], so you can measure such things as transport security, message security, threat protection, and so on. The hard work in developing the index is developing a useful scale. A scale for XML tokens could use the following:
0: no token
1: hashed token
2: hashed and signed token
3: hashed and signed token from standard authoritative source
An example scale for XML validation could use:
0: no validation
1: schema validation
2: schema validation against hardened schema
3: schema validation against standard, hardened schema
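Here is a minimal sketch of how such an index can be tallied; the factor names, example scores, and the scoring function are placeholders rather than the actual checklist in [6].

# Minimal sketch of a scorecard index built from the two scales above.
SCALES = {
    "xml_token": ["no token", "hashed token", "hashed and signed token",
                  "hashed and signed token from standard authoritative source"],
    "xml_validation": ["no validation", "schema validation",
                       "schema validation against hardened schema",
                       "schema validation against standard, hardened schema"],
}

def score(service, ratings):
    # ratings maps factor name -> level (0..3); returns a normalized maturity score.
    max_total = sum(len(scale) - 1 for scale in SCALES.values())
    total = sum(ratings[factor] for factor in SCALES)
    for factor, level in ratings.items():
        print(f"  {factor}: {level} ({SCALES[factor][level]})")
    print(f"{service}: {total}/{max_total}")
    return total / max_total

# Hypothetical example: signed tokens, but only basic schema validation.
score("order-service", {"xml_token": 2, "xml_validation": 1})   # 3/6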
These indexed scales are used to show maturity across the factors in the scorecard. The first part of the talk described value; the value assessment is used to focus time and effort on high value assets, and it can be determined quantitatively. There is hard analytical work in qualitatively determining the scorecard, index, and scales, and the quantitative value assessment is used to screen for the high value targets that deserve that effort. The scoring index is used to track progress and improve quality over time. In the best case scenario, automated tools are used to perform the checks described in the index, and once security is automated, just like software development, we may see security innovation make progress in years, not decades.
Thank you for your time.
1 "Risk Management is where the Money Is" by Dan Geer, http://catless.ncl.ac.uk/Risks/20.06.html
2 Berkshire Hathaway 2007 Shareholder Letter by Warren Buffett, http://www.berkshirehathaway.com/letters/2007ltr.pdf
3 "Software [In]security: Software Security Demand Rising, by Gary McGraw
4 "SOA and Newton's Universe" by Pat Helland, http://blogs.msdn.com/pathelland/archive/2007/05/20/soa-and-newton-s-universe.aspx
5 "Memories, Guesses and Apologies" by Pat Helland, http://blogs.msdn.com/pathelland/archive/2007/05/15/memories-guesses-and-apologies.aspx
6 "Web Services Security Checklist" by Gunnar Peterson, http://arctecgroup.net/pdf/WebServicesSecurityChecklist.pdf
Gunnar,
You only count Cisco for network market cap. Firewalls, if I recall correctly, are more about protecting the servers (and the operating systems) on the network versus just the network devices. So, add Dell, Lenovo, HP, and Sun into the mix to be fair. You could also add IBM, Apple, and Microsoft, too. Their market caps would reduce the magnitude of difference, and probably get closer to that 0.2%, if not pass it.
Posted by: Jon | November 19, 2008 at 05:35 AM
Jon - I can add in more networking companies, but I think Cisco has a pretty large market share, don't you? Care to name any other network companies the size of, say, SAP, Oracle, or MSFT?
Here is the Wikipedia definition of a firewall:
"A firewall is an integrated collection of security measures designed to prevent unauthorized electronic access to a networked computer system. It is also a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all computer traffic between different security domains based upon a set of rules and other criteria."
I believe there are host firewalls like ZoneAlarm, but these make up a relatively small part of Checkpoint's business. When you said "Firewalls, if I recall correctly, are more about protecting the servers," you made my point perfectly: they should be about protecting servers (more specifically the functionality and data abetted by those servers), but instead they are applied at the network layer, blissfully unaware of any real assets behind their PERMIT/DENY binary world.
Posted by: Gunnar Peterson | November 19, 2008 at 06:06 AM
Gunnar,
this is really an inspiring post. Jon's point is valid. Those firewalls are protecting not only network devices, but also most of the applications, devices and boxes inside them.
Richard.
Posted by: Richard | November 20, 2008 at 01:33 AM
Two quick comments:
1) You cannot do reliable market analysis by looking at the sales of dominant players in the marketplace. Sometimes people tend to buy more innovative things from smaller players, and at other times they look at consolidated solutions from major vendors. You might want to look at statistics on enterprise and telecom spending. These statistics are becoming more and more available today.
2) For most, software security products are Quality Assurance tools. For any meaningful results you probably should compare them against other R&D costs and not generic enterprise IT spending. A code auditing tool is not bought to secure the IT network. A fuzzer (black-box tool) is a bit different because it can be used by both QA and IT staff. Static and dynamic security tools are two completely different markets.
Posted by: Ari Takanen | November 20, 2008 at 03:21 AM
I would be very happy if firewalls protected hosts, however they open up ports to those hosts and blithely pass the attacks along. Let's take the simple case of the OWASP Top Ten, assume you have a big scary network firewall in front of your webapp, you are still vulnerable to every single attack in the OWASP Top Ten. On the plus side you got your network addresses translated.
Posted by: Gunnar Peterson | November 20, 2008 at 09:47 AM
In most cases, firewalls are glorified routers. They are often the problem, since they give you the impression that you have some security capability when in fact they blissfully route packets to your servers. The whole DMZ concept is also broken. The bottom line is that our security architectures have not moved with the times. We are layers behind.
Posted by: Marinus van Aswegen | November 26, 2008 at 06:18 AM
My main point with Gunnar's calculation is about the split between network security and software security. My point is that there is overlap between the two. Gunnar included "Hosts" only in the software security protection calculation. I am stating that hosts should also be included in the network protection calculation. That is it. Odd that we digress into what a firewall is or is not, and what it does or does not do. If that is our only discussion point, then I fear for our industry.
For some of the non sequitur replies to my comments: I don't recall talking about how a firewall helps on exposed services. It does not. It helps only on those services not exposed. I don't recall mentioning how much a firewall helps (a subtle, but important point). I just recall mentioning that it does help protect servers. If they don't help at all in protecting servers, then tell your CIO, CISO, CEO, or clients to remove them. Go ahead. I'm waiting...
Sadly enough, my point really only addresses a small bit of Gunnar's post. What's sadder is that all the subsequent comments have only focused on my comment, versus the in depth discussion Gunnar posted about. Please truly accept my apologies for the distraction. It did more of a disservice to Gunnar's very good post than a service.
Can someone comment on the rest of Gunnar's post?
Posted by: Jon | November 26, 2008 at 07:52 AM