1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

When the cure is the problem

From Martin Wolf comes a story with real Infosec parallels (emphasis added):

We must not get diverted by the financial sector's opposition or by populist rage. We must focus, instead, on the core issue. Trying to make financial systems safer has made them more perilous. Today, as a result, neither market discipline nor regulation is effective. There is a danger, therefore, that this rescue will lead to still greater risk-taking and an even worse crisis at some point in the not too distant future.

Either we impose a credible threat of bankruptcy, or institutions we have to support are made safer, or, better, we have both of these. Open-ended insurance of weakly regulated institutions that take complex gambles is intolerable. We dare not return to business as usual. It is as simple - and brutal - as that.

"Trying to make the system safer has made them more perilous", information security routinely gripes that systems are "too complex" and that is why they are insecure - too much code, too distributed and so on. Fair points. But what happens when infosec has to offer a solution of their own? In almost every case, the security protocols and mechanisms that are put in place to mitigate some risk become the most complex part of the system. Does anyone see a problem here?

"Open-ended insurance of weakly regulated institutions that take complex gambles is intolerable", the result of this echoes Brian Snow's point that the most dangerous security posture is to think you are secure and act accordingly when in fact you are not secure.

Both of these issues point to the reality that security mechanisms can only solve so much, and that assurance is required to deal with the rest.

October 21, 2009 in Assurance, Security | Permalink | Comments (1)

Lottery Ticket Assurance

For distributed systems security analysis, lottery ticket systems are interesting to study. Lots of loosely coupled distributed partners that form a federation to broker trust and exchange real value. What happens when a node goes bad though? Here is what happened in Silicon Valley:


San Jose - A sting operation by the Santa Clara County Sheriff's Department has resulted in a group of South Bay convenience store clerks facing charges of stealing winning Lottery tickets. The California Lottery aided in the investigation.


The California Lottery printed up a number of winning Lottery tickets for the deputies to use while they went undercover. The deputies would walk into convenience stores, hand those winning tickets to the clerk and wait to see what happened next. 


In nine cases, the clerk scanned the ticket, saw that it was a winner, but told the deputy, "Sorry. Better luck next time." In several cases, the clerks turned around and tried to cash those winning tickets for themselves.


Speaks to the need for not just mechanism but also assurance.

March 06, 2009 in Assurance, Security | Permalink | Comments (1)

Assets Good Until Reached For

A few months back Minyanville wondered whether this subprime mess would end up as a cancer or a car crash. Guess we know the answer now. The question is - should we be at all surprised? Some smart folks have been warning for a long time. Warren Buffett famously called derivatives financial weapons of mass destruction.


Charlie Munger, as he is wont to do, went a bit further (from 2004):

I think a good litmus test of the mental and moral quality at any large institution [with significant derivatives exposure] would be to ask them, "Do you really understand your derivatives book?" Anyone who says yes is either crazy or lying.

They have many other statements in the same direction, based on their own experience buying companies that used derivatives, where they were unable to unwind the books and figure out who owed whom. At the last Berkshire Hathaway annual meeting someone asked Charlie Munger what we could learn from past blow ups about the present crisis:

It was a particularly foolish mess. We talked about an idiot in the credit delivery grocery business, Webvan. Internet based delivery service for groceries -- that was smarter than what happened in mortgage business. I wish we had those Webvan people back.

What can we learn from all this?
Well, Dan Geer launched a revolution with his famous speech about risk management. He got the big picture right about the security industry evolving toward more risk management practices; however, the examples we assumed were right at the time, namely the financial industry, are proving to be wrong. For one thing, you can't manage a risk if you don't know the assets (back to Charlie Munger, emphasis added):

It is crazy to allow things to get too big to fail, run with knavery. As an industry, there is a crazy culture of greed and overreaching and overconfidence trading algorithms. It is demented to allow derivative trading such that clearance risks are embedded in system. Assets are all “good until reached for” on balance sheets. We had $400m of that at general re, “good until reached for”. In drug business you must prove it is good. It is a crazy culture, and to some extent an evil culture. Accounting people really failed us. Accounting standards ought to be dealt with like engineering standards.


So, yes it is about risk management, but if you build too many abstractions on top of your assets through derivative accounting and such you may find you don't have any assets when you need them. Don't fall in love with your abstractions, manage your assets.

There are some clear lessons for us in Information Security, err I mean Information Risk Management.
Margin of safety
It's our job to manage risk, but this doesn't mean that we have to build layers and layers of abstraction on top of it. It also means that we help to design, build, deploy, and operate systems with margins of safety, understanding the failure modes and accounting for them in the design. Developers (because they are supposed to) and architects (because they haven't been properly trained) focus on functional requirements and building features, but not so much on security. There are many ways to improve security in a system, and all of them are inadequate by themselves, but we can help find cost effective improvements.

Don't fall in love with abstractions
If you have 100,000 desktops or 100,000 servers, they are hard to manage. You will need to automate, and to do that you need to abstract, but you should also realize that the abstraction is a drawing on a whiteboard, not reality. You need abstraction assurance.

Ian Grigg commented on an earlier post

There are distinct parallels between phishing / retail payments, and the bigger investment mess. In both cases, banks would argue these are core business. In both cases, they have applied risk-based security models, and accepted some loss. In both cases, they have the ability to apply substantial experience to the monitoring, allocating and absorbing risks and losses.


In both cases, they watched and did nothing as the risks started from low, and migrated upwards. Are we at the point where regulation has killed the ability of banks to apply their (arguable) one core skill, to wit, risk-based analysis? Are banks that far out of banking that they no longer have it?


So you have to remember that top down and bottom up need to be combined.

Design for failure
Dan Geer has also told the story that he sat in a large bank's risk management training, and the trainer said "you may wonder why this works so well. It works because there is zero ambiguity over who owns what risk." Dan's thought was, "in my field we have nothing but ambiguity." It turns out the second part was right: we have nothing but ambiguity over who owns what risk; unfortunately, the financial people had much more ambiguity than they thought! So we do have a lesson here after all, and it is this: when the thing you thought was true isn't, the failure mode is very ugly. Design for failure - add layers of protection.

Keep it simple.
There are some smart engineers at Google to be sure, but even they had incredibly basic errors in their SSO. I have seen other obvious failures, like people signing WS-Security messages where the recipient checks for a signature but not whether it trusts the signer! There are so many ways to shoot yourself in the foot in loosely coupled systems, and we have so many abstractions layered on top of each other, that part of the mantra of protecting assets has to be keeping it simple.
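To make the "checks for a signature but not the signer" failure concrete, here is a minimal sketch (in Python, using the cryptography package rather than WS-Security itself; the key names and trust-store layout are illustrative assumptions) of a verifier that does both checks:

```python
# Hypothetical sketch: verifying a signature is only half the job; the
# verifier must also decide whether it trusts the key that signed it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Trust store: SHA-256 fingerprints of public keys we explicitly trust
# (placeholder value; in practice this comes from configuration).
TRUSTED_SIGNER_FINGERPRINTS = {"9f86d081884c7d65..."}

def fingerprint(public_key) -> str:
    der = public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    digest = hashes.Hash(hashes.SHA256())
    digest.update(der)
    return digest.finalize().hex()

def accept_message(message: bytes, signature: bytes, signer_public_key) -> bool:
    # Check 1: the signature is cryptographically valid (RSA / PKCS#1 v1.5).
    try:
        signer_public_key.verify(
            signature, message, padding.PKCS1v15(), hashes.SHA256()
        )
    except InvalidSignature:
        return False
    # Check 2 -- the one that gets forgotten: do we trust this signer at all?
    return fingerprint(signer_public_key) in TRUSTED_SIGNER_FINGERPRINTS
```

The second check is the boring, non-cryptographic one, which is exactly why it gets skipped.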

So that is my list. To do all these things requires that infosec get in the game, understand the use cases, understand the business value (it should be abundantly clear that you can't simply rely on "business people" to be "business experts"), and not lose sight of the asset amidst all the abstraction. Finally, the systems we build security on are very primitive. A firewall and SSL are fine; a seatbelt was fine in 1935 and it's still fine today, but there are lots of other safety controls in cars now. ABS, airbags, traction control: they all protect the assets far better than in 1935, and that is what we need to build.

Anyone can make bad assumptions (assume you know who owns what risk), and it's easy to make bad abstractions (the firewall protects the information system), but when you combine bad assumptions with bad abstractions you'll get assets that are "good until reached for" sooner or later.

September 15, 2008 in Assurance, Investing, Risk Management, Security | Permalink | Comments (2)

New Software Security Horizons

Software security is still a very small space, but in the last few years we have started to see some real successes, with large organizations rolling out much improved software security tools, processes, and technologies. As someone who has been out doing this stuff for a long time, I am very heartened by the progress. There is one nit I have, though, which is that most of the current progress has been driven by technology vendors and financial institutions. These companies have made real progress in some areas like static analysis, and while a medium level of assurance won't defeat a determined opponent, progress is progress and there is a lot of value in raising the floor.

My issue has more to do with the particular focus that technology vendors and financial institutions have. They have real availability concerns and some integrity concerns, but they don't speak for all enterprises. Manufacturers, insurance companies, and healthcare companies have much different integrity and confidentiality concerns than financial companies, and particularly when you get to things like authorization and audit, these differences come to a head. Since the good improvements we have seen recently have been driven by the financial market, the solutions are driven by financial concerns as well. Yet other enterprises' concerns remain. Perhaps, with the stock market hammering all things financial and reducing spending there, software security vendors will begin to listen to the different technical and deployment considerations that exist in other industries.

This would be a very big win - long term - for financials as well. Financials, like most companies, generally want to cost effectively get to a medium level of assurance (side note - Fred Cohen taught me that this is the hardest problem to solve - we know what high looks like, and we know what broken looks like), but here is the thing - there is more than one way to get to medium! If healthcare companies, insurance companies, and manufacturers can drive out new, deployable medium assurance solutions that augment the good stuff that has come out of the innovation in the financial space, then we all win.

I understand the urge for vendors to dance with who brung 'em, but there is more than one way to do this.

So if you work for a software security vendor and you hear a list of requirements coming from an insurance, healthcare, manufacturing, or other enterprise prospect, don't automatically just compare them to what your financial customers (who have hard edges around processes) do. Instead, listen and try to think of how to address, deploy, and scale in a new environment. Find multiple paths to medium assurance (not just the financial one) - your customers will be happy and you will make more money.

December 06, 2007 in Assurance, Security | Permalink | Comments (0)

Security Architecture Blueprint

The security industry has little strategic cohesiveness; instead, the market is composed of vendors selling an aggregation of tactical, one-off point solutions. The problem is that security is of strategic concern to the enterprise, but the market does not reflect this. This security architecture blueprint shows one way to take a strategic approach to security in the enterprise.

[Figure: Security architecture roadmap]

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property, it can be difficult for enterprise security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations, and for organizing architecture and actions toward improving enterprise security.


This blueprint distills what I have learned on a number of enterprise security engagements into a generic framework. Enterprises know they have security problems today, and will not be perfect tomorrow, next month, or next year -- so the question is: how do you define a pragmatic way forward?

I was fortunate to get great feedback on drafts from a number of folks including James McGovern, Jim Nelson, and Brian Snow.


May 01, 2007 in Assurance, Deperimeterization, Security, Security Architecture, Software Architecture | Permalink | Comments (3)

Bands = Wired, Factors = Tired, Passwords Only = Expired

I was talking with Andre Durand yesterday, and we agreed that when you look at the security market, it is not very strategically oriented; instead, vendors have aggregated a large number of tactical solutions under one roof, with no real cohesive strategy. The problem is, of course, that security of the system is of strategic concern to the enterprise, but the marketplace simply does not reflect this. A perfect example of what happens when security groups *do* think strategically, instead of blindly adopting what the vendors are selling, comes from OWASP jefe de jefes Andrew van der Stock, who contributed this comment to an earlier post on using SMS (or another communication band) to authorize a transaction. It raises a very important point: IT security groups so often end up drilling down one particular rabbit hole that they lose sight of the big picture. Sometimes combining basic authN schemes across multiple communication bands adds more strength than increasing the strength of any single component.

The SMS is a form of transaction signing. Weak transaction signing for sure, but it's better than 2FA authC, as the MITM scenario proposes.

The attacker has to

a) conduct a DNS or other infrastructure level attack (such as install a Banking trojan) against the victim

b) take over the active session, either by navigating the user to a perfect copy of the site (hard) or just interceding with some added Ajax background music, such as a forceful CSRF to which the user is unaware

c) Convince the user that they must give up a sound credential. With a 2FA fob with no transaction signing, this is easy and many will fall victim. This is why 2FA fob only does not work and is a waste of money. If the user sees a message on their phone for no reason, saying:

"The token code to transfer $2900 to account "blah" is 437485. If you did not make this transaction, do not enter the code and call 1800 EXAMPLE immediately. This token expires in 30 seconds."

A user will hesitate to enter that unless they're already in the process of entering a transfer. If it's the wrong value, and the wrong destination, they will not enter it.

SMS 2FA is extremely cheap, and extremely effective. For customers without a mobile phone, transaction signing calculators from Vasco et al cost $20 on up to a couple of hundred for the EMV capable fancy ones. I like the intermediate EMV calculators - they force the user to put their corporate credit card into the device thus ensuring they have a mental picture as to the value of leaving the token around. All too often, I find tokens in desk drawers and such. Making the user aware of the exact value by hooking a real monetary instrument to a disconnected calculator is so close to three factor authentication: something you know, something you have, and something you are, as to not count. I doubt the attackers will work out a suitable attack path for these devices unless *we* make mistakes.

And sure enough, we will make mistakes. But the cost will be a lot less than it is today. The day of the password ended a long time ago. It's time to move on to a world where devices are *always* assumed to be trojaned, and the network is *always* assumed to be tainted (DNS pinning attacks, etc), and so on. SSL no longer cuts the mustard alone.

This illustrates very well that the heretofore goal of most IT security initiatives - ivory tower security, wherein a benign subject uses very strong "unbreakable" authN mechanisms on a "secure" desktop over a "secure" network - is not only flawed, but overly complex and expensive compared to the subtle and powerful change in approach Andrew describes. It is also reminiscent of a comment posted by Bob Blakley a while back on dealing with identity theft in a similar manner using communication bands:

I define "Identity theft" as the theft of a "breeder document" which enables ME to generate NEW identities which people attribute to you. If I learn your Social Security Number and your address (and maybe your mother's birthdate and maiden name, or some other such highly esoteric piece of information), then I can write off to EnormousGlobalBank and take out a NEW credit card in your name. And when you cancel that card (assuming you can), then I can do it AGAIN.

If you accept that this is the problem, I think there's an easier solution than one which relies on demonstrating a "proof" to a third party. It goes like this. Imagine that your Social Security card is the sole "breeder document" for accounts. Now imagine that you (and everyone else) are issued a new Social Security card - the actual physical card, not the number.

Imagine that this card has four new features. The first is an LCD window which can scroll text. The second is a "yes" button. The third is a "no" button. And the fourth is a vibrate mode.

Finally, imagine that EVERY TIME you try to open an account using your SSN, the institution trying to create the account sends out a signal. The signal causes your card to vibrate. When you take the card out of your pocket, the screen displays a message on its LCD screen saying "EnormousGlobalBank creating new VasterCard Account for you. OK?" If you believe that the account is being created because of some process you initiated, you press the "yes" button. Otherwise, you press the "no" button.

The key here is NOT authentication. It's awareness (creating the opportunity for the "real" "owner" of the "identity" to know what's being done on his behalf), and, most importantly, TIMELINESS. A big part of the identity theft problem comes from the fact that the average person checks her credit report every time she buys a house - i.e. not often enough to realize that something shady is going on and stop it before a lot of damage is done.

IT security needs to fixate less on individual mechanisms and more on the system as a whole.
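The strength of the SMS scheme Andrew describes is that the one-time code is bound to the transaction itself, not just to the user. Here is a minimal sketch of that idea (in Python; the field names, key handling, and 30-second window are illustrative assumptions, not a production design):

```python
# Hypothetical sketch of transaction signing: derive the one-time code from the
# transaction details plus a time window, so a phished code is useless for any
# other amount, destination, or time.
import hashlib
import hmac
import time

WINDOW_SECONDS = 30  # matches the "expires in 30 seconds" idea quoted above

def current_window() -> int:
    return int(time.time()) // WINDOW_SECONDS

def transaction_code(secret: bytes, amount_cents: int, dest_account: str,
                     window: int) -> str:
    payload = f"{amount_cents}|{dest_account}|{window}".encode()
    digest = hmac.new(secret, payload, hashlib.sha256).digest()
    # Truncate to a 6-digit code a user can read off an SMS.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def verify(secret: bytes, amount_cents: int, dest_account: str, code: str) -> bool:
    # Accept the current and previous window to allow for SMS delivery delay.
    now = current_window()
    return any(
        hmac.compare_digest(code, transaction_code(secret, amount_cents,
                                                   dest_account, w))
        for w in (now, now - 1)
    )
```

Change the amount or the destination account and the code no longer verifies, which is exactly the property a bare 2FA token lacks.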

**************************************************

Upcoming public SOA, Web Services, and XML Security training by Gunnar Peterson, Arctec Group
--- NYC (April 19), Unatek Web Services (May), OWASP App Sec Europe (May), Helsinki (June), DC/Baltimore (July 19).

April 06, 2007 in Assurance, Security | Permalink | Comments (3)

2006 in review

Here is a look back at this blog in 2006

January

Service Oriented Security Architecture - paper published in ISB. This paper deals with a security model for a distributed system where the assets are separated into Identity, Message, Service, Deployment Environment, and Transaction. This way the risks and countermeasures can be understood, and the elements and constraints dealt with in their own domains to the extent possible.

Learning from EAs: The EA Bill of Responsibilities - the role of a useful EA

Phasing Security into the SDLC - a comparison of approaches, top down, bottom up and so on

February

Bugs v. Flaws

April

Governance and Assurance Synthesis

Darwin Lives - Governance Models and the ability to govern

May

SOA security metrics - from my SOA, Web Services security training in Leuven

O, Perimeter Where Art Thou

June

Assertion Federation Assurance - putting strong authN on the wire

July

Intro to Identity Management Risk Metrics - published in IEEE Security & Privacy

August

Notes from MetriCon 1.0 Part 1, 2, 3

September

Laws of Identity and Web Services

October

Decentralization and Good Enough Security - Part 1, 2, 3

J Peterman's Threat Levels - watch out for the Enterman's shim sham

Whatever happened to give 'em enough eyes? - open source security lagging MSFT

Firewall this

November

Defining Misuse in the development process - published in IEEE Security & Privacy

Playing for keep across the board - thumb fingerprints or cut off your opposable thumbs -- the choice is yours.

December

A series of posts looking at REST security issues - Part 1, 2, 3

Security Concepts, Challenges, and Design Considerations for Web Services Integration - published on the DHS/CERT Build Security In portal

February 01, 2007 in Assurance, Deperimeterization, Identity, Security, Software Architecture | Permalink | Comments (0)

Integrated Transaction Puzzle

Robert Morris, Sr. gave a very interesting problem at Defcon a few years back. It describes very well why security technologies like network firewalls and SSL are insufficient by themselves, and, in my view, why we need technologies like federation, SAML, CardSpace, WS-Security, and other tools to help us build what is required to operate in an increasingly malicious system.

"This is a long term problem. If you work on it and make any progress against it, you'll find yourself much smarter at the far end, than you were at the near end.

When I was in Norway about 5 years ago, I was there very close to the summer solstice. I was wandering around town at 2 o'clock in the morning and there was plenty of light out. You come to a sign that says New Minsk about 60 km and it points south.

And I ask the lady "what country is this?"

She scratched her head for a bit, and said "well I think its Norway"

I said "well who plows the roads?"

"well Norway does, but he have to pay them."

There is a triple boundary in this town that I was in between Norway, Finland and Russia.

But what I did there, was, I had a card about wallet size, I stuck it into a machine, I punched in four digits, and it gave me about 2,000 krone, whatever the hell that is.

Now there are a lot of participants in that transaction. When I put a card into that machine, punch in a pin, and it gurgles for awhile, and finally gives me, a fairly large amount of money. There are a lot of participants in that transaction. The bank that owned the machine that gave me the money, it gave some money away -- that bank wants it back. The pin is necessary to convince my own bank that I'm me. But I don't want my pin to be broadcast all over the world. My bank in the us, it hasn't really given out or taken in any money, really. But there is a lot of credits involved here. Somebody needs to charge somebody else for having more money available. Even though there was actually no cash transfer.

And the problem that I have in mind is

- who are all the participants in an ATM transaction?

- what do those participants need to satisfy their problems?

- how is that in fact done?

In a general way, does the atm system actually work in some reasonable sense? To which the answer is by the way: yes. The atm system damn well works. With extremely high reliability and accuracy. It surprises me. It's quite a bit different than voting machines.

One part of the issue is that the ATM network is more of a puzzle, while the web is a mystery. However, the real gap with firewalls and SSL is that they are all or nothing propositions. With network firewalls you are supposedly "inside" the firewall, in the DMZ, or "outside" the firewall. Yet we know that gajillions of transactions and attacks plow right through the firewall on a regular basis. With SSL, we are able to create a confidential channel, but once that channel terminates, say at your load balancer or firewall, you have no way to guarantee any integrity or confidentiality for the transaction beyond the termination point. And you have no way to provide a security model that satisfies multiple parties in the context of a given transaction; this is what I find so interesting about what the WS-Trust standard provides. WS-Trust assumes that different token types (SAML, X.509, Kerberos) are desired at different endpoints, and that what we really need is a way to move them around in some standard way. It is partly an integration problem, and partly an interoperability problem. Call it SOA, call it REST, call it Web 2.0, call it eCommerce, call it whatever: this is a problem all technologies that engage in valuable transactions with multiple players need to solve. More thoughts on how are here.
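To illustrate the difference between transport-level and message-level protection, here is a minimal sketch (in Python; the envelope format and shared-key approach are illustrative assumptions, not WS-Security or WS-Trust) of integrity that travels with the message instead of ending where the SSL connection ends:

```python
# Hypothetical sketch: the integrity check rides inside the message, so it can
# be verified at every hop after TLS/SSL terminates at a load balancer or gateway.
import base64
import hashlib
import hmac
import json

def wrap(shared_key: bytes, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(shared_key, body, hashlib.sha256).digest()
    return {"body": base64.b64encode(body).decode(),
            "sig": base64.b64encode(tag).decode()}

def unwrap(shared_key: bytes, envelope: dict) -> dict:
    body = base64.b64decode(envelope["body"])
    tag = base64.b64decode(envelope["sig"])
    expected = hmac.new(shared_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message integrity check failed")
    return json.loads(body)

# Any intermediary can forward the envelope, but only holders of the key can
# alter it undetected -- unlike SSL, where protection ends at the termination point.
```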

January 09, 2007 in Assurance, Deperimeterization, Identity, Risk Management, Security, Security Architecture, SOA, Software Architecture | Permalink | Comments (2)

O, Web 2.0 App Security, where art thou?

How about Web 2.1?...and this time, can we upgrade the security along with the rest of the stack? Where *do* I download the security patches for Web 2.0?

So instead of revving Web 1.0 while leaving the "security" model with the same old, same old network firewalls and SSL, how about we rev the security model too?

Tim O'Reilly points to a comparison of the evolution from Web 1.0 to Web 2.0, and it is pretty cool...but where is security?

[Figure: Web 1.0 to Web 2.0 comparison]

My take?

Security as someone else's job --> Build Security In

Writing your own identity/app layer --> Using a standards based framework

Assuming a benign environment --> Designing/building with assurance in mind

Outside the firewall/DMZ/inside the firewall pattern --> Deperimeterization

I could go on.

What's cool about SAML-based federation is that you get browser-based SSO (it just works with browsers, so Web 2.0 should love that), and you get SSO that spans companies and technologies. Oh, and the credentials are protected.
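For a feel of how lightweight the browser side of this is, here is a minimal sketch of the SAML HTTP-Redirect binding (in Python; the endpoint URLs and the stripped-down AuthnRequest XML are illustrative assumptions): the service provider deflates and base64-encodes a request and simply redirects the browser to the identity provider.

```python
# Hypothetical sketch of building a SAML AuthnRequest redirect URL.
import base64
import datetime
import urllib.parse
import uuid
import zlib

IDP_SSO_URL = "https://idp.example.com/sso"        # hypothetical IdP endpoint
SP_ACS_URL = "https://sp.example.com/saml/acs"     # hypothetical SP endpoint

def authn_request_redirect() -> str:
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    request_xml = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{SP_ACS_URL}"/>'
    )
    # Redirect binding: raw DEFLATE (strip zlib header/trailer), base64, URL-encode.
    deflated = zlib.compress(request_xml.encode())[2:-4]
    encoded = base64.b64encode(deflated).decode()
    return IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": encoded})

# The browser follows the returned URL; the IdP authenticates the user and posts
# a signed assertion back to the SP's assertion consumer service.
```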

Why does this matter? Well, attackers have evolved a lot since 1.0, so rolling out the same old 1997 security model doesn't cut it any more.

Brian Krebs "Cyber Crime Hits the Big Time in 2006: Experts Say 2007 Will Be Even More Treacherous"

Call it the "year of computing dangerously."

Computer security experts say 2006 saw an unprecedented spike in junk e-mail and sophisticated online attacks from increasingly organized cyber crooks. These attacks were made possible, in part, by a huge increase in the number of security holes identified in widely used software products.

Few Internet security watchers believe 2007 will be any brighter for the millions of fraud-weary consumers already struggling to stay abreast of new computer security threats and avoiding clever scams when banking, shopping or just surfing online.

One of the best measures of the rise in cyber crime this year is spam. More than 90 percent of all e-mail sent online in October was unsolicited junk mail messages, according to Postini, a San Carlos, Calif.-based e-mail security firm. The volume of spam shot up 60 percent in the past two months alone as spammers began embedding their messages in images to evade junk e-mail filters that search for particular words and phrases.

As a result, network administrators are not only having to deal with considerably more junk mail, but the image-laden messages also require roughly three times more storage space and Internet bandwidth for companies to process than text-based e-mail, said Daniel Druker, Postini's vice president of marketing.

"We're getting an unprecedented amount of calls from people whose e-mail systems are melting down under this onslaught," Druker said.

Spam volumes are often viewed as a barometer for the relative security of the Internet community at large, in part because most spam is relayed via "bots," a term used to describe home computers that online criminals have compromised surreptitiously with a computer virus or worm. The more compromised computers that the bad guys control and link together in networks, or "botnets," the greater volume of spam they can blast onto the Internet.

At any given time, there are between three and four million bots active on the Internet, according to Gadi Evron, a botnet expert who managed Internet security for the Israeli government before joining Beyond Security, an Israeli firm that consults with companies on security. And that estimate only counts spam bots. Evron said there are millions of other bots that are typically used to launch "distributed denial-of-service" attacks -- online shakedowns wherein attackers overwhelm Web sites with useless data if the targets refuse to pay protection money.

"Botnets have become the moving force behind organized crime online, with a low-risk, high-profit calculation," Evron said. He estimated that organized criminals would earn about $2 billion this year through phishing scams, which involve the use of spam and fake Web sites to trick computer users into disclosing financial and other personal data. Criminals also seed bots with programs that can record and steal usernames and passwords from compromised computers.

When 90% of email traffic is spam, you cannot assume that the environment is benign. What will it look like 5 years from now?

These past 12 months brought a steep increase in the number of software security vulnerabilities discovered by researchers and actively exploited by criminals. The world's largest software maker, Microsoft Corp., this year issued software updates to fix 97 security holes that the company assigned its most dire "critical" label, meaning hackers could use them to break into vulnerable machines without any action on the part of the user.

In contrast, Microsoft shipped just 37 critical updates in 2005. Fourteen of this year's critical flaws were known as "zero day" threats, meaning Microsoft first learned about the security holes only after criminals had already begun using them for financial gain.

This year began with a zero-day hole in Microsoft's Internet Explorer, the browser of choice for roughly 80 percent of the world's online population. Criminals were able to exploit the flaw to install keystroke-recording and password-stealing software on millions of computers running Windows software.

At least 11 of those zero-day vulnerabilities were in Microsoft's Office productivity software suites, flaws that bad guys mainly used in targeted attacks against corporations, according to the SANS Internet Storm Center, a security research and training group in Bethesda, Md. This year, Microsoft issued patches to correct a total of 37 critical Office security flaws (that number excludes three unpatched vulnerabilities in Microsoft Word, two of which Microsoft has acknowledged that criminals are actively exploiting).

But 2006 also was notable for attacks on flaws in software applications designed to run on top of operating systems, such as media players, Web browsers, and word processing and spreadsheet programs. In early February, attackers used a security hole in AOL's popular Winamp media player to install spyware when users downloaded a seemingly harmless playlist file. In December, a computer worm took advantage of a design flaw in Apple's QuickTime media player to steal passwords from roughly 100,000 MySpace.com bloggers, accounts that were then hijacked and used for sending spam. Also this month, security experts spotted a computer worm spreading online that was powered by a six-month old security hole in a corporate anti-virus product from Symantec Corp.

What is scary about the Microsoft numbers is the implication for the industry as a whole: while Microsoft still has widely publicized security issues (and an insanely large code base), it has made huge strides in secure coding. Where does this leave competitors that have yet to adopt secure coding practices (read: virtually every other large software company)? That Microsoft still faces security challenges even as it upgrades its SDLC to incorporate security concerns is not the takeaway. Microsoft is like the USMC facing the best the insurgents can throw at it; as the two coevolve, what happens when those same insurgents throw their attacks at the Norwegian or Brazilian armed forces (read: every other major software company, or your Web 2.0 app for that matter, as the Norwegians/Brazilians)?

So it seems clear to me that we have Attacker 2.0 and we have Web 2.0, but Web 2.0's security model relies on Attackers being stuck on Attacker 1.0.

January 08, 2007 in Assurance, Deperimeterization, Federation, OWASP, Security, Software Architecture, Web 2.0 | Permalink | Comments (0)

Vexable Identity Vetting

Andy Jaquith has a great post on the trials and travails of dealing with identity on the interweb:

Last week's shutoff of this website's self-registration system was something I did with deep misgivings. I've always been a fan of keeping the Web as open as possible. I cannot stand soul-sucking, personally invasive registration processes like the New York Times website. However, my experience with a particularly persistent Italian vandal was instructive, and it got me thinking about the relationship between accountability and identity.

Some background. When you self-register on securitymetrics.org you supply a desired "wiki name", a full name, a desired login id, and optionally a e-mail address for password resets. We require the identifying information to associate specific activities (page edits, attachment uploads, login/log out events) with particular members. We do not verify the information, and we trust that the user is telling truth. Our Italian vandal decided to abuse our trust in him by attaching pr0n links to the front page. Cute.

The software we use here on securitymetrics.org has decent audit logs. It was a simple matter of identifying the offending user account. I know which user put the porn spam on the website, and I know when he did it. I also know when he logged in, what IP address he came from, and what he claimed his "real" name was. But although I've got a decent amount of forensic information available to me, what I don't have any idea of whether the person who did it supplied real information when he registered.

And therein lies the paradox. I don't want to keep personal information about members — but at the same time, I want to have some degree of assurance that people who choose to become members are real people who have serious intent. But there's no way to get any level of assurance about intent. After all, as the New Yorker cartoon aptly put it, on the Internet no-one knows if you're a dog. Or just a jackass.

During the holiday break, I did a bit of thinking and exploration about how to address (note I do not say "solve") issues of identity and accountability, in the context of website registration. Because I am a co-author of the software we use to run this website (JSPWiki), I have a fair amount of freedom in coming up with possible enhancements.

One obvious way to address self-registration identity issue is to introduce a vetting system into the registration process. That is, when someone registers, it triggers a little workflow that requires me to do some investigation on the person. I already do this for the mailing list, so it would be a logical extension to do it for the wiki, too. This would solve the identity issue — successfully vetting someone would enable the administrator to have much higher confidence in their identity claims, albeit with some sacrifice

There's just one problem with this — I hate vetting people. It takes time to do, and I am always far, far behind.

Right, there is another issue at work here: vetting is a conversational process. As I learned from Ross Mayfield, conversational networks are not scale free.

And further, we know from Kim Cameron's work on the identity metasystem:

The root of these problems is that the Internet was designed without a system of digital identity in mind.

Andy goes on to look at several ways to skin this cat:


A second approach is to not do anything special for registration, but moderate page changes. This, too, requires workflow. On the JSPWiki developer mailing lists, we've been discussing this option quite a bit, in combination with blacklists and anti-spam heuristics. This would help solve the accountability problem.

A third approach would be to accept third-party identities that you have a reasonable level of assurance in. Classic PKI (digital certificates) are a good example of third-party identities that you can inspect and choose to trust or not. But client-side digital certificates have deployment shortcomings. Very few people use them.

A promising alternative to client-side certificates is the new breed of digital identity architectures, many of which do not require a huge, monolithic corporate infrastructure to issue. I'm thinking mostly of OpenID and Microsoft's CardSpace specs. I really like what Kim Cameron has done with CardSpace; it takes a lot of the things that I like about Apple's Keychain (self-management, portability, simple user metaphors, ease-of-use) and applies it specifically to the issue of identity. CardSpace's InfoCards (I have always felt they should be called IdentityCards) are kind of like credit cards in your wallet. When you want to express a claim about your identity, you pick a card (any card!) and present it to the person who's asking.

What's nice about InfoCards is that, in theory, these are things you can create for yourself at a registrar (identity provider) of your choice. InfoCards also have good privacy controls — if you don't want a relying party (e.g., securitymetrics.org) to see your e-mail identity attribute, you don't have to release that information.

So, InfoCards have promise. But they use the WS-* XML standards for communication (think: big, hairy, complicated), and they require a client-side supplicant that allows users to navigate their InfoCards and present them when asked. It's nice to see that there's a Firefox InfoCard client, but there isn't one for Safari, and older versions of Windows are still left out in the cold. CardSpace will make its mark in time, but it is still early, methinks.

OpenID holds more promise for me. There are loads more implementations available (and several choices for Java libraries), and the mechanism that identity providers use to communicate with relying parties is simple and comprehensible by humans. It doesn't require special software because it relies on HTTP redirects to work. And best of all, the thing the identity is based on is something "my kind of people" all have: a website URL. Identity, essentially, boils down to an assertion of ownership over a URL. I like this because it's something I can verify easily. And by visiting your website, I can usually tell whether the person who owns that URL is my kind of people.

Now to the interesting technical questions:

Recall that the point of all of this fooling around is to figure out a way to balance privacy and authenticity. By "privacy", I mean that I do not want to ask users to disgorge too much personal information to me when they register. And correspondingly, I do not want the custodial obligation of having to store and safeguard any information they give me. ...

By "authenticity" I mean having reasonable assurance that the person on my website is not just who they say they are, but that I can also get some idea about their intentions (or what they might have been). OpenID meets both of these criteria... if I want to know something more about the person registering or posting on my website, I can just go and check 'em out by visiting their URL.


The larger issue at work in Andy's quest is one I predict is going to dominate the software security space for the next few years. CBAC (claims based access control) systems - or ABAC (assertion based access control) if you prefer - require reusable design patterns to make exactly the kind of decisions Andy discusses, and if we want to be anywhere close to scale free, we had better be able to make them really, really fast, and dynamically as well. It's one thing to say we can federate and exchange SAML; it is another to identify the logical architecture for what assertions you accept, from whom, and when. There are a bazillion things to be done in this space.
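To make the pattern concrete, here is a minimal sketch (in Python; the issuer names, claim types, and access rule are illustrative assumptions) of the kind of reusable CBAC/ABAC decision described above: first filter claims by which issuers you trust for which claim types, then evaluate the access rule only over what survives.

```python
# Hypothetical sketch of a claims-based access decision for something like
# wiki registration or page editing.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    issuer: str   # who asserted it (e.g. an IdP or OpenID URL)
    type: str     # what kind of claim it is
    value: str

# Policy: which issuers are trusted to assert which claim types.
TRUSTED_ISSUERS = {
    "email_verified": {"https://idp.example.com"},
    "member_level": {"https://idp.example.com", "https://partner.example.org"},
}

def accepted(claims: list[Claim]) -> dict[str, str]:
    # Keep only claims whose issuer is trusted for that claim type.
    return {c.type: c.value for c in claims
            if c.issuer in TRUSTED_ISSUERS.get(c.type, set())}

def may_edit_pages(claims: list[Claim]) -> bool:
    facts = accepted(claims)
    return facts.get("email_verified") == "true" and "member_level" in facts
```

The point is not this particular rule; it is that the "which assertions, from whom, when" decision is factored into one reusable place instead of being reinvented per application.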

Of course, these types of things were rarely dealt with in predecessor security architectures; as RESTafarians are quick to point out, look how far we got with SSL. And with SSL we don't have to deal too much with the above (at least until someone puts pr0n on a perfectly good securitymetrics site). Andy uses this analogy for SSL:

And with respect to the difference between transport-level security and message-level security, the analogy I use is to compare SSL to a concrete sewer pipe. You may not be able to break into it, but you sure as hell have no idea what's flowing through it.

The good thing about the sewer pipe model is that we just hooked it up and it was "secure"; the CBAC/ABAC world means we have +/- 10 bazillion choices to make about the claims/assertions we are gonna deal with. Frankly, there are too many choices for the average enterprise developer. We need design patterns for this stuff (think identity MVC). And we pretty much need them now.

January 06, 2007 in Assurance, Security, Software Architecture | Permalink | Comments (0)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015

