Some recent items as a backdrop for Dan Geer's recent ACM piece.
Item 1: Who owns eCommerce? Tim Bray on IBM versus Amazon
If the titles mean anything (not always a sure bet), this might mean that IBM has finally managed to figure out how to set up that Internet Tollbooth that we’ve always been afraid of. If you’re interested in “Presenting Applications in an Interactive Service”, “Storing Data in an Interactive Network”, “Presenting Advertising in an Interactive Service”, “Adjusting Hypertext Links with Weighted User Goals and Activities”, or “Ordering Items Using an Electronic Catalogue”, apparently IBM thinks you need to pay them for the right to do any of those things. If the courts agree with them, it’s time for me to find a new line of work.
Item 2: Who owns sales information?
Deal site BlackFriday.info yesterday removed the Best Buy "Black Friday" sales price list after the big box retailer threatened to deliver a DMCA takedown notice to Black Friday's ISP. In a brief posting, Black Friday said, "While we believe that sale prices are facts and not copyrightable, we do not want to risk having this website shut down due to a DMCA take down notice."
As my friend Jim points out, since First Data aggregates personal data, can we just launch a DMCA claim against personal data aggregators?
Dan Geer's essay in the recent ACM starts off:
How we ultimately decide to handle the security problem will have a lasting effect on computing as we know it.
He goes on to provide various metrics about where the advances in storage, networking, and processing power are leading us, and opines that complexity makes the security problem ever more daunting. I pretty much buy into this thinking, with the one caveat that if we security geeks are smart enough, we can use the cheaper computing power, networking, and storage to our advantage too. For example, the way security tokens are represented in XML in SOAP messages would never have been feasible in TCP/IP or the other protocols that preceded SOAP, because they all relied on fixed offsets for speed. It is fun to bash XML as bloated, but Peter Weill pointed out:
Peter Weill at MIT, “Don’t lead, Govern”
XML is like cardboard. It is a very useful packing material because it is malleable and self-describing. Cardboard is itself a large worldwide industry because it is so useful. Often the cardboard used in packaging is heavier than the actual contents of the package. So we should not really be complaining about the “angle brackets” because that’s really not the point.
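To make the fixed-offset point concrete, here is a minimal sketch of a SOAP envelope carrying a WS-Security-style username token, built with nothing but Python's standard library (no WS-* toolkit; the user, password, and order payload are made up). The token is verbose cardboard, but it is self-describing: a receiver finds it by element name, not by byte position.

```python
# A minimal sketch, not any particular WS-* toolkit: a SOAP envelope with a
# self-describing security token in the header, built with Python's stdlib.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
security = ET.SubElement(header, f"{{{WSSE}}}Security")

# The "cardboard": a username token whose structure explains itself.
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "alice"   # hypothetical user
ET.SubElement(token, f"{{{WSSE}}}Password").text = "secret"  # demo only; real use needs a digest

body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
ET.SubElement(body, "PlaceOrder").text = "sku-12345"  # hypothetical payload

print(ET.tostring(envelope, encoding="unicode"))
```

Doing the same thing in a fixed-offset wire format would have meant carving out a field for every possible token shape up front; cheap cycles and bandwidth are what make the packing material affordable.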
So anyhow, the cheaper power sources let developers write more features, but security can also leverage cheaper power sources. Deep Blue beat Kasparov a few years ago; if it were still maintained, we could expect Deep Blue to have doubled its computing power once or twice since then. Do we think that humans have gotten twice as good at chess since then?
Anyway, security is always a follower, lagging behind technology, but I think we as an industry have not even really begun to try to catch up, and we should not cede all the computing power to vendors who want to write the next Microsoft Bob or Clippy app.
Dan Geer on preemption:
Preemption requires intelligence, intelligence requires surveillance, and surveillance requires mechanisms for doing so that do not depend on the volition or the sentience of those under surveillance. The public is demanding protection that it feels unable to accomplish itself. The public, at least in the United States, is inured to the idea that all bad outcomes are the fault of someone to whom liability can be assigned. The public is wrong in principle (take care of yourself) but right in practice (assign risk to those most capable of thwarting it). These forces conspire to put the duty of surveillance on the upstream end, to assign the duty (and the liability) of protection to those with the most resources regardless of whether it is fair.
Surveillance is an excellent example of the vastly under-utilized fast, cheap, and out of control security design pattern. Of course surveillance has an interesting impact on privacy:
Closer to the point, everyone who carries a cellphone in the U.S. is under surveillance by the imposed requirement of instant location for emergency (911) services. So, what do we want as a unit of surveillance? Remembering the power of fusing data from one surveillance system with another, think carefully before you answer. Think about whether you want surveillance at all. Think about what you are willing to trade for safety or, frankly, that obliviousness that seems to be synonymous with a feeling of safety. My own family is quite naturally tired of me and this issue, but someone who should know better says, “Privacy doesn't matter to me. I live a good life. I have nothing to hide.”
This is not meant to be a diatribe about privacy, however much the word surveillance may imply that it is. It is a question of what the future of computing is. Here is the question that I am really asking: If, by some miracle, my friends and neighbors decide that the safety they want is not available without a level of surveillance they can't knowingly accept, what then?
Geer points out where complexity is taking us:
Sure, my colleague Hobbit knows netcat because he wrote it, but who can explain its usefulness to that man in the street? Andreessen probably understood Mosaic at some point, Gutmann knows cryptlib, and Dingledine knows tor, but how much of Windows does Cutler still grok? Ditto Linux and Torvalds? And why am I saying this?
The snowballing complexity our software industry calls progress is generating ever-more subtle flaws. It cannot do otherwise. It will not do otherwise. This is physics, not human failure. Sure, per unit volume of code, it may be getting better. But the amount of code you can run per unit of time or for X dollars is growing at geometric rates. Therefore, for constant risk the goodness of our software has to be growing at geometric rates just to stay even. The Red Queen was right: You have to run faster and faster to stay in the same place.
...
We digerati have given the world fast, free, open transmission to anyone from anyone, and we've handed them a general-purpose device with so many layers of complexity that there is no one who understands it all. Because “you're on your own” won't fly politically, something has to change. Since you don't have to block transmission in order to surveil it, and since general-purpose capabilities in computers are lost on the vast majority of those who use them, the beneficiaries of protection will likely consider surveillance and appliances to be an improvement over risk and complexity. From where they sit, this is true and normal.
While the readers of Queue may well appreciate that driving is much more real with a centrifugal advance and a stick shift, try and sell that to the mass market. The general-purpose computer must die or we must put everything under surveillance. Either option is ugly, but “all of the above” would be lights-out for people like me, people like you, people like us. We're playing for keeps now.
"Accept surveillance or turn your computer into a PS2" does not sound like much of a choice. Where would the innovation come from? Historically, a lot of innovation came out of general-purpose computing environments. Bottom-up innovation on a PS2? Not so much. As interesting as this Hobson's choice is, does it really have to be all or nothing: loss of functionality versus loss of autonomy/privacy? I seem to recall that we can solve any problem by adding a layer of indirection. Technically, we can virtualize environments, putting identity information on smart cards, in cardspaces, or on cellphones, which on some level creates a way to separate out a virtualized/controlled environment while preserving some autonomy and privacy on the user's behalf. Technically, it is not a problem; of course, Dan Geer's work points out the business and social factors that may keep it from happening.
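As a toy illustration of that layer of indirection (the IdentityBroker and Claim names below are hypothetical, not any real smart card or CardSpace API), a broker can answer a relying party with a narrow, pseudonymous derived claim rather than handing over the raw identity record:

```python
# Toy sketch of identity indirection: the relying party gets a derived claim,
# not the underlying record. All names here are made up for illustration.
from dataclasses import dataclass
from datetime import date
import secrets


@dataclass
class Claim:
    subject_token: str   # opaque, per-relying-party pseudonym
    statement: str       # the only fact released, e.g. "age >= 18: True"


class IdentityBroker:
    """Stands in for a smart card, a CardSpace-style selector, or a phone wallet."""

    def __init__(self):
        # The raw record never leaves the controlled environment.
        self._people = {"alice": {"birth_date": date(1980, 5, 1), "address": "..."}}

    def assert_over(self, user: str, years: int) -> Claim:
        born = self._people[user]["birth_date"]
        age = (date.today() - born).days // 365
        # Release only a yes/no derived fact under a fresh pseudonym; the
        # address, birth date, and even the user's name stay behind the broker.
        return Claim(subject_token=secrets.token_hex(8),
                     statement=f"age >= {years}: {age >= years}")


broker = IdentityBroker()
print(broker.assert_over("alice", 18))
```

The controlled environment keeps the data; the only thing that crosses the boundary is the fact the transaction actually needs, which is the autonomy/privacy half of the bargain.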
And if you have not already done so, go here and read the Report on the Surveillance Society which I'll blog/annotate another day. The intro:
"We live in a surveillance society. It is pointless to talk about surveillance society in the future tense. In all the rich countries of the world everyday life is suffused with surveillance encounters, not merely from dawn to dusk but 24/7. Some encounters obtrude into the routine, like when we get a ticket for running a red light when no one was around but the camera. But the majority are now just part of the fabric of daily life. Unremarkable."
Cut off our opposable thumbs or store a copy of our thumbprint in some giant database somewhere. Nice choice.
The surveillance society doesn't want your thumbs - it wants to identify and track you to its own ends, which may not be society's desire or yours.
Don't bother cutting off your thumbs - they'd just as readily use another digit or some other identifying mark (palm prints are just as unique).
The only thing that works in our favor is that:
a) they have too much data to sift through
b) not enough imagination to ask the right questions of the data they already have
c) massive inertia and empires prevent new and better ideas from taking hold (a stake through RFID's heart, anyone?)
d) inability to get things done in a timely fashion
e) lack of communications between bureaucracies
A prime example of this is Dr Jayant Patel, who was employed as a doctor without any form of checks and continued to operate despite active investigations into the deaths of his patients. Despite all this, he left Australia on a commercial flight, using a first-class ticket paid for by Queensland Health. He made it through to the USA, and is there to this day.
http://en.wikipedia.org/wiki/Jayant_Patel
Both Australia and the USA have extremely close intelligence links, both have the same overly paranoid and highly automated immigration mechanisms, and yet... he managed to avoid it all.
There is hope (and fear) yet.
Andrew
Posted by: Andrew van der Stock | November 15, 2006 at 09:07 PM