"Only peril can bring the French together. One can't impose unity out of the blue on a country that has 265 different kinds of cheese."
- Charles de Gaulle
Adam picked up on the meme yesterday, in the discussion of opportunities for improving the Internet, about whether more authentication is the solution. I don't see how embedding authentication into Internet specs like HTTP can solve much of anything. Authentication is a guess, and I don't see how baking haphazard guesses into specs is a solution. Vint Cerf should let himself off the hook for not solving this in 1978.
Here is a list of the authentication contexts supported in SAML:
- IP
- IP Password
- Kerberos
- Mobile One Factor Unregistered
- Mobile Two Factor Registered
- Mobile One Factor Contract
- Mobile Two Factor Contract
- Password
- Password Protected Transport
- Previous Session
- Public Key X.509
- Public Key PGP
- Public Key SPKI
- Public Key XML Digital Signature
- Smartcard
- Smartcard PKI
- Software PKI
- Telephony
- Telephony Nomadic
- Telephony Personalized
- Telephony Authenticated
- Secure Remote Password
- SSL/TLS Client Authentication
- Time Sync Token
- Unspecified
Holy protocol hodge-podge, Batman! I have no idea how many of these were available in 1978, but not many. The only ones remotely up to the task came along much later. The sheer number of authentication protocols supported in SAML should tell you that we are nowhere near solving this problem. It's way too context-specific, for one thing. It's got no place in a spec except as metadata (the way SAML handled it).
So there's no way it could have been solved in 1978, and there's really no way now. Authentication is still just a guess. Right now, some kid at IIT or some place is cooking up the next great authN protocol.
Now, identity and authorization can and should be handled. Let's take XACML. Basically it gives you a way to associate and enforce rules and policy based on the combination of Subjects (which you can find at runtime) with Resources (which you should know at deploy time) with certain Actions. A stub for this type of access control - a puzzle of verifiable Subjects, known Resources, and Actions assembled at runtime - would go a long way toward solving many of the problems we have today, Madame TOCTTOU aside.
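To make the three-part shape concrete, here is a minimal sketch of that kind of access control in Python. The policy table, function names, and attributes are all illustrative inventions, not the actual XACML policy language or any real API - just the Subject/Resource/Action puzzle assembled at runtime:

```python
# Minimal subject/resource/action access check, in the spirit of XACML.
# All names here are illustrative, not part of any real XACML implementation.

# Rules are known at deploy time: (subject attribute, resource, action) -> allow
POLICIES = [
    {"role": "admin",  "resource": "/reports", "action": "read"},
    {"role": "admin",  "resource": "/reports", "action": "write"},
    {"role": "viewer", "resource": "/reports", "action": "read"},
]

def is_permitted(subject, resource, action):
    """Decision-point-style check: the Subject arrives at runtime,
    while the Resources and Actions were enumerated at deploy time."""
    return any(
        p["role"] == subject.get("role")
        and p["resource"] == resource
        and p["action"] == action
        for p in POLICIES
    )

# Runtime: the subject is whatever authN handed us -- still a guess --
# but the authZ rule itself is stable.
alice = {"name": "alice", "role": "viewer"}
print(is_permitted(alice, "/reports", "read"))   # True
print(is_permitted(alice, "/reports", "write"))  # False
```

Note the separation: swapping in a different authentication scheme only changes how `subject` gets built, never the rules themselves.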
wait, what? You can't engineer truth and security is probabilistic? Why didn't someone tell me about this sooner?
Posted by: Alex | September 10, 2009 at 04:23 PM
no, the issue is that a primary principle in programming is to separate the stuff that changes frequently from the stuff that doesn't.
look at that list of authN protocols, it's crazy, and that's just the names, *how* they all work is even more variegated.
whereas three-part rules for authZ - associating Subjects, Resources, and Actions - haven't changed since Martin Fowler was in short pants.
the Web lacks both, and it's much more doable to get authZ in.
Posted by: gunnar | September 10, 2009 at 04:42 PM
My point was that A1 will always be a witty guess to some degree, and as such A2 will/should inherit the probabilistic nature of A1.
In a sense, we already engineer that now. The probability of A1 for Alice is "high" given a range of factors, so we can grant Alice a high (full) degree of A2. Alternately, if A1 does not meet "High" conditions for Alice, we can give partial A2 (this happens to me using VPN with my employer, actually).
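One way to sketch that inheritance in code - the mechanism names, scores, and thresholds below are purely illustrative, not any real product's scale:

```python
# Illustrative: the authZ grant inherits the probabilistic strength of authN.

AUTHN_STRENGTH = {           # hypothetical confidence scores per mechanism
    "smartcard_pki": 0.9,
    "password_tls": 0.6,
    "ip_only": 0.2,
}

def grant_level(mechanism):
    """Map authN confidence to a degree of authZ (full / partial / none)."""
    score = AUTHN_STRENGTH.get(mechanism, 0.0)
    if score >= 0.8:
        return "full"      # A1 is "high", so grant full A2
    if score >= 0.5:
        return "partial"   # e.g. VPN access to only a subset of resources
    return "none"

print(grant_level("smartcard_pki"))  # full
print(grant_level("password_tls"))   # partial
```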
Posted by: Alex | September 10, 2009 at 06:13 PM
"we're already able to engineer that now."
bleh.
Posted by: Alex | September 10, 2009 at 06:32 PM
I think the point missed here is that the protocols themselves don't have any ability to even send authentication data. And their basic operational principles don't include basic things like preventing spoofing, etc. Almost all of the items you focus on above are about end-user authentication. They aren't necessarily about network-level authentication. IPv6, for example, makes spoofing messages from off-subnet impossible, just by the nature of how the protocol works. This is, in my opinion, a step forward.
Posted by: Andy Steingruebl | September 10, 2009 at 06:40 PM
you're right there's a dependency, but the question i am trying to address is where do you deal with these concerns in a stack. software architecture has a particular focus on separation of concerns and for my money i want my guess logic separate from puzzle logic.
and again you have HTTP, the cockroach of protocols, which can serve as a transport for authN protocols (like Kerberos tickets) but by itself is not sufficient.
all i really want is web protocols that identify PEP/PDP with associated subject tokens, to force a policy check before accessing the resource; then we can swap the authN protocols du jour in and out. because, hey - choice is good, especially in cheese
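roughly the shape i mean, as a toy sketch - every name here is made up for illustration, not a real PEP/PDP API:

```python
# Illustrative PEP/PDP split: the enforcement point never knows *how* the
# subject was authenticated; it just hands a token to a verifier and asks
# the decision point before the resource is touched.

def verify_token(token):
    """Stand-in for the authN protocol du jour (Kerberos, SAML, ...).
    Swap this function out without touching the PEP or PDP."""
    return token.get("subject") if token.get("valid") else None

def pdp_decide(subject, resource, action):
    """Policy Decision Point: stable three-part authZ rules."""
    allowed = {("alice", "/cheese", "GET")}
    return (subject, resource, action) in allowed

def pep_handle(token, resource, action):
    """Policy Enforcement Point: force a policy check before access."""
    subject = verify_token(token)
    if subject is None:
        return "401 Unauthorized"
    if not pdp_decide(subject, resource, action):
        return "403 Forbidden"
    return "200 OK: " + resource

print(pep_handle({"valid": True, "subject": "alice"}, "/cheese", "GET"))
print(pep_handle({"valid": True, "subject": "bob"}, "/cheese", "GET"))
```

the guess logic lives entirely in `verify_token`; the puzzle logic lives in `pdp_decide`. that's the separation of concerns.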
Posted by: gunnar | September 10, 2009 at 06:42 PM
Me, I'd like certain checks to happen way below that layer. If I have an intranet app, or maybe an app on my home network, it sure is nice to not allow anyone from the Internet to even have a crack at exploiting it. Hence firewalls, which I know you hate. They are, however, a nice way to reduce attack surface. Problem is, when you don't have any certainty about the veracity of the underlying protocols, you're also screwed.
Posted by: Andy Steingruebl | September 10, 2009 at 07:13 PM