In my opinion, one of the most harmful notions in infosec is "trust." How many times have you heard someone say: well, we trust this server, we trust this group, we trust this zone, we trust this partner, and on and on? People use "trust" over and over again, but there are several problems.
First, Bob Blakley reminds us that "trust is for suckers." Why would anyone charged with security architecture willingly build naivete into their system?
As Kim Cameron mentions, your lawyer does not use the word "trust"; instead, when you write a contract it is built around roles and responsibilities, liability, and actions.
Kim pointed out that infosec is in the midst of a "continuing deterioration of privacy and multi-party security due to short-sighted and unsustainable practices within our industry" that "has begun to have the inevitable result." That infosec itself is a primary contributing factor will come as a surprise to many in the industry, but ill-defined notions like "trust" are a leading reason why.
One reason why "trust" is so problematic for infosec: how do you write a design requirement, operational plan, or test criteria about "trusting" something?
People use the word trust to mean anything from access control to integrity to certification to liability and many, many other security architecture concerns.
When you hear anyone say the word "trust", you must drill down and get a precise definition of what they mean by it, and identify the specific verification, controls, monitoring, processes, and other mechanisms that seek to deliver that "trust."
And then simply exorcise the word trust from your vocabulary, replacing it with precise definitions.
We have big-name gurus like Bruce Schneier saying that we must "trust" our Cloud providers. This is misleading at best and utter madness at worst. What did Bruce mean when he said we must "trust" our Cloud providers? Who knows? And how in the world would we even figure this out? We must trust what, exactly? Taking the Cloud example, the way forward for infosec has nothing to do with a sloppy notion like trust; it's precision and integration. As to the Cloud, it's Don't Trust. And Verify. That paper describes four security architecture elements for the Cloud - Gateway, Monitoring, PEP/PDP, and STS. They each do very specific things, and the sum of the parts gives you concrete improvements, including among other things:
- Gateways manage attack surface
- Monitoring builds visibility into the system
- PEP/PDP gives fine grained, dynamic authorization capabilities
- STS exchanges security tokens
There is not a whiff of squishy trust in the above list, but there are four pieces of security architecture, each with a unique value proposition, that you can plan, budget, draft requirements for, design, test, and deploy.
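To make the PEP/PDP bullet concrete, here is a minimal, default-deny sketch of a policy decision point and enforcement point. All names and the rule format are my own illustration, not from the paper:

```python
# Illustrative PEP/PDP sketch (names and rule format are hypothetical).
# The PDP evaluates each request against explicit, named permit rules;
# the PEP sits in the request path and enforces the PDP's decision.
# Nothing is "trusted": every request is checked, and no rule means Deny.
from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    subject: str   # authenticated identity, e.g. from an STS-issued token
    action: str    # e.g. "read", "write"
    resource: str  # e.g. "orders/123"


class PolicyDecisionPoint:
    def __init__(self, permit_rules):
        # permit_rules: list of (subject, action, resource_prefix) tuples
        self.permit_rules = permit_rules

    def decide(self, req):
        for subject, action, prefix in self.permit_rules:
            if (req.subject == subject and req.action == action
                    and req.resource.startswith(prefix)):
                return "Permit"
        return "Deny"  # default-deny: anything unmatched is refused


class PolicyEnforcementPoint:
    def __init__(self, pdp):
        self.pdp = pdp

    def handle(self, req, service):
        # Enforce the PDP's decision before the request reaches the service
        if self.pdp.decide(req) != "Permit":
            raise PermissionError(f"Deny: {req}")
        return service(req)
```

A requirement against something like this is testable in a way "we trust the service" never is: given rule R, request Q must be permitted; absent a matching rule, it must be denied.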
What does "trusting" a system/app/service mean? I presume it means taking some assumed security controls away, but which ones? Does it mean you don't monitor, don't apply access control, don't encrypt or verify data? What does it mean, and how do you design/build/test it?
Obviously, Bruce Schneier is a smart and articulate guy, and I am sure he has some specific things in mind when he says we should trust the Cloud, but as with others who use that word, I have no idea what those things are.
As Mark Twain put it, the difference between the right word and the almost right word is the difference between lightning and a lightning bug.
I'd recommend doing a find or grep on any/all of your security architecture docs to find the word "trust" and replace it with a precise definition of what you mean and don't mean.
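A minimal sketch of that audit, in Python rather than raw grep (the word pattern and file layout are up to you):

```python
# Sketch: flag every occurrence of "trust" (and variants like "trusted")
# in a document so each one can be replaced with a precise mechanism:
# access control, integrity check, liability clause, and so on.
# Roughly equivalent to: grep -rni trust docs/
import re

TRUST = re.compile(r"\btrust\w*", re.IGNORECASE)


def find_trust(text):
    """Return (line_number, line) pairs where a 'trust' word appears."""
    return [(n, line.strip())
            for n, line in enumerate(text.splitlines(), start=1)
            if TRUST.search(line)]
```

Run it over each architecture doc (e.g. via `pathlib.Path("docs").rglob("*")`) and work the resulting hit list down to zero, replacing each hit with the specific control you actually mean.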
Gunner,
The problem you identify with "trust" is endemic in InfoSec. So much so that "InfoSec outlook" might become the new slang for "wooly thinking".
At the heart of the problem is "metrics" or more correctly the almost complete absence of metrics that can be used as part of "the scientific method".
Posted by: Clive Robinson | December 18, 2010 at 09:56 PM
@Clive,
Who said anything about science? I would settle for humble engineering. Engineers don't "trust" steel; they understand its capabilities and constraints and design accordingly.
Posted by: Gunnar | December 18, 2010 at 10:05 PM
Are you saying you have the capability/capacity to fully "certify" each of those components of your architecture, including the ones you buy from a third-party? Are they liable if the product has a bug that compromises your security?
If not, then you're trusting your security gateway providers, those who wrote your STS, etc. Implementation quality matters, and you can't simply architect it away. You probably can't adequately or meaningfully test for it, and you almost certainly can't hold any of your software/hardware producers liable.
So, you're trusting them right?
Posted by: Andy Steingruebl | December 20, 2010 at 11:57 AM
@Andy - it's a question of what kind and how much naivete. Naivete by omission or naivete by commission. You can transfer liability (not the same as trust). How much do you pay or get paid for that naivete?
Blithe assumptions are the enemy of all design and security, and trust-y thinking is among the worst offenders.
Posted by: Gunnar | December 20, 2010 at 12:23 PM
That still doesn't answer the question, I'm afraid. You go with a certain security gateway, or PDP, and it was written by someone. If you can't audit it, or don't have a reliable audit (almost certainly the case), then you're trusting that they did it right, since holding them contractually liable for any negatives is essentially impossible. Go ahead and sue IBM/Datapower if (when?) their security gateway has a flaw. Try to buy that device with a contract that has "hard" security guarantees and liability. You can't.
That leaves you trusting that the vendor has sold you a quality product.
Posted by: Andy Steingruebl | December 20, 2010 at 12:45 PM
@Andy
Bond investors demand a higher rate of return on a bond issued by Spain than on a bond issued by Germany - no trust involved:
Greek bonds pay 12%
Spanish bonds pay 5.5%
German bonds pay 2.9%
Trust has no basis when the goal is safety. The bond market is saying - I don't think Greece is paying me back, so they pay a (much) higher rate.
Any transfer or acceptance of liability should be made in the context of cost. Perhaps the cost of buying from IBM is so cheap (versus what it would take to build from scratch) that it is worth taking on the liability. Again, it's not trust; it's cost and liability.
Note, you can certainly build a gateway from scratch. That's one reason why we have seen open source be so successful.
Posted by: Gunnar | December 20, 2010 at 01:14 PM
@ Gunner,
"Who said anything about science? I would settle for humble engineering, engineers don't "trust" steel they understand its capabilities and constraints and design accordingly"
And for that "understanding" the engineers need reliable methods of measurement, or metrics.
There is the old saw about the difference between scientists and engineers,
A scientist looks for a problem to investigate and characterize; an engineer looks not just to investigate and characterize a problem, but to solve it in as workmanlike a fashion as possible.
Engineering without science is the artisanal behaviour of the wheelwright, using patterns passed down from father to son.
It is interesting to see the code cutters have at least progressed to "patterns".
I'm not sure I can say the same for most ICT security practitioners; they appear to still be artists applying themselves to cave walls and shaking the witch doctor's stick at problems.
And for whatever reason they don't appear to want to go out into the light of day; why, I don't know.
But religion went from cave paintings and shaking sticks to burning heretics at the stake, and some of those heretics were what we would now call doctors and scientists.
Let's hope that that fate does not await you or me for "breaking the faith".
Posted by: Clive Robinson | December 21, 2010 at 10:16 AM
That was your best rant of the year. Well said!
-Adrian
Posted by: Adrian Lane | December 21, 2010 at 09:00 PM
Trust has to be replaced with liabilities and actions - this is true even when trust is needed within an enterprise product that does not cross a domain. In that case too, customers trust the enterprise and provide their personal data, and the enterprise asserts its liabilities, taking responsibility for the customers' data.
In cloud computing, trust does not involve just two parties; it involves all the cloud collaborators. When a service from domain X invokes a service from another domain Y and passes data, it is inevitable that domain X trusts domain Y to carry out ONLY the intended operation with its data, and to use the data for no other purpose. As the data crosses the domain, we lose control over its execution flow, and the only way a collaboration can work in this environment is mutual trust between the parties involved. Domain X can provide assurance of data security in its own domain, plus assurance of security beyond the domain based only on trust.
Posted by: Kanchanna | December 28, 2010 at 09:27 AM
This post and comment thread is quite surprising. I thought everyone knew the standard Infosec definition of trust: a trusted system is one that can hurt you. Even Wikipedia gets this right.
Schneier appears to mean this in exactly the same way: we must trust our cloud providers. That is, the only way to benefit from their services is to let them choose whether to hurt us.
Is this definition surprising?
Posted by: Brian Sniffen | December 29, 2010 at 11:03 PM