Learning about security means understanding types of risk, and investors, specifically value investors, have a long, demonstrated track record of framing ways to think about asset protection and making it actionable. Recently I've been reading James Montier, and I'm very impressed with his approach, which is based on a fairly rigid process and objective checklists. Here is an excerpt from a recent interview:
In the infosec world the "seductive details" are threats, Hollywood movie plots and the like that infosec people use to get funding. Meanwhile the real structural problems of the core assets (apps, data, identity) remain unaddressed whilst the security world chases the taillights of the latest threat du jour.

Miguel: Let’s talk about the concept of seductive details…can you give us an example of how investors are trapped by irrelevant information?
James Montier: The sheer amount of irrelevant information faced by investors is truly staggering. Today we find ourselves captives of the information age; anything you could possibly need to know seems to appear at the touch of a keypad. However, rarely, if ever, do we stop and ask ourselves exactly what we need to know in order to make a good decision.
Seductive details are the kind of information that seems important, but really isn’t. Let me give you an example. Today investors are surrounded by analysts who are experts in their fields. I once worked with an IT analyst who could take a PC apart in front of you and tell you what every little bit did. Fascinating stuff to be sure, but did it help make better investment decisions? Clearly not. Did the analyst know anything at all about valuing a company or a stock? I’m afraid not. Yet he was immensely popular because he provided seductive details.
The airbag that's guaranteed to work unless you get in an accident is of course the network firewall, which "protects" your system until someone tries to attack it.

Miguel: Interestingly, you connect the dangers of irrelevant information to modern risk management. I’m referring to your comments on Value at Risk – “VAR is fundamentally flawed; after all, it cuts off the very bit of the distribution that we are interested in: the tails! This is akin to buying a car with an airbag that is guaranteed to work unless you have a crash…” Can you tell us more?
James Montier: Sure. Modern risk management is a farce; it is pseudoscience of the worst kind. The idea that the risk of an investment, or indeed a portfolio of investments, can be reduced to a single number is utter madness. In essence, the problem with risk management is that it assumes that volatility equals risk. Nothing could be further from the truth. Volatility creates opportunity. For instance, was the stock market more risky in 2007 or 2009? According to the views of risk managers, 2007 was the less risky year: it had low volatility, which they happily fed into their risk models and concluded (falsely) that the world was a safe place to take risk. In contrast, these very same risk managers were saying that the world was exceptionally risky in 2009, and that one should be cutting back on risk. This is, of course, the complete opposite of what one should have been doing. In 2007, the evidence of a housing/credit bubble was plain to see; this suggested risk, valuations were high, and it was time to scale back exposure. In 2009, bargains abounded; this was the perfect time to take ‘risk’ on, not to run away. Risk managers are the sorts of fellows that lend out umbrellas on fine days, and ask for them back when it starts to rain.
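Montier's point about VAR cutting off the tails can be made concrete in a few lines. The sketch below is my own illustration (synthetic returns, a simple historical-simulation VaR, nothing from the interview itself): two return series end up with roughly the same 95% VaR number, yet one of them hides far larger losses in the tail, which is exactly the part the single number never shows you.

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical "calm" daily return series...
calm = rng.normal(0.0005, 0.01, 1000)

# ...and a copy where a handful of days suffer severe losses (a fat tail).
fat_tailed = calm.copy()
fat_tailed[rng.choice(1000, 10, replace=False)] = rng.normal(-0.15, 0.03, 10)

def historical_var(returns, confidence=0.95):
    """95% VaR: the loss threshold exceeded on only 5% of days."""
    return -np.percentile(returns, (1 - confidence) * 100)

def expected_shortfall(returns, confidence=0.95):
    """Average loss on the days beyond the cutoff -- the tail that VaR ignores."""
    cutoff = np.percentile(returns, (1 - confidence) * 100)
    return -returns[returns <= cutoff].mean()

for name, r in [("calm", calm), ("fat-tailed", fat_tailed)]:
    print(f"{name:>10}: VaR = {historical_var(r):.3f}   ES = {expected_shortfall(r):.3f}")
```

The VaR figures come out close to each other; the expected shortfall of the fat-tailed series is several times larger. Reduce risk to the single VaR number and the airbag looks fine right up until the crash.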
For some strange reason, infosec is far removed from business realities. The layers of seductive details about threats create an artificial separation between what infosec spends its money and time on and what the business invests its money and time in.
By itself this misalignment would be a medium-sized problem (there's plenty of wasteful IT spending, and infosec's penchant for overallocating on network toys is just another example). But what makes it a much larger problem is the lack of efficacy of protection and detection mechanisms that only work so long as they are not attacked. The reason for this is that security architecture is removed from the core business assets (apps, data, identity) it is supposed to be protecting. Instead, protection and detection mechanisms must form-fit to that which they are supposed to be protecting. The reason airbags work is that they are very close to the asset they are protecting.
Hi Gunnar, I question your assertion about the importance of proximity of security mechanisms to the asset being protected. This may be true in some cases, although even in your example I'm not convinced that the airbag is the most important protection I've got as a driver. And for another example, the ozone layer is much better protection against the sun's rays than any amount of suncream.
Posted by: Richard Veryard | March 09, 2010 at 10:26 AM
@Richard -
Ozone is a good example: it mediates the sun's rays. With network firewall security, ports are simply opened or closed and addresses are translated; there is zero visibility into the apps, data and identity.
In, say, an XML Security gateway, the connection is terminated, the content (the data and identity bound for the app) is validated against policy, and only then is it forwarded along. This latter example is akin to what the ozone does (for now ;-P )
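To sketch the difference in code (my own illustration with made-up element names and policy values, not any particular product's API): the firewall-style check sees only network facts like the destination port, while the gateway-style check terminates the connection, parses the actual message, and validates the data and identity against policy before anything reaches the app.

```python
import xml.etree.ElementTree as ET

ALLOWED_PORTS = {443}                    # firewall policy: network facts only
TRUSTED_ISSUERS = {"idp.example.com"}    # gateway policy: identity + content

def firewall_check(dst_port: int) -> bool:
    """Port open or closed -- no visibility into the payload at all."""
    return dst_port in ALLOWED_PORTS

def gateway_check(raw_xml: bytes) -> bool:
    """Terminate, parse, and validate the content and identity bound for the app."""
    try:
        doc = ET.fromstring(raw_xml)
    except ET.ParseError:
        return False                      # malformed content never reaches the app
    issuer = doc.findtext("identity/issuer")
    amount = doc.findtext("order/amount")
    if issuer not in TRUSTED_ISSUERS:
        return False                      # untrusted identity assertion
    if amount is None or not amount.isdigit() or int(amount) > 10_000:
        return False                      # data outside policy
    return True                           # only now forward along

msg = (b"<request><identity><issuer>idp.example.com</issuer></identity>"
       b"<order><amount>250</amount></order></request>")
print(firewall_check(443), gateway_check(msg))
```

The firewall says yes to anything on port 443; the gateway is form-fitted to the message the app actually consumes.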
Posted by: gunnar | March 09, 2010 at 10:42 AM