A couple of observations to open with. “We systems people regarded the users, and any code they wrote, as the mortal enemies of us and of each other. We were like the police force in a violent slum.” That's from Roger Needham, talking about the early days of infosec. So if you think about it, not that much has changed.
That in itself is a problem. According to John Foster Dulles: “The measure of success is not whether you have a tough problem to deal with, but whether it is the same problem you had last year.”
Infosec has many problems - it's a complex business - but perhaps the biggest problem is that not much has changed in the last few decades.
Let's consider the OWASP Top Ten. Chris Hoff asks, "why doesn't the OWASP Top Ten ever change?" It's true that the OWASP Top Ten is largely the same as it was at its inception over a decade ago. However, that's not due to laziness on the OWASP team's part. Quite the opposite, in fact: you'd be hard pressed to find a more dedicated, hardworking bunch than the core OWASP folks.
The reason the OWASP Top Ten remains static is that we flat out have not made much progress in the past decade. Every year there is a big OWASP AppSec conference, and the organizers try to figure out where to hold it - should it be in NYC or on the West Coast? What about Austin, that's a cool city? My vote is always for holding it in Punxsutawney, PA, because security is Groundhog Day. Same issues, different year.
If you really boil it down, what we are trying to accomplish in security is to act as a barrier between devs, business folks, ops, and others and keep them from doing something stupid.
And you might say, well yeah, but we are doing this or that with cloud and mobile. My response to that is - yes, same thing - we are the barrier between devs/business/ops and doing something stupid with cloud and mobile.
So then the question is - how can we serve our companies as the most effective stupid barrier that we can possibly be? I found the answer to that, as well as an explanation of why the infosec industry has underperformed for so long, in a book by Nobel Prize winner Daniel Kahneman called "Thinking, Fast and Slow."
The book explores the brain's two different operating modes, which Kahneman named System 1 and System 2.
- System 1: fast, intuitive, frequent, emotional, subconscious. Example: 2 + 2 = ?
- System 2: slow, effortful, infrequent, logical, calculating, conscious. Example: 18 x 23 = ?
Once you notice these two modes, you can feel your brain operating in one mode or the other. Look at a picture of Paris and many dozens of things come to mind: travel, culture, a baguette, the movie Ratatouille, and plenty more. You do not even have to think about it, it just happens.
Then there is System 2, best characterized today, April 15, as slogging through your taxes.
System 1 excels at answering questions like - what's the capital of France? System 1 is:
- Associative memory, you do not have to decide, it just happens. Like seeing that someone’s hair is dark
- You are conscious of the results and your impressions, not how it happens
Contrast that with System 2:
- Laborious, effortful. Compute things - what is 24 x 17?
- Supervisor function: it monitors System 1 and can override it. Best case - it acts as the error correction function, recognizing when you are prone to error
Here is one big reason why you should care about this distinction. In a nutshell - System 1 usually wins!
When people are offered life insurance in two forms:
1. Life insurance that pays your heirs if you die, for any reason, or...
2. Life insurance that pays your heirs in the event you die in an act of terrorism
People are more likely to choose the second plan.
Takeaway - simply making rational arguments isn't good enough to influence better decisions.
Now take those two systems, System 1 and System 2, and look at how they play out in a security design discussion. Here is what we get:
The work that comes out of Dev (System 1) ends up being pretty informal and Agile. It gets sent over to Infosec (System 2), which in logical fashion looks at the downsides and inevitably rejects some portion or all of the plan. Now the important question here is - is that an effective stupid barrier? Is simply telling developers that this is too risky effective? By itself, I would say no, not sufficient.
Because the System 1 work probably still goes live. If you peel it back, you can look at the whole Dev-Infosec interaction through the System 1-System 2 lens, with the developers making requests in System 1 and the infosec team responding in System 2:
And part of the issue here is that security solutions - data classification, threat models, and the like - are almost always creatures of System 2. These are all good ideas on some level, but none of them are simple and fast. And simply slamming these System 2 ideas onto a System 1 consumer is a recipe for failure.
When you say "I wish the devs did data classification or threat models", what you are really saying is- "I wish software was written with System 2." Whether it should be or shouldn't be is an irrelevant question, its not written in System 2, its written in System 1. Your security plan has to factor this in otherwise you are causing confusion and delay.
Fixes must be addressed in a systematic manner too. You can't slam down a playbook of System 2 ideas and expect it all to work in a big bang. Aim for a balance.
The reality is that System 2 infosec has to create System 1 interfaces and activities to drive value.
The work, then, is not just to identify security logic in System 2 - that is step one. The work becomes making System 2 ideas drop-dead, drag-and-drop simple to consume in the System 1 world in which the vast majority of the enterprise exists. Find the linkages and integration points.
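As one illustration, here is a minimal sketch of what that kind of System 1 interface might look like. The helper name, the policy baked into it, and the key handling are assumptions for the example, not a prescribed implementation: the security team works out the encryption and audit requirements once, in System 2, and the developer gets a single call that requires no security thought at the call site.

```python
# Hypothetical example: security logic decided once in System 2,
# exposed to developers as a single System 1 call.
import json
import logging
from cryptography.fernet import Fernet  # real library; key handling simplified here

log = logging.getLogger("audit")
_key = Fernet.generate_key()   # in practice, fetched from a managed key service
_cipher = Fernet(_key)

def store_sensitive(record: dict, destination: list) -> None:
    """One call for developers: encryption and audit logging happen inside,
    per policy the security team owns."""
    payload = json.dumps(record).encode()
    destination.append(_cipher.encrypt(payload))            # System 2 decision, invisible here
    log.info("sensitive record stored, fields=%s", sorted(record))  # audit trail

# System 1 usage - no security decisions required at the call site:
db: list = []
store_sensitive({"name": "Alice", "ssn": "***"}, db)
```

The point is not the particular crypto; it is that the System 2 decisions are invisible at the point of use.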
A comment by David Shaw says it well:
"It certainly helps to focus thinking about the payback (or lack thereof) we see in projects like some GRC 'automation' implementations (System 2 response paying lip service to System 1 interactions).
I also wonder whether a marker for robust, maturing infosec posture is that any measure introduced that delivers a System 1 outcome should have a System 2 corresponding capability performing analysis on effectiveness, efficiency, relevance."
And so the role of infosec is not to live in a System 2 ivory tower. Yes, the security logic must make sense. But to be effective in practice, the security logic must be translated into something - process, tools, automation - that is simple to consume for devs and others working in System 1.
Based on observations over the years, it feels like there is a Constitutional law that says you must include a Sun Tzu quote in any infosec work. Now, I am not going to revisit the ones you have heard many times before; instead, here is one that I think is just as relevant as "know your enemy." This quote from Sun Tzu encapsulates our grand challenge in infosec as it relates to improving and normalizing System 1 and System 2 relations:
Applying that to our infosec world means that we must only deliver System 2 requirements that are backed by readily consumable System 1 capabilities:
So here are some rules of engagement:
- Have a balanced approach - both systems matter
- You must always plan for System 1
- Challenges
  - How to get a System 2 concept to System 1 "no thought required"
  - How to integrate
  - How to automate
- No more than a few System 2s at a time
- Ownership
  - Let other departments own System 1
  - Never dump System 2 on someone else
You can define hardening profiles and scan for vulnerabilities all day long in System 2, but it won't matter as much as embedding those scans, a la gauntlt, into an Agile process, because that's where developers and code live.
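To make that concrete, here is a minimal sketch of the kind of build hook this implies. It is a generic illustration, not gauntlt's actual attack-file format (gauntlt attacks are written in Gherkin), and the scanner command and threshold are placeholders:

```python
# Illustrative CI hook: the System 2 policy (no high-severity findings)
# rides along with the System 1 workflow (every build), so developers
# never have to remember to "do security" as a separate step.
# The "security-scanner" command is a placeholder, not a real tool.
import subprocess
import sys

MAX_HIGH_FINDINGS = 0  # policy decided once, by infosec, in System 2

def count_high_findings(target: str) -> int:
    """Run the placeholder scanner against the build artifact and return
    the number of high-severity findings it reports on stdout."""
    result = subprocess.run(
        ["security-scanner", "--target", target, "--severity", "high", "--count"],
        capture_output=True, text=True, check=False,
    )
    return int(result.stdout.strip() or 0)

def main() -> None:
    findings = count_high_findings("build/artifact")
    if findings > MAX_HIGH_FINDINGS:
        # The build fails the same way a broken unit test fails -
        # no System 2 thinking required on the developer's end.
        sys.exit(f"{findings} high-severity findings - failing the build")

if __name__ == "__main__":
    main()
```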
Let's take another concrete example, which I call Alice in Maliceland. First we have Access.
Now the normal way that infosec seeks to mediate communications between Alice and Bob is access control - authentication and authorization. However, this leaves aside the issue of Malice.
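In code, the Alice-and-Bob view reduces to something like the sketch below (the names, credential store, and policy table are assumptions for illustration): verify who the caller is, then check what they are allowed to do. Notice that nothing in it considers how the interface might be abused - it only ever answers allow or deny.

```python
# Minimal access control sketch: authentication plus authorization,
# with a hypothetical credential store and policy table.
CREDENTIALS = {"alice": "correct horse battery staple"}        # placeholder secrets
POLICY = {("alice", "read:report"), ("bob", "write:report")}   # allowed (user, action) pairs

def authenticate(user: str, password: str) -> bool:
    # Placeholder check; a real system verifies a salted hash or a token.
    return CREDENTIALS.get(user) == password

def authorize(user: str, action: str) -> bool:
    return (user, action) in POLICY

def handle_request(user: str, password: str, action: str) -> bool:
    # Allow or deny - and that is all this design ever says.
    return authenticate(user, password) and authorize(user, action)
```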
So access control by itself, while necessary, won't cut it here. One reason is the system's security architecture itself.
All the way back in 2004, Rich Mogull highlighted the chewy center of network security. John Foster Dulles would be disheartened to know that this is still a problem today.
Back to our System 1 and System 2: think about that compared to the DMZ architecture. What is the DMZ but a teeny, tiny island outpost of System 2 with its own special rules, while System 1 reigns supreme absolutely everywhere else?
Beyond the enterprise, that design has real consequences for users.
So what is a better and more effective approach that we can take in the real world? It's up to infosec and security architects to navigate the balance between Designing for Access and Designing for Malice:
Defining authentication and authorization services helps determine who has access to what, but it does little to deal with active malice. The access control model is the starting point for malicious users, not the end game. Dealing with malice on some level assumes you will not win every battle; that's a healthy outlook for defenders. Tokenization is one of the major improvements in recent years. What is tokenization other than saying, yes, we will lose some data/transactions, but we will limit what we lose? This is a fundamentally different approach to security from classic access control, which seeks to contain everything in a policy of allowable/non-allowable.
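To see the difference, here is a minimal sketch of the tokenization idea - not any particular vendor's implementation, and the vault class is a stand-in for what would really be a hardened, audited service: sensitive values live only inside a narrow vault boundary, and everything else handles opaque tokens.

```python
# Sketch of tokenization: the real value stays inside the vault,
# downstream systems only ever see and store random tokens.
import secrets

class TokenVault:
    """Hypothetical in-memory vault; a real one is a separate, locked-down service."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, so it reveals nothing if stolen.
        token = secrets.token_urlsafe(16)
        self._store[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        # Only the few systems that truly need the real value call this.
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # e.g. a card number
# Downstream System 1 processes store and pass around `token`;
# a breach there loses tokens, not card numbers.
```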
Where access control is concerned with Alice and Bob, designing for malice focuses on threats from Eve and Mallory:
Here is a classic access control scheme - a user asking an intelligent guard for a resource:
In the case above, the guard is intelligent and deft enough to fend off the wily requester. Unfortunately, not all guards are this intelligent, and a determined attacker could find a better place or guard to attack. A determined opponent will use brute force, target selection, fuzzing, and different combinations and permutations to bypass the guard.
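Here is a sketch, with assumed names and thresholds, of what a guard designed for malice might add on top of the plain allow/deny check from earlier: it treats repeated denials as a signal, recognizes probing, and makes a friend-or-foe decision instead of politely answering forever.

```python
# Sketch of a malice-aware guard: the classic authorization check plus
# tracking of failed attempts, so brute-force probing gets recognized.
from collections import defaultdict

MAX_FAILURES = 5  # hypothetical threshold before we treat a caller as hostile

class Guard:
    def __init__(self, acl: dict[str, set[str]]) -> None:
        self._acl = acl                       # user -> resources they may access
        self._failures = defaultdict(int)     # user -> denied-request count
        self._blocked: set[str] = set()

    def request(self, user: str, resource: str) -> bool:
        if user in self._blocked:
            return False                      # friend-or-foe decision already made
        if resource in self._acl.get(user, set()):
            return True                       # classic access control path
        # Designing for malice: a denial is also a signal worth counting.
        self._failures[user] += 1
        if self._failures[user] >= MAX_FAILURES:
            self._blocked.add(user)           # looks like probing, not a typo
            # a real guard would also raise an alert for monitoring here
        return False

guard = Guard({"alice": {"report.pdf"}})
print(guard.request("alice", "report.pdf"))    # True
for _ in range(5):
    guard.request("mallory", "report.pdf")     # repeated probing
print(guard.request("mallory", "report.pdf"))  # False - Mallory is now blocked
```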
The net result here is that we need security architecture that finds the balance between access and malice: guards that offer more intelligence in protecting resources, can recognize brute force and more clever attacks, have a way to discern friend from foe, and monitor the system. Finding that balance is very much a System 2 job for infosec; however, to create value for our enterprises, we need to always ensure that there are targeted System 1 outputs that can be readily integrated into real-world systems.
The mantra for getting from System 2 to System 1 is best expressed by Antoine de Saint-Exupery: "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." A security architect knows she has achieved perfection not when there is no System 2 capability left to add, but when there is nothing left to take away from System 2 and embed within System 1.