"Never, ever, think about something else when you should be thinking about the power of incentives" - Charlie Munger
In 1980, Wal-Mart implemented a shrink incentive program: if a store held shrinkage (theft) below a certain level, the difference was reflected in the associates' pay. They reported that their shrinkage level after implementing the program was half their competitors'. Sam Walton also mentioned that the associates felt better about each other, because no one enjoys stealing - even those who would do it given a chance.
I've often said that no one wants to write insecure code, and I wonder if something similar would work in infosec. Could a company set aside a fixed amount each year against an "average" breach cost and then, if no breach occurs, credit it back as a bonus to the tech staff - developers and sys admins?
Think of it as a digital version of the "days since last workplace injury" sign. My guess is that incentives along those lines would work far better than the majority of products on the RSA trade show floor, and at a fraction of the cost.
I think the single biggest change is soft. It's very hard to do as a security vet, but close your eyes and imagine walking up to a group of developers or sys admins and having them glad to see you. That is night and day from most places today. I think security people would enjoy being welcomed for their skills instead of shunned and avoided - to put it bluntly, being welcomed as someone who helps people get their bonuses rather than someone who ensures they don't.
There are some practical issues to getting this incentive realignment to work. Like - how do you measure what "good" means in security?
To really try to do this at scale in an organization, and to do it with real money and bonuses on the line, I would go with what Ken Thompson says: "when in doubt, use brute force." Simple is usually better, so rather than going into the weeds figuring out each incident or each bug and assigning individual responsibility, the measure could be something like "headline risk".
Yesterday was the 26th anniversary of the Exxon Valdez oil spill. In infosec we have plenty of recent examples - Target, Home Depot, Sony et al. - of company-changing events. So the incentive could be based on some kind of measure that says "we do not end up on the front page of the newspaper in a breach story" means everyone gets a bonus payout roughly equal to what we would pay in response costs on a rolling two-year basis. This should tend to focus the mind and inspire people. Fired up? Ready to go? Now let's go install some patches!
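The payout math described above is simple enough to sketch in a few lines. This is a hypothetical illustration only - the function name, the dollar figures and the two-year window are all assumptions chosen to mirror the "rolling two years of response cost" idea, not a real compensation formula:

```python
# Hypothetical sketch of the headline-risk bonus pool described above.
# All names and figures here are illustrative assumptions, not real data.

def bonus_pool(response_costs_by_year, current_year, had_headline_breach):
    """Pool roughly equal to estimated breach-response cost over a
    rolling two-year window; pays out only if no headline breach occurred.

    response_costs_by_year: estimated response cost per year,
        e.g. {2013: 400_000, 2014: 600_000}
    """
    window = [current_year - 1, current_year]
    rolling_cost = sum(response_costs_by_year.get(y, 0) for y in window)
    # One front-page breach zeroes the pool for everyone - the blunt,
    # collective-responsibility instrument the text describes.
    return 0 if had_headline_breach else rolling_cost

# Example: no front-page breach in the window, so the full pool pays out.
pool = bonus_pool({2013: 400_000, 2014: 600_000}, 2014,
                  had_headline_breach=False)
print(pool)  # 1000000
```

The point of keeping it this crude is exactly the Thompson "brute force" argument: one shared, easily understood number, rather than per-incident attribution.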
Of course, headline risk could be seen as too blunt an instrument, and you may suffer for others' performance, but security is a collective responsibility. If you wanted to divvy it up into pay for bugs found, then there is a perverse incentive issue: creating the supply, also known as the Texas rattlesnake problem.
" So, to demonstrate the perverse economic incentives underpinning American health care, [Munger] recalled a story about rattlesnakes.
In a small Texas town, the local government had an idea to combat the growing snake problem. They offered a bounty for every dead rattler brought into city hall. The next thing they knew, everyone in the town was raising rattlesnakes.
This well-intentioned incentive didn’t solve the rattlesnake problem, but it does well to demonstrate the perverse consequences created by health care’s fee-for-service model.
As an example, Munger talked about one surgeon who was known for removing normal gallbladders. Having been caught doing what most surgeons would describe as inappropriate surgery, what was his response?
The doctor said taking out a normal gallbladder was a reasonable way to “prevent disease.” He said he was helping his patients avoid the dangers of a possible rupture. However, this almost never happens in people with normal gallbladders.
Of course, this isn’t the only instance of a doctor abusing the system. In cities like Miami with a surplus of doctors and facilities, we see twice the number of tests, procedures and hospitalizations than in other communities. Whenever a group is paid to do more, not better, the outcome isn’t hard to predict."
That could be an issue, for sure, for any pay-for-bugs type of scheme (Chris Walsh calls this the minivan problem). So defaulting back to big-ticket events and headline risk makes sense.
It's not perfect, of course, but it has the advantage of focusing attention on issues of strategic impact and putting security people, developers, sys admins and others on the same side of the table. To me, this is long overdue and a powerful organizational tool. It's never a process or tech problem; it's always a people problem.
There is a temptation to say this is not fair - you could be penalized for someone else's mistakes. But bonuses are often based in large part on big events and outcomes, much of which is out of any one individual's control, and they still work. People are rewarded in large ways based on how well the company stock does. This has proved a very powerful motivator, and yet it is not in the hands of any one individual to create the outcome in the stock price. Still it works to focus attention and organize efforts.
Some might argue that incentives are silly - these are professional developers; what we need is regulation. We have used regulations, for example PCI or company security policies, for a long time in infosec. They are not worthless, but they are not optimal either. At the very least they are only one tool in the toolbox, and we should look at others.
Security people's main role is to be a barrier between an organization and stupid. So the real question is - what kind of barrier is most effective? Regulations create the hostile, tactical and divided environments in which security people operate today. Bonuses, I have noticed, have a way of getting people's attention, and a way of getting people to work together.
Ross Anderson was the first to point out the role of incentives in infosec, making them at least as important as security policies, mechanisms, and assurance.
It's not as if we have not tried regulations - would it not be worth trying incentives too?
What I think the outcome here would look like is simpler coordination between the security team and dev/ops teams. On any engagement I easily spend 30-50% of my time on James Baker-style shuttle diplomacy, trying to convince devs and ops folks that security is not deliberately setting out to destroy their timeline, bonus and career. Take that portion out, and any security time and dollars spent go toward solving actual security problems, not Security/Dev/Ops Glasnost.
One measure of success I use in AppSec is to identify the point at which the number and severity of security bugs reported by the Dev team exceeds those reported by the Security team. When you hit that point - and few have - that is about as good as it gets in terms of teams collaborating effectively. So anything that can happen on an incentive basis that quickly moves your organization toward that point, without introducing perverse incentives (minivans and rattlesnakes), seems like a major improvement.
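The crossover measure above can be sketched as a small calculation. This is a hypothetical sketch, assuming an illustrative severity weighting and made-up bug counts - the source doesn't specify how severity should be weighted:

```python
# Hypothetical sketch of the AppSec "crossover" measure described above:
# the point where Dev-reported security bugs outweigh Security-reported
# ones in number and severity. Weights and bug records are illustrative
# assumptions, not a standard.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 9}

def weighted_score(bugs):
    """Sum severity-weighted counts over (severity, count) pairs."""
    return sum(SEVERITY_WEIGHT[sev] * n for sev, n in bugs)

def crossover_reached(dev_bugs, security_bugs):
    """True once Dev-reported findings outweigh Security-reported ones."""
    return weighted_score(dev_bugs) > weighted_score(security_bugs)

# Example quarter: Dev reports more, and more severe, issues than Security.
dev = [("high", 2), ("medium", 5), ("low", 10)]   # score 43
sec = [("high", 1), ("medium", 4), ("low", 3)]    # score 24
print(crossover_reached(dev, sec))  # True
```

Tracking that boolean per quarter gives a simple, hard-to-game trend line: the incentive rewards the Dev team for finding its own bugs rather than for manufacturing them, which sidesteps the rattlesnake problem.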