Lots of companies talk about adding security into their SDLC; Microsoft, for example, recently found a security bug and is working to address it going forward in its SDLC. Feeding security issues forward is a great idea, but there are problems to be solved and decisions to be made about actually deploying security into the SDLC. The major SDLCs, like XP and RUP, have no concrete notion of security; it is treated as just one more non-functional requirement. So it can be hard to get traction on adding security into your SDLC. One of the benefits of being a consultant is that pragmatism is beaten into you. As great as it would be for organizations to adopt security throughout their entire SDLC all at once, many companies require a phased approach. There are several contributing factors: organizations can typically only absorb so much change at once, the domain is not broadly understood, and the developer-to-security-personnel ratio is skewed heavily in favor of the former.
BuildSecurityIn and my own work on security describe how to deploy security techniques into your SDLC from an end-to-end standpoint; that is the end game. In many cases, though, a phased approach is needed to get there. Let's take a sample SDLC that includes Analysis (Requirements, Use Cases), Design (Data Modeling and Flows, Modeling), Coding (Development and Unit Testing), and Deployment (QA, System Test, Production Promotion).
If deploying security across all of these phases is not feasible in the near term, or if an iterative, phased approach is desired, where "centers of excellence" are developed at discrete points in the process, here are some ways to phase security into the SDLC:
1. Top-Down Approach
This approach adds security requirements to the functional and non-functional requirements and the Use Cases. These requirements can be derived from the Information Security Policy and from meetings with security stakeholders. Policy statements are mapped to the requirements and Use Case documents, and the requirements then drive the design, development, and testing of the system. The strengths of this approach are that requirements are clearly defined and that implementation is left to the domain experts, who can make the design and development decisions that work best in their own technology space.
Key indicators for use: If the organization has a clear Information Security Policy that addresses issues like data classification and acceptable use in a way that can be translated into requirements, then this approach can be a good starting point.
Risks in this approach: the Information Security group is effectively outsourcing design and implementation decisions, and it lacks an overall compliance view.
Mitigate this risk by writing security-centric Use Cases that show behavioral flows, exceptional flows, and pre- and post-conditions, giving more specificity to designers, developers, and testers. Getting actionable security requirements into developers' hands from the early stages is a huge advantage towards seeing them realized in production. Additionally, exorcise the word "security" from as many requirements as possible and be more specific about the actual goal, e.g. "authenticate a user," not "secure a user."
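As a minimal sketch of what an actionable requirement can look like in a developer's hands, the statement "only authenticated users may view account data" could be handed over as an executable acceptance test rather than as "secure the account page." The endpoint, base URL, and JUnit 5 setup here are illustrative assumptions, not part of any particular system:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

// Hypothetical acceptance test derived from the requirement
// "only authenticated users may view account data" -- a concrete goal,
// not a generic "secure the account page" statement.
class AccountAuthenticationRequirementTest {

    // Base URL of the system under test; an assumption for this sketch.
    private static final String BASE_URL = "https://app.example.com";

    @Test
    void unauthenticatedRequestIsRejected() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/account"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The pre-condition of the Use Case: no session, no account data.
        assertEquals(401, response.statusCode());
    }
}
```

A test like this is something designers, developers, and testers can all read the same way, which is the point of driving the requirements from the top down.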
2. Testing and Validation Approach
The top-down approach gets Information Security involved at the beginning of development. The testing and validation approach targets the end of the development lifecycle. Security systems testing, penetration testing, and white- and black-box approaches can be used to validate that the system meets its security goals.
Benefits of this approach are that the Information Security group has a better handle on the qualities of the code that is to be deployed and can find implementation flaws. The drawback is that there may not be time to fix all of the issues that are found. One of the recursive issues at play here is: if the Information Security Policy is not clearly defined, then what are the testing criteria? Industry best practices such as CIS, SANS, and OWASP usually fill this gap.
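A small, repeatable black-box check is one way to start down this path. The sketch below, which assumes a hypothetical target URL and a header list drawn from common OWASP/CIS hardening guidance, only probes response headers; a real penetration test goes much deeper, but even this gives Information Security a pass/fail check it can run on every release:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Minimal black-box check: fetch a page and report which
// commonly recommended security headers are missing.
public class HeaderCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical target; replace with the system under test.
        String target = args.length > 0 ? args[0] : "https://app.example.com/";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(target))
                .GET()
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());

        // Headers commonly cited in OWASP/CIS hardening guidance.
        List<String> expected = List.of(
                "Strict-Transport-Security",
                "X-Content-Type-Options",
                "Content-Security-Policy");

        for (String header : expected) {
            boolean present = response.headers().firstValue(header).isPresent();
            System.out.println((present ? "PASS " : "FAIL ") + header);
        }
    }
}
```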
3. Start-in-the-Middle Approach
This approach focuses on code review and testing during the development phase, using peer review and automated source code analysis tools to identify security bugs in the code as it is being developed. The source code analysis tool landscape has gotten a lot more interesting in recent years, making this an attractive approach. The advantages include being involved early enough to both find *and* fix security bugs, and providing information about bugs to developers in a language that is understandable to them. The disadvantages are that some security errors may be at the design level, where it may be too late to resolve them, and that the lack of a clear policy and standards leaves the same gap as in the Testing and Validation approach.
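To make that concrete, here is the sort of implementation bug a source code analyzer or a peer reviewer flags, along with the fix expressed in the developer's own terms. This is an illustrative JDBC sketch; the table and column names are made up:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative JDBC code; the schema is hypothetical.
public class OrderLookup {

    // The kind of finding an analysis tool reports: user input
    // concatenated into a SQL string (SQL injection).
    public ResultSet findOrdersUnsafe(Connection conn, String customerId)
            throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM orders WHERE customer_id = '" + customerId + "'");
    }

    // The fix, stated in a language developers understand: bind the value
    // with a parameterized query instead of building the string by hand.
    public ResultSet findOrdersSafe(Connection conn, String customerId)
            throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM orders WHERE customer_id = ?");
        stmt.setString(1, customerId);
        return stmt.executeQuery();
    }
}
```

Reporting the finding as "concatenated SQL, use a parameterized query" is far more actionable for a developer than "the order lookup is insecure."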
A fourth way, which may be combined with any of the above, is training in the SDLC. Focus on a center of excellence in one area and train the subject matter experts on mapping security to their space for the remaining areas. This approach decentralizes some control and empowers domain experts to make the correct design and implementation choices. Again, policy and/or standards are important so that there is clarity around the goals; the technique by itself is never enough.
Remember, these are starting points, not the end game. The whole point of a phased approach is to *not* boil the ocean. Phased approaches are also something I advocate because they force InfoSec to operate (and iterate) like a software development team.