
Writing Effective Security Requirements

  • Writer: David Read
  • Feb 5, 2021
  • 7 min read

Updated: Feb 6, 2021

Photo by ThisIsEngineering on Unsplash

When it comes to making sure development teams produce secure products, dropping a list of requirements on the developer's desk and walking away is never the right solution. When engineering a system, it is essential to consider security's needs, but security is not the be-all and end-all of a project. It is an important part, but it needs to be assessed alongside everything else. Security requirements should be treated as just that: requirements. You should provide them using the same process that ops teams, or any other customers, do.


The role of security is to provide the bridge between security requirements and what the developer needs to do.


The best argument for making your requirements follow the same process as other teams' is that everything ends up written in a common language the developers understand, which makes it much easier for them to see what they need to do. If you drop NIST or ISO 27001 on a developer's desk, they won't know where to start. It's up to security to bridge the gap between standards (such as NIST) and what the developer has to do.


Take a look at NIST 800-63B Section 5.1.1.2, for example; this is a well-written piece on how to handle "Memorized Secret Verifiers", AKA passwords. As great as it is, the NIST documentation is still hugely complex. No developer without a keen interest in security will work through it; hell, no security architect without a keen interest will either! If you compare this to the OWASP Application Security Verification Standard (ASVS), you will notice that they have taken the NIST section above and turned it into a large number of much more straightforward bullet points (see section V2, Authentication Verification Requirements). The OWASP ASVS is much easier to understand, but unless you are working with keen developers, they will probably not go into this much detail. Part of the security role is to provide the bridge between these requirements and what the developer needs to do.
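To make that bridge concrete, here is a minimal sketch of what a few of the NIST 800-63B / ASVS password rules look like once translated into developer terms. The breached-password set is a stand-in (a real system would check a corpus such as a breached-password API), and the exact limits are just the baseline figures from the standard:

```python
# Sketch of a password acceptance check loosely following NIST 800-63B
# section 5.1.1.2. BREACHED_PASSWORDS is an illustrative stand-in for a
# real breached-credential corpus.

BREACHED_PASSWORDS = {"password", "12345678", "qwertyuiop"}

def is_acceptable_password(candidate: str) -> bool:
    """Return True if the memorised secret meets the baseline rules."""
    if len(candidate) < 8:           # NIST: require at least 8 characters
        return False
    if len(candidate) > 64:          # NIST: permit at least 64, so cap there
        return False
    if candidate.lower() in BREACHED_PASSWORDS:
        return False                 # reject known-compromised values
    # Deliberately absent, per NIST advice: no composition rules
    # ("one digit, one symbol") and no forced periodic rotation.
    return True
```

A handful of lines like this is something a developer can pick up and test immediately, which a 30-page standard is not.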


Just like we take international standards like NIST and convert them into company policy to fit the company's culture, we should do the same for development requirements. No matter what set of security controls you start with, you can usually identify which requirements are each team's responsibility. If you have one team building the authentication for an application and another building the main features (for example), you can instantly remove all the authentication requirements from the second team's list. As long as they correctly use the other team's authentication service, they will never have to care about any of the authentication requirements.


This approach is known as Service-Oriented Architecture (SOA). A great example of how SOA can do this is Netflix's approach of "Paved Roads". At Netflix, developers are free to make their own choices about how to approach development. However, the tools provided secure their systems by default and cover a range of security requirements out of the box. If developers use the tools provided, their security requirements become someone else's problem!



Security requirements need to be both complete and pertinent



In effect, requirements need to be complete and pertinent.

  • Complete: We have covered all possible vulnerabilities that we want to protect against and don't have any significant gaps.

  • Pertinent: We want our requirements to be relevant and needed.


Completeness

Completeness is business as usual for many security professionals, so I won't go into it too much. Most of us got into defensive security because we were sick and tired of seeing gaps in designs or production systems being ignored, and we felt a duty to try and fix them. We believed we could make more of a difference taking the lead on making systems more secure, and for the most part, we were right! :)


The key with completeness is to make sure you have some external framework to map your requirements against. That way, you can perform a gap analysis against a tried and tested framework to see where you might have missed something.


Pertinence

Security requirements should provide a benefit. We should make sure developers are only provided requirements that they need to care about. The best way to prove a requirement is necessary is to justify each requirement, detailing its necessity and the value received from its fulfilment.


If requirements are well defined with a clear purpose, it should be easy to see if the outcome will provide value and, by extension, their relevance. By focusing on the final product, you can also consider what value a requirement brings to the end to end solution. If, for example, some security work makes one application super secure and impossible to hack, but that service is run in a segregated network and only communicates with one other service, in reality, your requirement brings little value to the full solution. Your effort may be put to better use elsewhere.


You can also map requirements to the high-level business objectives of the company. Doing so makes it easier to provide justifications for completing security work and allows you to present your work to seniors within the context they care about. The diagram below is a simple example of how you can do this for NIST 5.1.1.2 above. If you want to learn more about approaches like this I recommend checking out SABSA.




Writing Requirements?

So you have decided to try and write development security requirements for developers. There are two types of requirements:

  • Functional Requirements: What you want the application to do, e.g. "All user logins must be logged."

  • Non-Functional Requirements: What you expect the application to be, e.g. "The application must meet HIPAA requirements."

Typically people treat most security requirements as non-functional and say "the application must be secure" or "the application must meet regulatory requirements". These tend not to be very useful or helpful. In most scenarios (at least for security) it is possible to convert non-functional requirements into functional requirements. It's possible to turn regulatory requirements into a list of tasks, and those tasks into functional requirements.


Why are functional requirements better than non-functional ones? It is straightforward to measure whether functional requirements have been met. You can check whether a feature has been added, or even better, check that the tests verify the feature exists and works. By focusing on making requirements easy to test, you can ensure that applications continue to follow the security requirements, even as they change in the future, through automated testing at release or monitoring and alerting within production. When you leave a project to move on to another, you can (hopefully) rest assured that the project will stay secure.
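As a small sketch of what "easy to test" means in practice, the functional requirement "all user logins must be logged" can be expressed directly as an automated test. The login() function and audit logger here are hypothetical stand-ins for a real application's code:

```python
# Turning "all user logins must be logged" into an automated test.
# login() is a hypothetical placeholder for the real application code.
import logging

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)

def login(username: str, password: str) -> bool:
    """Hypothetical login routine; every attempt emits an audit record."""
    ok = password == "hunter2"   # placeholder credential check
    audit_log.info("login user=%s success=%s", username, ok)
    return ok

def test_all_logins_are_logged():
    # Capture audit records with a throwaway handler.
    records = []
    handler = logging.Handler()
    handler.emit = lambda record: records.append(record.getMessage())
    audit_log.addHandler(handler)
    try:
        login("alice", "hunter2")
        login("bob", "wrong")
    finally:
        audit_log.removeHandler(handler)
    # Both the successful and the failed attempt must appear in the log.
    assert any("user=alice" in m and "success=True" in m for m in records)
    assert any("user=bob" in m and "success=False" in m for m in records)

test_all_logins_are_logged()
```

A test like this runs on every release, so the requirement keeps being verified long after the security team has moved on.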


Security Principles

When writing requirements, it's good to have a core set of principles to follow to maintain a set level of quality and keep your requirements relevant. By principles, I mean a fundamental list of conditions that any requirements you levy on developers must have. Requirements for security requirements, if you will.


You should write your principles and have them agreed with the development teams, preferably before you start. This way, you are on the same page, and you know that when you provide requirements, they are more likely to be accepted by the development team. If they do push back, you can point to the principles they follow as part of your justification.


Examples of security principles are:


Security requirements must:

  • Have a low operational cost

  • State what is needed, not how to do it

  • Map to an overall objective

  • Have a justification

  • Be given a priority

  • Be easy to test and easy to audit

  • Be written for the end to end solution


What about requirements that don't get completed?

Businesses always want to do more than they can achieve, regardless of whether it's security or product management. Even if we write the best list of requirements in the world, some will be relegated to the backlog.

It's crucial to prioritise your requirements based on their importance. Some people use numbering, risk scores or high/medium/low. Personally, I use Must/Should/Could since this is the language the developers I currently work with use.

  • Must: You cannot release if you haven't done this.

  • Should: You can release if you haven't done this, but we expect this to be implemented within a specific timeframe.

  • Could: This is a nice to have and would make you more secure but isn't necessary right now.
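The Must/Should/Could scheme can even be wired into tooling as a release gate. The sketch below assumes requirements are tracked as simple records with a priority and a done flag; the requirement names are invented for illustration:

```python
# Using Must/Should/Could priorities as a release gate.
# Requirement names and statuses here are illustrative only.

requirements = [
    {"name": "Hash passwords with a modern KDF", "priority": "Must",   "done": True},
    {"name": "Log all user logins",              "priority": "Must",   "done": False},
    {"name": "Rate-limit the login endpoint",    "priority": "Should", "done": False},
    {"name": "Add security headers reporting",   "priority": "Could",  "done": False},
]

def release_blockers(reqs):
    """Unfinished 'Must' items block the release; Should/Could go to the backlog."""
    return [r["name"] for r in reqs if r["priority"] == "Must" and not r["done"]]

blockers = release_blockers(requirements)
if blockers:
    print("Cannot release; outstanding Must requirements:", blockers)
```

Here only the unfinished "Must" item blocks the release, while the "Should" and "Could" items stay negotiable.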

Prioritising your requirements lets teams focus on the most critical issues and gives you (and the development teams) bargaining power to work out what to prioritise.


Make sure you have a budget for security


Another great reason to prioritise your requirements is that it helps with budgeting. It's crucial to budget for security, and your budget needs to reflect your risk posture. In reality, when it comes to securing applications, your business should have a defined risk posture (e.g. we estimate the cost of a breach at £X million, so we can spend £X/2 million to reduce our risk by 50%*). The goal should be to focus on implementing the minimum amount of security required to meet the business's risk posture. Any more is money that could be spent elsewhere to provide better value. If this means not implementing a security control that is needed, then that is a statement about the business's risk posture more than anything.


Sometimes we don't have the money or resources to do everything, and when we don't, we need to get the maximum gain from whatever budget we have to spend. Some companies can and need to spend a lot of money and do whatever is possible to protect themselves; others might not need to. It depends on the business you are in and the environment your company operates in.


When prioritising requirements, you might have disputes over gaps and need to decide whether you should accept the risk associated with a gap in security. In these scenarios, it's best to make sure someone else reviews and agrees with the risk. Risk acceptances like this are performed in large companies using a formal process with the risk and compliance teams. In smaller companies, you should try and find someone more senior with "skin in the game" to help pass judgement.



* In reality such calculations should be a lot more thought out than this one.


In Conclusion

This article has covered a lot very quickly. However, in conclusion:

  • Make sure your requirements cover everything necessary

  • Make sure your requirements are relevant

  • Write your requirements in a language the developer understands

  • Write your requirements so they are easy to test and validate

  • Agree on core principles with the development teams to make it easier to provide requirements and understand what is necessary

  • Prioritise your requirements

  • Budget your requirements


