Context in Security Reviews

When reviewing security controls in a project's technical design, the application's context is often vital to making sound, risk-based decisions. By context, I mean understanding what the application does, what data it will handle, how customers or colleagues will use it, and how it works, even at a high level. Knowing this determines the value of the data assets to attackers, helps drive discussions about appropriate controls and about the depth and scope of testing, and identifies the threats the application faces. When the context is not well understood, there is a tendency to apply every available control to cover every risk, leading to inappropriate or poorly implemented controls.

What minimum contextual information do we need? 

For instance, suppose you learn at the start of a project that the new, Internet-facing application will capture 500,000 new customer records per year. These records include name, surname, physical and postal addresses, phone and mobile numbers, identity numbers, and credit and debit card details used to process payments. The business sees the cloud as the way forward and wants to use WeCannotCode to develop the software.
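To make the minimum context concrete, here is a minimal sketch, assuming Python and an illustrative ApplicationContext structure of my own invention, of the kind of context record an analyst might capture at project kick-off. The field names and values simply mirror the example above; they are not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationContext:
    """Minimum context an analyst might capture at project kick-off (illustrative only)."""
    name: str
    purpose: str                     # what the application does
    exposure: str                    # e.g. "internet-facing" or "intranet"
    data_types: List[str] = field(default_factory=list)  # what data it handles
    records_per_year: int = 0        # rough volume, to gauge the size of a breach
    hosting: str = "unknown"         # e.g. "cloud", "on-premises"
    suppliers: List[str] = field(default_factory=list)   # third parties bringing supplier risk

# The worked example from above, captured as context:
payments_app = ApplicationContext(
    name="New customer portal",
    purpose="Capture customer records and process card payments",
    exposure="internet-facing",
    data_types=["name", "address", "phone", "identity number", "card data"],
    records_per_year=500_000,
    hosting="cloud",
    suppliers=["WeCannotCode"],
)
```

Even a lightweight record like this, filled in early, is enough to start the risk conversation that follows.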

From a pure compliance point of view, knowing this context, you know the application faces several threats before you get into any technical detail. These include regulatory, legal and contractual risks (cloud, outsourcing, privacy and PCI DSS), fraud (identity theft, card data theft, unauthorised payments), reputational damage, and possible operational risks (downtime). That should lead you (or compliance) to define mandated controls as a minimum and give a security analyst pointers to investigate in more detailed threat models. The context should also give you some idea of how a bad actor might view the application: a trove of data for identity theft, phishing and other scams, sitting on a company website exposed to the Internet. Knowing the volume of data gathered means a successful compromise would be quite the haul for the bad guys and a big headache for the good guys. Understanding that the application faces the Internet means it is open to a broad community of good customers and bad actors. Finally, knowing WeCannotCode is involved means you must also deal with supplier risk.

In another example, if the application is an intranet site hosting company brochureware for general consumption by staff, this context will drive a more limited security response. You know it is internal only, that the information is broadly available, and that no sensitive data is inputted or processed; fraud and operational risks are unlikely. While some controls should still be in place, for example safeguards to manage content, a more limited selection of controls is necessary, and testing may be limited to vulnerability scans.

Understanding the context of an application enables a more measured security response. The earlier the context is understood, the more likely a project is to integrate security controls. A well-informed analyst is more likely to secure funding or resources for penetration testing or other security controls. Even if security is engaged late, a good understanding of the application's context makes applying controls more effective and accurate, even if they need to be retrofitted into the application with a hammer. Setting security controls without context leads either to excess controls and testing overkill, with vain but valiant attempts by analysts to boil the ocean, or, worse, to underestimating the security required. Take the time to understand the context before worrying about threat models, data flow diagrams and other analyst things!