Securing Software – Part 1 – The process
Securing software is perhaps the most sought-after craft today, as companies and governments defend against cyber threats and warfare. The data is usually "at rest" (stored); it is the software that retrieves, encrypts, processes, transmits, and receives that data which puts it at risk – software that has vulnerabilities, is deployed or configured incorrectly, or is accessible to unauthorized attackers. In addition, the computing resources and software services themselves are at risk of being hijacked – again through software vulnerabilities.
Several best practices for securing software span the entire development life cycle of a product – its development, deployment, and usage – and by that we usually mean the design/code/test/release cycle: the waterfall model. The SAFECode guidelines for secure product development (see reference links at the bottom) list practices that anticipate a few weeks or months to complete. Several of the tasks, such as static code analysis, 3rd-party code security review, and security test plan updates, are naturally spread over a long period of time. For example, it is easy to pick a 3rd-party module to integrate for a particular feature, but time and resource constraints usually cause the accompanying security-diligence activities to be pushed to "later". Similarly, security testing and 3rd-party application pen-testing require a fairly functional and stable product, which happens nearer release time; they may take several days or weeks to complete and may require further code changes to fix any issues identified.
Add to the mix the security audit/compliance requirements such as PCI-DSS, FIPS, ISO 27001, HIPAA, etc., which define the baseline security requirements in specific contexts.
Moreover, the software development process has changed and shortened significantly over the last few years. The practice of rapidly deploying disparate software as "services" offerings and frequently updating those services – adding features, changing content, changing platforms, changing user interfaces – gives rise to the Continuous Integration/Continuous Deployment (CI/CD) workflow, and software security tasks must fit into these short timelines.
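To make this concrete, here is a minimal sketch of how security tasks might be wired into a CI/CD pipeline as blocking "gates". The gate names and commands below are placeholders I am assuming for illustration (they merely print a message); a real pipeline would invoke actual scanners here.

```python
import subprocess
import sys

# Hypothetical CI security gates. The commands are placeholders that
# stand in for real tools, e.g. a static analyzer, a 3rd-party
# dependency scanner, or a secret/credential detector.
SECURITY_GATES = [
    ("static analysis",  [sys.executable, "-c", "print('static analysis stub')"]),
    ("dependency scan",  [sys.executable, "-c", "print('3rd-party scan stub')"]),
    ("secret detection", [sys.executable, "-c", "print('secret scan stub')"]),
]

def run_gates(gates):
    """Run each gate command; return the names of the gates that failed."""
    failed = []
    for name, cmd in gates:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:  # non-zero exit = findings or tool error
            failed.append(name)
    return failed

if __name__ == "__main__":
    failed = run_gates(SECURITY_GATES)
    if failed:
        print("security gates FAILED:", ", ".join(failed))
        sys.exit(1)  # break the build so issues are fixed before deployment
    print("all security gates passed")
```

The design point is simply that each security task becomes a pass/fail step on the same short timeline as the rest of the pipeline, instead of being deferred to "later".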
So the process and best practices for software security are well established, and the guideposts – the compliance requirements – are well known, at least for the waterfall model of product development and deployment. Tools from companies such as Security Compass (SD Elements) help track the tasks from a program-manager perspective. But the actual practice of executing the tasks, tracking them at a fine grain, measuring their efficacy, and determining the "degree of assurance" of software security is still evolving.
In other words, we know fairly well (or have tools that help with):
- "what" we want to achieve (risk minimization), and
- "which tasks" need to be done.
But, on the other hand, we:
- usually do not have the technical expertise to do some or all of those tasks,
- don't track completion of the tasks or how effective they are,
- don't know well how they contribute to security assurance in a particular context, and
- don't have good metrics to determine the rate of our progress.
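As a sketch of what even rudimentary tracking could look like, consider recording each life-cycle security task with its completion status and findings, and deriving two crude metrics from that record. The task names and the metrics themselves are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass

@dataclass
class SecurityTask:
    """One life-cycle security task (the names used below are illustrative)."""
    name: str
    completed: bool = False
    findings_open: int = 0   # issues found, not yet fixed
    findings_fixed: int = 0  # issues found and remediated

def completion_rate(tasks):
    """Fraction of security tasks marked complete -- a crude progress metric."""
    return sum(t.completed for t in tasks) / len(tasks) if tasks else 0.0

def fix_rate(tasks):
    """Fraction of all identified findings that have been remediated."""
    total = sum(t.findings_open + t.findings_fixed for t in tasks)
    return sum(t.findings_fixed for t in tasks) / total if total else 0.0

tasks = [
    SecurityTask("threat model", completed=True),
    SecurityTask("static code analysis", completed=True,
                 findings_open=3, findings_fixed=9),
    SecurityTask("3rd-party pen-test"),  # not yet done
]
print(f"tasks complete: {completion_rate(tasks):.0%}")  # 2 of 3 tasks
print(f"findings fixed: {fix_rate(tasks):.0%}")         # 9 of 12 findings
```

Even this toy version answers two of the questions above – what has been done, and at what rate findings are being closed – though it says nothing yet about efficacy or assurance.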
In the next part, I will explore a representative set of tools that assist us in performing some of these technical tasks but leave us with the complexity of their own reporting and nomenclature, causing "security information overload" for already overworked development and deployment teams. In the third and final part, I will suggest a few metrics and an outline of an integrated tools framework with unified reporting and, potentially, a composite report analysis capability to answer the question "How secure is this software?".
Disagree? Already solved these challenges? Comments are welcome.