Securing Software – Part 2 – Automation tools and metrics


In part 1 of this series, I explored the process of securing software.

Threat modeling, security code review, security verification/testing, and cryptography review are tasks that have a significant impact on the eventual security quality of a product or deployment.


Threat modeling – describes the assets (motive), access methods (means), and well-known types of vulnerabilities and deployment errors (opportunity). There is no good automated tool for the data-flow analysis that threat modeling requires. However, network traffic analysis tools, data access pattern analysis tools, DLP tools, and data aggregated from IDS/IPS/UTM/deep-inspection firewalls can help create a baseline model of network data flow, which human effort can then refine.
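As a minimal sketch of what such a baseline could look like, the following Python snippet aggregates exported flow records into a source/destination/port map. The CSV file name and column names (src, dst, dst_port, bytes) are assumptions; substitute whatever your traffic-analysis tool actually exports.

    # Minimal sketch: build a baseline data-flow model from exported flow records.
    # The CSV layout (src, dst, dst_port, bytes) is hypothetical.
    import csv
    from collections import defaultdict

    def build_flow_baseline(csv_path):
        """Aggregate flow records into a {(src, dst, port): total_bytes} map."""
        baseline = defaultdict(int)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["src"], row["dst"], int(row["dst_port"]))
                baseline[key] += int(row["bytes"])
        return dict(baseline)

    if __name__ == "__main__":
        for (src, dst, port), total in sorted(build_flow_baseline("flows.csv").items()):
            print(f"{src} -> {dst}:{port}  {total} bytes")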


Security code review – Tool-assisted code review to improve security quality has a high ROI, considering that a large proportion of vulnerabilities arise from coding errors. Several excellent tools are available from a number of vendors, especially for the more mainstream languages – C++, Java, C#. However, the newer scripting languages/platforms such as Python, Ruby, and Node.js are not as well supported, if at all.
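As an illustration of tool-assisted review, the sketch below runs a static analyzer that can emit JSON and summarizes its findings by severity. Bandit (a Python analyzer) is used as the example; its flags and the JSON keys shown reflect recent Bandit releases, so both would need adjusting for a different tool.

    # Minimal sketch: run a static analyzer and summarize findings by severity.
    # Bandit is used as the example; adjust flags and JSON keys for your tool.
    import json
    import subprocess
    from collections import Counter

    def run_bandit(source_dir, report_path="bandit_report.json"):
        subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json", "-o", report_path],
            check=False,  # Bandit exits non-zero when it finds issues
        )
        with open(report_path) as f:
            return json.load(f)

    def summarize(report):
        # Count reported issues per severity level (LOW/MEDIUM/HIGH in Bandit).
        return dict(Counter(r["issue_severity"] for r in report.get("results", [])))

    if __name__ == "__main__":
        print(summarize(run_bandit("src/")))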


Cryptography review – Adequate encryption strength is critical in information security. Network traffic analysis tools can identify crypto metadata and help ascertain the strength of the encryption (or lack thereof) on certain data exchanges within the IT deployment.
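A minimal spot-check along these lines, using only the Python standard library: connect to a TLS endpoint, report the negotiated protocol version and cipher suite, and flag anything below TLS 1.2. The host is a placeholder; a full cryptography review would also cover certificates, key sizes, and non-TLS data exchanges.

    # Minimal sketch: report the negotiated TLS version and cipher for one endpoint.
    import socket
    import ssl

    def check_tls(host, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                version = tls.version()              # e.g. "TLSv1.3"
                cipher, _proto, bits = tls.cipher()  # negotiated cipher suite
                weak = version in ("SSLv3", "TLSv1", "TLSv1.1")
                return {"host": host, "version": version, "cipher": cipher,
                        "bits": bits, "weak": weak}

    if __name__ == "__main__":
        print(check_tls("example.com"))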


Security verification/pen-testing – Verification of security is harder than verification of functionality, and usually more complex. Several purpose-built tools are available: SPIKE and Peach for fuzz testing of protocols and files, sqlmap for SQL injection testing, commercial web vulnerability scanners, Nmap for network vulnerability discovery, proxy tools for intercepting network traffic and injecting attack vectors, and so on.
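In the spirit of SPIKE/Peach, here is a minimal file-fuzzing sketch: it randomly mutates bytes of a known-good sample and feeds each mutant to the target parser, recording inputs that crash or hang it. The target command and seed file are hypothetical placeholders.

    # Minimal fuzzing sketch: mutate a seed file and watch the target for crashes/hangs.
    # "./target_parser" and "sample.bin" are placeholders for your program and seed input.
    import random
    import subprocess

    def mutate(data, flips=8):
        buf = bytearray(data)
        for _ in range(flips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz(seed_path, target_cmd, iterations=500):
        seed = open(seed_path, "rb").read()
        failures = []
        for i in range(iterations):
            with open("fuzz_input.bin", "wb") as f:
                f.write(mutate(seed))
            try:
                proc = subprocess.run(target_cmd + ["fuzz_input.bin"],
                                      capture_output=True, timeout=5)
            except subprocess.TimeoutExpired:
                failures.append(("hang", i))
                continue
            if proc.returncode < 0:  # terminated by a signal, e.g. SIGSEGV
                failures.append(("crash", i))
        return failures

    if __name__ == "__main__":
        print(f"{len(fuzz('sample.bin', ['./target_parser']))} failing inputs found")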


Defining when it is "DONE" for security tasks – In Agile product development/deployment practices, it is important to define the "DONE" criteria so the program manager can verify that a task/backlog item is completed. This depends a lot on the context of the product/deployment, and can be defined in terms of the metrics below – a certain trend level is attained, certain kinds of vulnerabilities are exhaustively removed, or the static code review tool reports no issues of certain kinds.
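One way to make such a "DONE" criterion machine-checkable is sketched below, assuming findings have already been aggregated into records with cwe and status fields. The field names, the banned-CWE list, and the criteria themselves are illustrative assumptions, not a standard.

    # Minimal sketch of an automated "DONE" check for a security backlog item.
    # Criteria (examples only): no open findings for the banned CWE types, and the
    # open-issue count did not grow since the previous run.
    BANNED_CWES = {"CWE-89", "CWE-79", "CWE-798"}  # SQLi, XSS, hard-coded credentials

    def is_done(findings, previous_open_count):
        open_findings = [f for f in findings if f["status"] == "open"]
        banned_open = [f for f in open_findings if f["cwe"] in BANNED_CWES]
        return len(banned_open) == 0 and len(open_findings) <= previous_open_count

    if __name__ == "__main__":
        sample = [
            {"cwe": "CWE-89", "status": "fixed"},
            {"cwe": "CWE-22", "status": "open"},
        ]
        print(is_done(sample, previous_open_count=3))  # True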


Suggested metrics

Two types of metrics are necessary:

  • Quality improvement metrics – indicating improvement in security quality of a product or deployment, and
  • Quality comparison metrics – for comparing security quality of two similar products

Quality improvement metrics

Once the product team or IT team chooses a set of tools and runs the assessment periodically, a number of security quality trends suggest themselves to be tracked across successive runs (a minimal trend-computation sketch follows the list):

  1. trend of the number of issues reported by each tool.
  2. trend of the number of outstanding issues, for each individual tool as well as in aggregate.
  3. trend of the total number of issues fixed without regression between successive runs.
  4. trend of the number of security test cases derived from a given set of functional tests, and how often those cases uncover vulnerabilities.
  5. trend of the cost per fixed issue over time – an important indicator of improving ROI and improving skills of the team.
  6. comparison of discovered vulnerabilities against lists of the most prevalent types of vulnerabilities/errors (say, the OWASP Top 10 or the CWE Top 25) – to identify the need for an intervention, such as improving skills in specific areas.
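The trend-computation sketch referenced above, assuming each assessment run has already been reduced to a per-tool issue count, an outstanding-issue count, and the spend for that run. The record layout and the numbers are hypothetical.

    # Minimal sketch: compute a few of the trends above from successive run summaries.
    runs = [  # oldest first; hypothetical data
        {"reported": {"sast": 120, "dast": 40}, "outstanding": 150, "cost": 9000},
        {"reported": {"sast": 95,  "dast": 33}, "outstanding": 110, "cost": 7000},
        {"reported": {"sast": 70,  "dast": 25}, "outstanding": 80,  "cost": 5500},
    ]

    def trend_report(runs):
        rows = []
        for prev, curr in zip(runs, runs[1:]):
            fixed = max(prev["outstanding"] - curr["outstanding"], 0)
            rows.append({
                "reported_total": sum(curr["reported"].values()),
                "outstanding": curr["outstanding"],
                "fixed_since_last_run": fixed,
                "cost_per_fixed_issue": round(curr["cost"] / fixed, 2) if fixed else None,
            })
        return rows

    if __name__ == "__main__":
        for i, row in enumerate(trend_report(runs), start=2):
            print(f"run {i}: {row}")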

Quality comparison metrics

In the process of IT deployment or product development, teams often have to pick the product or component with the better security quality. The following metrics help with that comparison (a simple checklist-scoring sketch follows the list):

  1. A self-reported survey or checklist – a useful baseline quality metric to ascertain whether the team has performed a particular task or checked some parameter (e.g. the type of encryption used, whether a 3rd-party pen-test was done and what vulnerabilities were reported, etc.). Each response can be scored with a certain number of points for comparison.
  2. Assertion/assessment of the absence of particular types of vulnerability (or of the number of unfixed vulnerabilities) – especially if such a goal/requirement is stated explicitly beforehand.
  3. Such an explicit requirement also helps in defining the scope of a 3rd-party pen-test: the vendor can be asked to identify all instances of a few types of vulnerabilities using whatever means, tools, and techniques they have.
  4. Analysis of past vulnerabilities in a product – frequent security patches/fixes may indicate poor processes for improving security quality. If a specific type of unacceptable vulnerability is repeatedly discovered in a product, it may indicate unacceptable overall quality.
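The checklist-scoring sketch referenced above: it turns a self-reported yes/no checklist into a weighted score so two candidate products can be compared numerically. The questions and weights are illustrative assumptions, not a published checklist.

    # Minimal sketch: score a self-reported security checklist for comparison.
    CHECKLIST = {  # question -> points awarded for a "yes" (weights are examples)
        "Threat model reviewed in the last release cycle": 10,
        "Static analysis run in CI with no high-severity findings": 15,
        "3rd-party pen-test performed within the last 12 months": 15,
        "All data in transit encrypted with TLS 1.2 or higher": 10,
    }

    def score(answers):
        """answers maps each checklist question to True/False."""
        return sum(points for q, points in CHECKLIST.items() if answers.get(q))

    if __name__ == "__main__":
        product_a = {q: True for q in CHECKLIST}
        product_b = {q: (i % 2 == 0) for i, q in enumerate(CHECKLIST)}
        print("Product A:", score(product_a), "Product B:", score(product_b))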

Automation challenges

Aggregating useful data from several different tools can be a challenge, especially if it needs to be done frequently. Analyzing the reported issues is also expensive. It would be desirable to automatically aggregate the data reported by the various tools into a single database or a single format. In addition, it would be very useful to run the tools on a schedule and push the "interesting" data to a common database for trend analysis and reporting.
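A minimal sketch of that aggregation step: normalize findings from different tools into one SQLite table so trends can be queried across runs. The schema and the sample rows are assumptions; in practice each tool would need a small parser that maps its report format onto these columns.

    # Minimal sketch: store normalized findings from different tools in SQLite.
    import sqlite3

    SCHEMA = """CREATE TABLE IF NOT EXISTS findings (
        tool TEXT, run_date TEXT, severity TEXT, title TEXT, location TEXT)"""

    def store(db_path, rows):
        con = sqlite3.connect(db_path)
        with con:  # commits the transaction on success
            con.execute(SCHEMA)
            con.executemany("INSERT INTO findings VALUES (?, ?, ?, ?, ?)", rows)
        con.close()

    if __name__ == "__main__":
        # Hypothetical normalized rows produced by per-tool parsers.
        rows = [
            ("bandit", "2016-01-15", "HIGH", "Use of weak MD5 hash", "auth.py:42"),
            ("nmap",   "2016-01-15", "MEDIUM", "Telnet service open", "10.0.0.5:23"),
        ]
        store("security_findings.db", rows)
        con = sqlite3.connect("security_findings.db")
        for row in con.execute("SELECT tool, COUNT(*) FROM findings GROUP BY tool"):
            print(row)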


With a view to this automation, a Security Automation Framework was developed and demonstrated at OWASP AppSec USA 2015. In the next part, I describe the framework in more detail and encourage you to try it out. It is the beginning of a potentially very useful tool chain.

Disagree? Already solved the challenges? Comment?


