Friday, 20 January 2012

Musings on Secure Software Development

We've done many application source code assessments looking for security issues. The output is essentially a list of risks or vulnerabilities identified in the product, each described in detail along with suggested mitigations, workarounds or example code. Clients typically commission this sort of assessment infrequently: per release, every six months or annually. This raised the question internally of how best to perform security assessments of applications and how to integrate the results into the Software Development Life-Cycle.

Discussions that we've had with developers over the use of security testing have raised some interesting points. One developer we talked to said they got great value from free trials of code auditing tools, but didn't feel that in many cases the value was sufficient to justify purchase. Essentially the tools were used to modify the development process, educating the developers and refining coding practices. If a tool reported an exposure, that exposure was investigated, the code fixed, and the fix applied not only to every instance in the codebase (identified by the tool or not) but also to the approach taken to all new code. In our experience this is very much an exception to the norm.

The norm appears to be formal, semi-regular assessments of the code, typically late in the life-cycle, when the code is:
  • Soon to enter beta.
  • Soon to ship to end customers.
  • Soon to enter production state. 
We see lots of requests for security consulting at the sharp end of a development cycle: the product is ready to ship or go into production, and now needs a security rubber stamp. Does this approach work? Of course it doesn't (unless you have exceptionally security-savvy developers). The reality is that, with more and more development moving away from waterfall models to agile models built on iterative, daily or continuous integration approaches, security testing needs a matching approach. With agile development models you don't do quality assurance testing only at the end, and as security is a measure of quality, it follows that you shouldn't leave security testing to the end either. As quality assurance testing (functional, new feature and regression) has adapted to this evolution, so should security.

The reality is that when security testing is done at or near the end of a development cycle, there can be pressure on the assessment team to report fewer findings, or downgrade them, so as not to derail the release schedule. Our pub-fuelled conversations (at the not-so-sharp end) have raised the point that customers can, at times, want an 'everything is OK' assessment service, the 'tick in the box', where they get security sign-off and things are good to go.

Diligent assessment of an application performed on an infrequent basis tends to have the following characteristics:
  • Highly detailed reporting.
  • Large volume of results with detailed analysis.
  • Can be overwhelming for the application developers, particularly if they're not security savvy.
  • Large time investment in fixing code and mitigating exposures.
  • New feature development suspended whilst fixes are developed, deployed and tested.
  • Often perceived as a restrictive process on the development practice.
  • The developer perspective of having their "homework" marked.
  • Can cause developers to reject security due to the impact an assessment has on progress.
The assumption, of course, is that it's better to have visibility of the security exposures within an application. A comprehensive audit can educate the developers, potentially to the point where routine checks against coding standards and criteria are augmented with tests for security faults. One approach we like is that of making wrong code look wrong, blogged about by Joel Spolsky.
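To illustrate the idea, here is a minimal sketch of that naming convention applied to cross-site scripting. The 'us_'/'s_' prefixes and the function names are our own illustrative choices, not anything from Spolsky's post or a library: 'us_' marks an unsanitised string fresh from the user, 's_' marks one safe to emit, and a single function is the only place one becomes the other.

```python
import html

def encode(us_value: str) -> str:
    """The only place an 'us_' (unsanitised) string may become 's_' (safe)."""
    return html.escape(us_value)

def render_greeting(us_name: str) -> str:
    s_name = encode(us_name)           # looks right: encoded before output
    return "<p>Hello, " + s_name + "</p>"
    # return "<p>Hello, " + us_name + "</p>"   # looks wrong at a glance:
    #                                          # a raw 'us_' in HTML output
```

Once the convention is habitual, an unreviewed 'us_' variable in output code stands out on sight, no tool required.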

The list above (which is by no means exhaustive) raises some questions: is formal, infrequent testing the best approach, and should security exposures in an application be treated any differently from any other software bug or fault? In our opinion, no. As we've said previously in this post and others, security is just another measure of overall quality, just with different risks associated with the bugs (or vulnerabilities, as they're also known). For more on these different risks, refer to our earlier post 'The Business v Security Bugs'.

One of the things we've observed (which helped form the basis for this post) was the iterative use of our online ApexSec engine against the same set of Oracle Application Express source code. At regular intervals over a period of around a week, the same application was uploaded to us and analysed. Each time, fewer and fewer issues were identified, to the point where (we assume) the residual risk was deemed acceptable. Although for the developers this was the execution of an external process, the principle of frequent security analysis or inspection does appear to have distinct advantages:
  • Security issues are treated the same as other routine software bugs.
  • Smaller volume of findings at any one time.
  • Becomes part of the daily development routine.
  • Encourages secure development.
  • Educates developers to embrace security as just another facet of development.
  • Limited impact to onward development.
To us, frequent security assessments of an application, integrated into the development cycle, make a great deal of sense. Using software to inspect your source code as part of the daily/nightly build process, identifying vulnerabilities in the same way as any other software bug, seems more than sensible. Of course there will also be a place for manual assessment by human beings to catch the issues that static analysis just can't catch today, such as business logic flaws with no equivalent in traditional systems (ever try walking into a betting shop and entering a negative sum on your betting slip?).
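The betting-slip flaw is worth spelling out, because a type checker or static analyser sees nothing wrong with an integer that happens to be negative. A hedged sketch, with entirely hypothetical names and a balance held in pence, of the kind of explicit business rule a human reviewer looks for:

```python
def place_bet(balance_pence: int, stake_pence: int) -> int:
    """Return the new balance after staking; reject nonsense stakes.

    Without the first check, a stake of -500 would *credit* the
    account: the negative-betting-slip logic flaw.
    """
    if stake_pence <= 0:
        raise ValueError("stake must be positive")
    if stake_pence > balance_pence:
        raise ValueError("stake exceeds available balance")
    return balance_pence - stake_pence
```

The code is trivially "correct" to a tool either way; only knowledge of the domain says which version is a vulnerability.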

In summary, as Secure Development Life-cycles have taught us, security must exist throughout the development process. The quality gates that an SDL sets up are good as final checks before progression; however, security must be a consistent work item in the same way that features, overall testing and quality are. We'd go as far as saying security must be a theme, not just a work item. Pushing tooling out to developers so they can identify new security issues on a daily basis is a powerful weapon in the software security war. Combined with staff who perform manual analysis throughout the development process, it makes a potent mix for finding and eradicating new security issues before they see wider release.
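In practice, pushing that tooling into the nightly build can be as mundane as a gate that fails the build on new findings, exactly as failing unit tests would. A sketch under stated assumptions: the JSON report format and its 'severity' field are invented here, standing in for whatever your actual analyser emits.

```python
import json

# Assumed report shape: a JSON array of findings, each with a "severity"
# of "low", "medium" or "high". This is illustrative, not any real tool's format.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def build_should_fail(report_json: str, threshold: str = "medium") -> bool:
    """True if any finding meets or exceeds the severity threshold."""
    findings = json.loads(report_json)
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)
```

The design point is only that security findings sit on the same pass/fail axis as every other nightly check, rather than in a report read months later.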
