Wednesday, 21 December 2011

Breaking the Inevitable Niche/Vertical Technology Security Vulnerability Lifecycle

One of the observations we’ve made over the past fifteen years or so is that things have only slightly improved with regard to new, niche or vertical-specific technologies, security and the inevitable vulnerability lifecycle. By this we mean niche and vertical software vendors achieving the utopia of building robust security throughout the development lifecycle to reduce and mitigate vulnerabilities and sustain software security (i.e. a Security Development Lifecycle, or SDL).

Instead we live in a world where market forces all but dictate that security is taken seriously only once a product or market has matured, or once early exposure has come to the attention of researchers and/or regulators. While SDLs have become a hot topic over the last ten years, the reality is that most organizations see security not as a measure of quality but as a cost. So getting a 'minimum viable product' to market under time and competitive pressure is often still the norm, especially in markets where security hasn't previously hurt the business.

This might sound like a ridiculous statement, but bear with us. If we look at what typically happens with a new technology or market that successfully matures, we see a lifecycle along the following lines:
  • A new technology is developed and a 'minimum viable product' is rushed to market.
  • The product or market succeeds and matures.
  • Success attracts the attention of security researchers and/or regulators.
  • Pressure mounts and security investment is retrofitted.

The net result is that vendors have no incentive to front-load their investment in security until they know that their entry into an existing market, or their establishment of a new one, is going to be a success. The reality is that if your products are not a success, it's unlikely security researchers will look at them (exceptions exist), so you won’t get pressure from clients about security (beyond, perhaps, superficial marketing buzzword-bingo requirements).

So this leads us to the inevitable vulnerability lifecycle for successful, but initially less common or niche, technologies.

If you think we’re just spinning a line, we've seen some very good examples over the past ten years or so. Essentially, it's a snowball effect: a technology piques the interest of a security researcher or academia (or a funded programme is created); time is invested, papers are released and presentations are given. This in turn raises the profile of the technology and its weaknesses, which increases the pressure on the technology and its vendor.

Even technologies considered obscure or inaccessible will, over time, become the subject of scrutiny and security research. Independent researchers can obtain indirect funding for their time (via programs such as ZDI), or academia can invest, permitting access to software or hardware the vendor considered out of reach. Additionally, where there is a drive within the community to create an open-source implementation, it can typically be repurposed for security research. A great example here is OpenBTS/OpenBSC and everything it led to in the field of active GSM/GPRS research: a technology previously considered out of reach by vendors and industry bodies was methodically picked apart. We can expect that it won't take twenty years (as it did for GSM) for future technologies to come under such close scrutiny.
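
As a concrete (if simplified) illustration of why open implementations matter: once researchers can generate and inject traffic themselves, even a naive mutation fuzzer becomes viable. The Python sketch below bit-flips a hex-encoded SMS PDU; the seed bytes, the field comments and the deliver_to_handset hook are hypothetical placeholders for whatever injection path (e.g. an OpenBTS/OpenBSC-based test cell) a real rig would use.

    import random

    # Illustrative seed: a hex-encoded SMS PDU. The bytes and the field
    # comments are hypothetical, not a real captured message.
    SEED_PDU = bytes.fromhex(
        "0791448720003023"        # SMSC address (illustrative)
        "040B914477581006500000"  # sender address, PID, DCS (illustrative)
        "99309251619580"          # timestamp (illustrative)
        "0AE8329BFD4697D9EC37"    # user data length + packed payload
    )

    def mutate(pdu: bytes, max_flips: int = 4) -> bytes:
        """Randomly flip a few bits: the crudest possible mutation strategy."""
        buf = bytearray(pdu)
        for _ in range(random.randint(1, max_flips)):
            i = random.randrange(len(buf))
            buf[i] ^= 1 << random.randrange(8)
        return bytes(buf)

    def deliver_to_handset(pdu: bytes) -> None:
        """Hypothetical delivery hook: in a real rig this would inject the
        PDU toward the device under test via an open-source base station
        stack (e.g. OpenBTS/OpenBSC). Here it just prints the mutant."""
        print(pdu.hex())

    if __name__ == "__main__":
        for _ in range(20):
            deliver_to_handset(mutate(SEED_PDU))

Crude as it is, this is exactly the kind of tooling that becomes cheap to build once a previously closed stack is opened up, and it is how 'inaccessible' technologies end up methodically picked apart.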

Examples of technologies that we've had direct experience with that have followed this cycle include:
  • Cellular application protocols (WAP, SMS, MMS, etc.).
  • GSM, GPRS and 3G networks.
  • Mobile operating systems.
  • Mobile handset cellular baseband.
  • Bluetooth.
  • In-car telematics.
  • SCADA.
  • Fixed line telecommunication equipment.
There are others we don’t have direct experience of, yet they have also followed the same cycle:
  • Embedded healthcare technology.
  • Smart grid (arguably a derivative of SCADA).
In each of these cases it wasn’t a question of IF there were security vulnerabilities, but of which security researcher would get access to the technology first, find them and publish. It is hard to believe that vendors consider a 'lack of technology access' a suitable security stop-gap until such time as market or regulatory forces demand that security issues be fixed and a mature SDL be deployed.

An example of the frustration felt in some quarters can be seen in emerging themes from the US government such as Designed-In Security (DIS).

So if you’re a vendor and you’re starting to receive reports of security vulnerabilities in your products, it means you’re reaching a stage of market penetration where you need to re-invest some of the return and start paying down the security debt you've incurred to achieve that success. For future products, a lightweight SDL will likely be deployed to try to regain control of the security balance.

However, the reality is that vendors will likely only ever deploy a full SDL if security has a material effect on their business. This material effect could be regulatory, customer or market-differentiation driven.

For the security research community, much like for the great explorers of old, it’s a continual race to find the new land of opportunity: a land where new vulnerabilities are easily found and a technology is ripe for exploitation.

So in summary: hardware and software security cannot be ignored, no matter how niche or vertically aligned the technology. If we ignore security we're laying eggs that, if the technology or market is successful, will hatch into bugs (of the security kind) being reported. These bugs may in time lead to a full-on infestation that fatally undermines the security of the product. To break this cycle we need to treat not only the immediate host of the bugs (the software) but also the environment (development practices) to stop re-infestation (through component re-use) of future products.
