King and Chen (2005) write about their BackTracker software. The idea is appealing: log everything needed to relate the sequence of events leading to an intrusion, where “everything” means processes, files, and filenames. Once an anomalous process or event has been identified, BackTracker can generate a dependency graph; that is, something else must raise the alert, and BackTracker then helps find the cause. It’s an interesting representation of an attack.
Taking this one step further than they do, perhaps these dependency graphs could themselves be used for intrusion detection?
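To make that representation concrete, here is a minimal sketch (in Python, with invented event records; this is not King and Chen’s implementation, which among other things uses time ordering to prune the graph) of building a dependency graph from logged OS events and walking it backwards from a detection point.

```python
from collections import defaultdict, deque

# Hypothetical event log: (source, relation, target) triples, e.g. a process
# writing a file, or a file being executed as a process.
events = [
    ("process:httpd", "wrote", "file:/tmp/dropper"),
    ("file:/tmp/dropper", "executed_as", "process:dropper"),
    ("process:dropper", "wrote", "file:/etc/passwd"),
    ("process:sshd", "read", "file:/etc/motd"),
]

# Reverse dependency graph: for each object, which objects affected it.
affected_by = defaultdict(set)
for source, _relation, target in events:
    affected_by[target].add(source)

def backtrack(detection_point):
    """Return every object/process that transitively influenced the detection point."""
    seen, queue = set(), deque([detection_point])
    while queue:
        node = queue.popleft()
        for parent in affected_by[node]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

# Something else raises the alert (here, on /etc/passwd); the BackTracker-style
# analysis then walks backwards to find the chain of events that led to it.
print(backtrack("file:/etc/passwd"))
# -> {'process:dropper', 'file:/tmp/dropper', 'process:httpd'}
```

The reverse traversal is the whole trick: given a detection point, everything reachable backwards through the logged events is a candidate cause.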
Suh et al. (2004) propose a wonderful method for tracking taintedness and denying dangerous operations. It’s elegant, easy to understand, cheap in terms of performance hit, and effective. The only problem is… it would require redesigning the hardware (CPUs) to support it.
I wish it would happen, but I’m not holding my breath. Perhaps virtual machines could help until it happens, and even make it happen?
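The mechanism itself lives in hardware (a taint bit carried alongside each word of data), but the propagate-and-check rule is simple enough to mimic in software. Here is a toy Python sketch of what the policy amounts to; the class names and the jump check are my own illustration, not their design.

```python
class Tainted(Exception):
    """Raised when tainted (spurious) data would be used in a dangerous way."""

class Value:
    """A value carrying a one-bit taint flag, as the hardware would per word."""
    def __init__(self, v, tainted=False):
        self.v, self.tainted = v, tainted

    def __add__(self, other):
        # Taint propagates: the result is tainted if either operand is.
        return Value(self.v + other.v, self.tainted or other.tainted)

def read_untrusted_input(raw):
    # Anything arriving from I/O (network, files) is marked tainted.
    return Value(raw, tainted=True)

def jump(target):
    # The dangerous operation: using tainted data as a control-flow target.
    if target.tainted:
        raise Tainted("tainted value used as control-flow target")
    print(f"jumping to {target.v:#x}")

base = Value(0x400000)                # trusted code address
offset = read_untrusted_input(0x40)   # attacker-controlled data

jump(base)                            # fine
try:
    jump(base + offset)               # blocked: derived from untrusted input
except Tainted as e:
    print("blocked:", e)
```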
Kiriansky et al. (2002) wrote an interesting paper on what they call “program shepherding”. The basic idea is to control how the program counter changes and where it points. The PC should not point into data areas (this is similar in concept to non-executable stacks or memory pages). The PC should enter library code only through approved entry points. In principle, it could also enforce that the return target of a function is the instruction located right after the call.
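Here is a toy Python model of those three checks, just to make the policy concrete; the addresses, the monitor interface, and the fixed instruction length are my own simplifications, and the real system interposes on machine-code control transfers through a code cache rather than an explicit call like this.

```python
class ShepherdingViolation(Exception):
    pass

# Hypothetical memory map; real shepherding works on the actual machine code.
CODE = (0x400000, 0x40FFFF)               # original application code
LIBRARY = (0x7F0000, 0x7FFFFF)            # shared library code
LIB_ENTRY_POINTS = {0x7F0000, 0x7F0100}   # approved exported entry points
return_stack = []                         # expected return targets

def within(addr, region):
    lo, hi = region
    return lo <= addr <= hi

def check_transfer(kind, source, target, insn_len=5):
    """Validate one change of the program counter before letting it happen."""
    # 1. The PC may only point at code, never at data (heap, stack, ...).
    if not (within(target, CODE) or within(target, LIBRARY)):
        raise ShepherdingViolation(f"transfer into non-code area {target:#x}")
    # 2. Library code may only be entered through approved entry points.
    if (within(target, LIBRARY) and not within(source, LIBRARY)
            and target not in LIB_ENTRY_POINTS):
        raise ShepherdingViolation(f"library entered at {target:#x}, not an entry point")
    # 3. A return must go to the instruction right after the matching call.
    if kind == "call":
        return_stack.append(source + insn_len)
    elif kind == "return":
        expected = return_stack.pop()
        if target != expected:
            raise ShepherdingViolation(f"return to {target:#x}, expected {expected:#x}")

check_transfer("call", 0x400010, 0x7F0100)      # OK: approved library entry point
check_transfer("return", 0x7F0150, 0x400015)    # OK: returns right after the call
try:
    check_transfer("jump", 0x400020, 0xBEEF0000)  # jump into a data area
except ShepherdingViolation as e:
    print("blocked:", e)
```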
Their solution keeps track of “code origins”, which resembles multi-level taint tracking. The authors argue that this is better than execute flags on memory pages, because those could be “inadvertently or maliciously changed” (and code origins have three states instead of only two). I thought those flags were managed by the kernel and could not be changed from user space? If the kernel is compromised, then program shepherding will be compromised too. The mechanism tracking code origins relies heavily on write-protected memory pages, so the question that comes to mind is: why couldn’t those also be “inadvertently or maliciously changed”, if we have to worry about that for execute flags? I must be missing something.
The potential versatility of this technology is impressive, yet the authors test only one policy. Policies have to be written, tested, and approved; it is not clear to me why that particular policy was chosen or what compromises it implies.
The crux of the whole system is code interpretation, which, despite the use of advanced optimizations, slows down execution. It would be interesting to see how it would fare inside the framework of a virtual machine (e.g., VMware). Enterprises are already embracing VMware and other virtual machine solutions for their easier management of hardware, software, and disaster recovery. Since a price is already being paid for that sandboxing, adding this new sandboxing technology may not be so expensive after all. While it may not be as appealing as solutions requiring hardware support, it may be easier to deploy.
No, not our esteemed director of research: what turned off my ELISA project (Enterprise-Level Information Security Assurance) was lack of interest from the public at large. The idea for this web application was to keep track of patches, essentially supporting NIST’s recommendation that organizations use such a system to manage patches. I believe this indicates that the process was too heavy; people don’t want to spend that much effort and money managing patches.