Beat SMEP on Linux with Return-Oriented Programming (tuxfamily.org)
35 points by coreyrecvlohe on Nov 15, 2011 | hide | past | favorite | 12 comments


A good paper came out a few years ago that defeats ROP by ensuring that returns, calls, and jumps are only taken if the function was entered at its entry point, and by rearranging register allocation when the compiler would otherwise emit byte sequences that decode at unaligned offsets into usable ret/jmp instructions. They say the performance penalty isn't huge, but I guess it must be enough to matter, since people aren't implementing it.

[1] G-Free: Defeating Return-Oriented Programming through Gadget-less Binaries (http://iseclab.org/papers/gfree.pdf)
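The "unaligned instructions" point is worth spelling out: on x86, opcode bytes like 0xc3 (`ret`) can appear inside the encoding of a perfectly ordinary instruction, and an attacker can jump into the middle of it. A minimal illustrative sketch (my own example, not from the paper):

```python
# `mov eax, 0x000000c3` assembles to b8 c3 00 00 00 on x86; the
# immediate operand happens to contain the single-byte `ret` opcode.
code = bytes([0xB8, 0xC3, 0x00, 0x00, 0x00])

def unintended_rets(code: bytes) -> list[int]:
    """Offsets where a 0xc3 (ret) byte appears anywhere in the stream."""
    return [i for i, b in enumerate(code) if b == 0xC3]

print(unintended_rets(code))  # the ret byte hides at offset 1
```

Jumping to offset 1 of that `mov` executes a `ret` the compiler never emitted, which is exactly the kind of byte sequence G-Free's register/instruction rewriting tries to eliminate.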


> They say the performance penalty isn't huge, but I guess it must be enough if people aren't implementing it.

There are other, social reasons why it may not be used. Academics are usually evaluated on published papers, so papers are often their end goal. Their code is usually proof-of-concept quality, not production quality. So it's not realistic to take code written by academics and directly integrate it into real software stacks. And the people who are in a position to integrate new features into open-source software stacks often have dozens of other features that are also important, so why work on this one? There really needs to be a champion for the idea within the project's existing community.

This work also has the secondary problem of cross-cutting concerns: in order to provide better security in the kernel, they're modifying the assembler. So now you need a champion who works on the assembler but cares a lot about kernel security.

Good paper, by the way. I've only read the intro, but I'm going to read the rest later. It looks to have a good primer on ROP.


I think that Shacham paper is the best primer on ROP of all the papers I've read, including the one mentioned by the parent commenter:

http://cseweb.ucsd.edu/~hovav/dist/geometry.pdf


I agree. This is the paper that started it all.


Well. It's the paper that made the topic break out.


Not true. It just took credit for it, and somehow managed to inject a new buzzword into circulation.

http://www.suse.de/~krahmer/no-nx.pdf is from 2005.

http://www.comms.engg.susx.ac.uk/fft/security/solaris_non_ex... is from 1999.


It is true that code-reuse attacks have been around for some time, but Shacham's paper actually showed that, among other things, you can perform arbitrary (Turing-complete) computations with this approach. I think that alone is a good contribution.
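To make the "computation without injected code" point concrete: a ROP payload is nothing but a sequence of addresses laid out where saved return addresses live, so each `ret` transfers control to the next gadget. A sketch of how such a chain is packed, using hypothetical gadget addresses (everything here is an assumption for illustration; real chains use addresses found in the target's executable mappings):

```python
import struct

# Hypothetical addresses, for illustration only:
POP_RDI_RET = 0x4011A3   # gadget: pop rdi; ret   (assumed)
SYSTEM_PLT  = 0x401050   # system@plt             (assumed)
BINSH_STR   = 0x402010   # address of "/bin/sh"   (assumed)

# On x86-64, each stack slot is an 8-byte little-endian value; each
# `ret` pops the next slot into rip, chaining the gadgets together.
chain = struct.pack("<3Q", POP_RDI_RET, BINSH_STR, SYSTEM_PLT)

print(len(chain))  # 24 bytes: three stack slots
```

Since gadgets can also do arithmetic, loads, stores, and conditional control flow, chains like this compose into arbitrary computation, which is the Turing-completeness result.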


I am the author of the paper on G-Free.

G-Free does not modify the assembler, or any other component on a system. It is a completely independent layer between the compiler and the assembler. And its performance overhead is indeed surprisingly low.

Even so, there are numerous reasons I can think of why it won't immediately be included in a production environment (as with most cutting-edge research).

The first reason is that, by definition, G-Free is a compiler-based solution: you need the source code in order to build gadget-free software. This may not be the best fit for an already established production system. However, I can imagine this system being adapted into a binary loader, so that it does its trick when loading a binary into memory.

Then you have the issue of compiler verification. You are right in saying that research-quality code does not equal production quality. We do have a prototype implementation that works nicely, as described in the paper. But who am I to say it is perfect? Well, we compiled a full system with it, and have been running it without problems since then. But you never have that assurance unless you do formal verification. I would not compile a critical piece of software (something running on an aircraft, for example) with just any implementation, without solid (i.e., formal) proof that the implementation is correct.


Calling G-Free an independent layer between the compiler and the assembler is a semantic distinction, not an implementation distinction. The step needs to live in either the compiler (as a post-processing phase) or the assembler (as a pre-processing phase) to be automatically integrated into the software stack.


Using your terms, it is an implementation distinction as well. You install G-Free, call 'make' to build any software, and it comes out gadget-free. You delete G-Free, call make, and it comes out with gadgets. With our prototype implementation for Linux and GCC, you don't need to touch gcc, gas, the linker, or anything else. This is explained in the paper.

If you have a build system where the assembler and the compiler are bundled as a monolithic binary executable, then your argument may or may not hold.

EDIT: Just to clarify, 'make' was just an example. You can also directly call "gcc program.c" or "as anotherprogram.S" and G-Free does its job.
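One common way to interpose between the compiler and the assembler without touching either (my own sketch of the general technique, not G-Free's actual implementation; all names and paths here are assumptions) is a PATH shim: a script named `as` that rewrites the generated assembly and then hands off to the real assembler.

```python
#!/usr/bin/env python3
# Sketch of compiler/assembler interposition: install this script as
# `as` early in $PATH; gcc will invoke it instead of the real assembler.
import os
import subprocess

REAL_AS = "/usr/bin/as"  # assumed location of the real assembler

def rewrite(asm_text: str) -> str:
    # Placeholder for a gadget-elimination pass; a real tool would
    # transform instructions whose encodings contain free-branch bytes.
    return asm_text

def main(argv: list[str]) -> int:
    # Rewrite any assembly files named on the command line in place,
    # then delegate to the real assembler with the same arguments.
    for path in argv:
        if path.endswith(".s") and os.path.exists(path):
            with open(path) as f:
                src = f.read()
            with open(path, "w") as f:
                f.write(rewrite(src))
    return subprocess.call([REAL_AS] + argv)

# To use: save as `as` on $PATH ahead of /usr/bin, mark it executable,
# then build as usual with `gcc program.c`.
```

Installing or deleting the shim toggles the rewriting, which matches the "install G-Free, call make" behavior described above.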


That being said, I must say G-Free is practical by design, i.e., it can realistically be implemented and used in a production environment, as opposed to many defense solutions (for ROP or other memory-corruption attacks) that are just proofs of concept.


"kernel symbols hiding" ... I always find that to be funny. Once your code is in kernel space, searching for kernel symbols by name is easy. Proof is in the program I wrote called virt-dmesg which uses heuristics to search for the main symbol table, and also for kallsyms if available

http://people.redhat.com/~rjones/virt-dmesg/
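The basic idea behind such heuristics (an illustrative sketch of my own, not virt-dmesg's actual algorithm) is that symbol tables are full of NUL-terminated identifier-like strings, which stand out in a raw memory image even when the table itself is "hidden":

```python
import re

# Heuristic: NUL-terminated strings of identifier characters, at least
# 4 chars long, are candidate kernel symbol names in a memory image.
SYM_RE = re.compile(rb"[A-Za-z_][A-Za-z0-9_]{3,63}\x00")

def candidate_symbols(image: bytes) -> list[str]:
    return [m.group()[:-1].decode() for m in SYM_RE.finditer(image)]

# Toy "memory dump" with two symbol names embedded in binary noise:
dump = b"\x00\x01linux_banner\x00\xffkallsyms_lookup_name\x00\x02"
print(candidate_symbols(dump))
```

A real tool would then cross-check the candidates against nearby address arrays to locate the actual symbol table, but even this crude scan shows why hiding symbol names from in-kernel code is a losing game.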



