
I don't know why they disabled ASLR, but safety-critical systems (and functional safety people) tend to avoid randomization...


Because such small embedded systems tend to avoid the stack and recursion, and moreover tend to disable malloc entirely.

Variables and their locations are predefined. ASLR is a problem there, not a solution.


Having code that behaves differently if it's loaded at different addresses seems like a bug. So by not doing that, aren't you just masking it?


Presume that you are a software engineer. Your career and other people's lives depend on your producing systems that operate safely. You also have to make risk analyses and meet performance goals.

Your operating system executes different program images for every successive execution of your program, picked in an unpredictable manner.

How do you prove that every possibility passes the safety tests? How do you measure the risk of this random selection? How do you know when you have done enough simulation?

How do you match up software randomization with the ISO 26262 concept that all software faults are systematic and not random as (some) hardware faults are?

How do you prove that memory allocation and execution always meet performance goals? How do you construct and perform reproducible performance tests? How do you demonstrate that your measurements are meaningful?

Software engineering in this case involves thinking about all of these questions and more besides.
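To make the reproducibility problem concrete, here is a minimal Python sketch (assuming a Linux machine with ASLR enabled) that spawns child processes and records the address of a freshly allocated buffer in each. Under ASLR these addresses typically differ from run to run, which is exactly what makes byte-for-byte reproducible measurements hard:

```python
import subprocess
import sys

def child_buffer_address() -> str:
    # Each child process allocates a small ctypes buffer and prints its
    # address. With ASLR enabled, heap placement varies between processes.
    code = ("import ctypes;"
            "buf = ctypes.create_string_buffer(16);"
            "print(hex(ctypes.addressof(buf)))")
    return subprocess.check_output([sys.executable, "-c", code]).decode().strip()

addresses = [child_buffer_address() for _ in range(5)]
print(addresses)  # with ASLR on, these usually differ between runs
```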

* https://hal.archives-ouvertes.fr/hal-01375451/document

* https://www.usenix.org/sites/default/files/conference/protec...

It appears (to me, at least) that the current state of the literature on ASLR treats it as a succession of theoretical arms races, with each new defence militating against each new attack, and almost no attention paid to the concerns of actually deploying it in a larger system; and the current state of the literature on functional safety is simply "we will assume that there are no randomization processes in the software" (from an actual paper presented at ESREL 2016).


> It appears (to me, at least) that the current state of the literature on ASLR treats it as a succession of theoretical arms races, with each new defence militating against each new attack, and almost no attention paid to the concerns of actually deploying it in a larger system; and the current state of the literature on functional safety is simply "we will assume that there are no randomization processes in the software" (from an actual paper presented at ESREL 2016).

Thanks for your explanation. To give a slightly different perspective on the quoted paragraph: mitigations such as ASLR etc. do not protect against security bugs, they just make them more "inconvenient" to exploit. So the "average script kiddie" will probably not be able to write an exploit for them. On the other hand, for well-funded agencies (think three-letter agencies), these are not serious hurdles. In this sense, mitigations do not improve security in the sense of "fewer security holes". Instead their (probably unintended, though not undesired) consequence is that mostly well-funded agencies are able to exploit security holes. Whether this new situation is good or bad for software security is up to the reader to think about.


To be more specific about "more inconvenient": I believe part of the intended effect of ASLR is to make ROP exploit attempts typically crash the process instead of successfully gaining control. This (ideally) brings admin attention to the system, which attackers generally want to avoid.


> To be more specific about "more inconvenient": I believe part of the intended effect of ASLR is to make ROP exploit attempts typically crash the process instead of successfully gaining control.

Keep in mind that before ASLR came, there was (and still is) DEP and its claims that lots of classes of attack were now impossible. The end of this story was that ROP was invented and hardly anything has changed, except that ROP code is much more tedious to write (i.e. no problem for well-funded attackers).

Now we have ASLR and you are probably right that now ROP exploits lead to process crashes instead. But attackers have already invented new techniques for circumventing ASLR, such as return-to-plt, GOT overwrite or GOT dereferencing. Again making it more inconvenient for script kiddies to write exploits, but again no problem for an attacker who can throw lots of money and people at the problem.


> But attackers have already invented new techniques for circumventing ASLR, such as return-to-plt, GOT overwrite or GOT dereferencing. Again making it more inconvenient for script kiddies to write exploits, but again no problem for an attacker who can throw lots of money and people at the problem.

Helmets and bulletproof vests are no match for powerful rifles.

I'm a bit tired of this reasoning here on HN: If it isn't perfect it is worthless.

I think I can see reasons why a vendor might want to avoid ASLR in safety critical systems.

But we shouldn't talk down decent protection techniques that will often save us.


> I'm a bit tired of this reasoning here on HN: If it isn't perfect it is worthless.

This argument (that "if it isn't perfect, it is worthless" does not hold) is suitable for many topics in life, but in my opinion not for IT security. I suspect this is one reason why so many people (explicitly including politicians) make such bad decisions about IT security.

I might be somewhat paranoid regarding this topic (which is not a bad trait if you want to work in this area), but let me give my arguments:

First: the fight for secure systems is deeply asymmetric. The attacker side just needs one working exploit, while the defender side has to ensure that there exists no security hole. This strong asymmetry really makes it necessary that the security is as perfect as possible.

Second: if the device is connected to the internet, everyone/every device that exists in the world can be an attacker. So what you are fighting against is the whole world. Or in other words: the security of the system that you use has to withstand the smartness of some of the smartest people in the world.

Let it be stated clearly that this fight is not as hopeless as it looks based on these arguments: for designing the security of your system, you can resort to the knowledge of many really, really smart people, too: this is what the various standards (e.g. for cryptography) are about. What you cannot afford is to tolerate the slightest bit of imperfection in the security architecture of the system.

TLDR: In security, "If it isn't at least nearly perfect, it is worthless" does indeed hold.


Cryptography isn't perfect; someone could always guess your private key. But that doesn't make it useless, since you're hoping that it's just sufficiently improbable that nobody in their right mind will even try doing it.
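To put "sufficiently improbable" into numbers, a back-of-the-envelope sketch for a 128-bit key (the 10^12 guesses-per-second rate is an assumed, deliberately generous attacker budget):

```python
# On average, a brute-force attacker must try half the keyspace
# before hitting the right 128-bit key.
expected_guesses = 2 ** 127
guess_rate = 10 ** 12                  # guesses per second (assumed)
seconds_per_year = 365 * 24 * 3600
years = expected_guesses // (guess_rate * seconds_per_year)
print(f"{years:.2e} years")            # on the order of 10**18 years
```

For comparison, the universe is around 1.4e10 years old, so "nobody in their right mind will even try" is an understatement.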


> Cryptography isn't perfect; someone could always guess your private key.

For the accepted standards, even the smartest people working in this area have not yet found a method to find the private key sufficiently fast (at least such a method has not been published). So to the best of our current knowledge, those methods are at least very near to the perfection that is possible with our current technology.


> Cryptography isn't perfect; someone could always guess your private key.

Cryptography is a branch of mathematics, and cryptographic systems can be formally proved to have certain properties, such as being unable to derive the private key from the content of the encrypted message. That the private key can be guessed is a trivial observation, and a bad argument for dismissing formal proofs. ASLR is a hack on a hack that does not tell you anything about the formal properties of the system.


> Cryptography is a branch of mathematics, and cryptographic systems can be formally proved to have certain properties, such as being unable to derive the private key from the content of the encrypted message.

A small correction: all those proofs (where they exist) are relative to complexity-theoretic conjectures that are (ideally) widely believed to be true, but open. The only system I am aware of with an "absolute" security proof is the one-time pad (OTP), but this is hardly suitable for use in practice.
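For reference, the one-time pad itself is trivially small to implement; a minimal sketch, assuming a truly random, never-reused key exactly as long as the message (which is precisely the impractical part):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key. The same function encrypts and
    # decrypts, because XOR is its own inverse.
    if len(key) != len(data):
        raise ValueError("key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # truly random, used only once
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message    # round trip recovers the message
```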


> Having code that behaves differently if it's loaded at different addresses seems like a bug.

Why? This only sounds like a bug to me if it is intended to be position-independent code (PIC).

A reason why ASLR is avoided in safety-critical code is that it introduces another source of non-determinism and potential bugs, which you want to avoid.

UPDATE: So you really want to keep the system as simple and small as possible, and avoid adding anything to it that can introduce new bugs.


At what point do you want general purpose code to be position dependent?


When you program for a platform where writing PIC involves ugly hacks with a measurable performance impact, for example i386.


It's not i386 that's the issue; it's the ABI.


Would you rather die to expose a masked error, or live on with the error still masked? That's what rules for critical systems are about. The time to fail fast is before production.



