
Dynamic memory allocation is very much frowned upon in embedded systems, not just for strings.

Everything should be static and deterministic at all times. This is the easiest/only way to ensure you have no resource issues.

You should always (statically) allocate for the maximum/worst case... because you have analysed your worst case, haven't you?
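
For example, a minimal C sketch of "allocate for the worst case up front" (the request count and size here are invented numbers, purely for illustration):

    #include <string.h>

    /* Worst case defined up front: the (hypothetical) spec says we never
       see more than 16 pending requests of at most 128 bytes each. */
    #define MAX_REQUESTS     16
    #define MAX_REQUEST_SIZE 128

    /* Statically allocated for the worst case: no malloc, no fragmentation,
       and the linker tells you at build time whether it fits in RAM. */
    static unsigned char request_pool[MAX_REQUESTS][MAX_REQUEST_SIZE];
    static unsigned int  request_count;

    int request_push(const unsigned char *data, unsigned int len)
    {
        if (request_count >= MAX_REQUESTS || len > MAX_REQUEST_SIZE)
            return -1;   /* the one, well-understood failure point */
        memcpy(request_pool[request_count], data, len);
        request_count++;
        return 0;
    }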



That rhetorical question touches on a very important point, and I agree 100%. Dynamic memory allocation allows programmers to absolve themselves of considering worst-case (memory usage) scenarios. As long as they handle allocation failures, their program should function predictably under all memory circumstances (in reality, we know this isn’t always the case, especially when the failure occurs deep in the call stack).

The problem is predicting the memory circumstance itself. The way your program behaves is essentially tied to something external, which makes it that much harder to predict the overall behavior. This scheme makes sense in environments where you really have no idea how much memory will be available to you, such as a conventional PC program, but not on a system that has a single dedicated purpose.

One could make the argument that if the program gracefully handles an allocation failure, then there should be no problem. My counterargument would be that in many embedded systems, it’s better to have a predictable but low failure threshold than a potentially higher but unpredictable one.
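
To make the "failure deep in the call stack" point concrete, here is a contrived C sketch (the call chain and function names are made up):

    #include <stdlib.h>

    /* The allocation failure happens two levels down; every caller has to
       notice it and unwind cleanly for the program to stay predictable. */
    static int parse_record(size_t len)
    {
        char *buf = malloc(len);   /* success depends on what the rest of
                                      the system has already allocated */
        if (buf == NULL)
            return -1;
        /* ... parse into buf ... */
        free(buf);
        return 0;
    }

    static int handle_message(size_t len)
    {
        return parse_record(len);  /* must propagate the failure */
    }

    int process_input(size_t len)
    {
        /* Whether this succeeds is tied to external state (total heap usage,
           fragmentation), not just to this module's own inputs. */
        return handle_message(len);
    }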

Another analogy: I would rather have a program with a 100% chance of hitting a bug under a well-understood circumstance than a program with a 0.00001% chance of hitting a bug under an unpredictable/unknown circumstance.

In an embedded environment, you have complete control over the system and all aspects of it (or at least you should).

You often need to understand the behavior of your system under all circumstances, and that becomes _much_ easier to do when you operate with fixed size data structures because you now know exactly when you will run out of memory.

Consider a hypothetical embedded system where you’re creating a sub-module which must handle external events by means of a message queue. If you know that you can only have three possible events, you can statically allocate your message queue to be 3 deep to ensure that you will never drop an event. If you were to use dynamic memory allocation, you can’t make that guarantee because you don’t know what the other components of the system are doing (and how much memory they’re allocating). Even if there are no other allocations taking place, you still can’t guarantee that yours will succeed due to the possibility of fragmentation.

Statically allocating your buffers ensures that if the program can be loaded in the first place, you can predict with 100% certainty that your program will be able to handle those 3 events.
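
A sketch of that 3-deep queue as a statically allocated ring buffer (the event type and names are invented for the example):

    #include <stdbool.h>
    #include <stdint.h>

    #define QUEUE_DEPTH 3   /* one slot per possible outstanding event */

    typedef struct {
        uint8_t  event_id;
        uint32_t payload;
    } event_t;

    /* The whole queue lives in static storage: if the image links and
       loads, these 3 slots are guaranteed to exist. */
    static event_t queue[QUEUE_DEPTH];
    static unsigned head, tail, count;

    bool queue_push(event_t e)
    {
        if (count == QUEUE_DEPTH)
            return false;   /* only possible if the 3-event assumption was wrong */
        queue[tail] = e;
        tail = (tail + 1) % QUEUE_DEPTH;
        count++;
        return true;
    }

    bool queue_pop(event_t *out)
    {
        if (count == 0)
            return false;
        *out = queue[head];
        head = (head + 1) % QUEUE_DEPTH;
        count--;
        return true;
    }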


True, embedded systems (and in theory server s/w) are different in a number of respects from your typical desktop-level s/w.

Embedded systems typically are not 'manned'; they have to handle all issues themselves and continue providing service. Note that this does not mean "no resets".

Resets should be designed for because they will happen.

All effort should be made to make errors deterministic as (like you say) they will become a real time-suck.

IMO, servers should also be considered embedded systems and designed like this, but unfortunately the culture around server software is one of very dynamic, very resource-heavy, very inefficient, non-deterministic s/w.


How do you analyze the worst case? Don't you need to know what calls what, up to what depth? And isn't that dynamic by nature?


It all comes from the defined requirements and specifications.

i.e. "You shall handle x messages in y milliseconds."

From that, you derive your worst-case buffer size, given that you can service that buffer at most every 'z' milliseconds (note that this implies a hard-real-time requirement, as it is a bounded maximum time).
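
As a made-up concrete instance of that arithmetic: if the spec says at most x = 50 messages per y = 10 ms window and the consumer runs at least every z = 4 ms, then (assuming arrivals are spread evenly; a fully bursty source would need the full x) the worst case between two service points is ceil(50 * 4 / 10) = 20 messages, which can be baked in at compile time:

    /* Hypothetical numbers, purely to show the derivation. */
    #define MAX_MSGS_PER_WINDOW  50   /* x: messages per window       */
    #define WINDOW_MS            10   /* y: window length             */
    #define SERVICE_PERIOD_MS     4   /* z: max gap between services  */

    /* Worst-case arrivals between two consecutive service points,
       rounded up: ceil(x * z / y). */
    #define QUEUE_DEPTH \
        ((MAX_MSGS_PER_WINDOW * SERVICE_PERIOD_MS + WINDOW_MS - 1) / WINDOW_MS)

    static struct msg { unsigned char data[32]; } msg_queue[QUEUE_DEPTH];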


As said by TickleSteve, you define the worst case rather than analyze it.

As for the stack usage requirements, it seems like this could be determined statically by some parametric process, but I’m no expert on this.

Does anyone see a reason why there couldn’t be some algorithm to statically analyze code and derive the worst-case stack usage?

For example, take every function and assume that every variable declaration will be required. Add them up. Then, follow every path down the call tree while adding up the required stack for each call.
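
A toy version of that walk in C, assuming the call graph is known, acyclic (no recursion) and free of calls through function pointers; the function names and frame sizes are invented:

    #include <stdio.h>

    #define MAX_CALLEES 4

    struct func {
        const char *name;
        unsigned    frame_bytes;            /* locals + saved registers     */
        int         callees[MAX_CALLEES];   /* indices into graph, -1 = end */
    };

    /* Tiny invented call graph: main -> handler -> { parse, log } */
    static const struct func graph[] = {
        { "main",    64, { 1, -1 } },
        { "handler", 96, { 2, 3, -1 } },
        { "parse",  128, { -1 } },
        { "log",     32, { -1 } },
    };

    /* Worst-case stack below function i: its own frame plus the deepest
       of its callees (assumes no recursion, so this terminates). */
    static unsigned worst_stack(int i)
    {
        unsigned deepest = 0;
        for (int c = 0; c < MAX_CALLEES && graph[i].callees[c] >= 0; c++) {
            unsigned d = worst_stack(graph[i].callees[c]);
            if (d > deepest)
                deepest = d;
        }
        return graph[i].frame_bytes + deepest;
    }

    int main(void)
    {
        printf("worst-case stack: %u bytes\n", worst_stack(0));  /* 288 */
        return 0;
    }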


...in a typical GCC-based system, you can, for example, use "-fstack-usage" and "-fcallgraph-info" to determine a worst case.

(Though that takes a bit of analysis; there are tools around that can automate this.)



