That is similar to the signal processing application I worked on a couple of years ago. It was a real-time application with fixed buffer sizes at each stage of the algorithm, so we knew the maximum memory usage at initialization. We therefore allocated all memory up front and continuously reused the buffers until shutdown. Under the operational architecture we were moving to when I left, our application never actually called malloc at all: the control program that ran the signal processor (and that started us) malloc'ed massive segments of memory on each machine and handed them to us, and we then "allocated" our own buffers out of those segments.