
I also got the impression when reading the article that a full-blown high-level-language JIT is overkill for a kernel. A lighter template instantiation approach is much better. The key is deriving the template fragments, constant patch tables, etc. from the fully featured source code. Doing this fully automatically seems computationally hard and likely requires finely tuned heuristics to figure out which sections can be made optional depending on parameters. Humans could annotate the code, but that essentially creates a new and quite weird programming language. The compiler would effectively consider all possible combinations of compile-time configuration switches at once and emit instructions on how to instantiate a given configuration from machine code templates.
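To make that slightly more concrete, here is a minimal sketch of what instantiating one such machine code template could look like. It assumes Linux on x86-64, a mapping that permits PROT_EXEC, and a hand-written template and patch table (which a real compiler would derive automatically); every name here is made up for illustration, and casting a data pointer to a function pointer is a POSIX-ism rather than strict ISO C:

    #define _DEFAULT_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Template for `int f(void) { return <CONST>; }`: mov eax, imm32; ret */
    static const uint8_t template_code[] = { 0xB8, 0, 0, 0, 0, 0xC3 };

    /* A one-entry "patch table": where in the template the 32-bit constant goes. */
    static const size_t patch_offsets[] = { 1 };

    typedef int (*const_fn)(void);

    static const_fn instantiate(int32_t value)
    {
        uint8_t *buf = mmap(NULL, sizeof template_code,
                            PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return NULL;
        memcpy(buf, template_code, sizeof template_code);
        memcpy(buf + patch_offsets[0], &value, sizeof value);  /* patch the immediate */
        return (const_fn)buf;   /* object-to-function pointer cast: POSIX assumption */
    }

    int main(void)
    {
        const_fn f = instantiate(42);
        if (f)
            printf("%d\n", f());   /* prints 42 */
        return 0;
    }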

All of this has devils and monsters lurking in the details, of course. I have seen systems where this kind of runtime code specialization would indeed be helpful even in user space. There is a lot of code where the innermost loop has conditionals based on some user input or program setting that cannot be determined at compile time. Code like this could profit a bunch from runtime specialization, too. Mapped into a programming language, this would probably look something like generics, but with parameters that need to be passed to a constructor at runtime to get something executable.
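In plain C you can approximate that "runtime constructor" idea without any code generation at all, by selecting among pre-built specialized variants. A rough sketch with made-up names, where the clamp flag stands in for the user setting that is only known at runtime:

    #include <stddef.h>

    /* Unspecialized: the flag is re-tested on every iteration. */
    void scale_generic(float *v, size_t n, float k, int clamp)
    {
        for (size_t i = 0; i < n; i++) {
            v[i] *= k;
            if (clamp && v[i] > 1.0f)
                v[i] = 1.0f;
        }
    }

    /* Specialized variants with the conditional resolved ahead of the loop. */
    static void scale_plain(float *v, size_t n, float k)
    {
        for (size_t i = 0; i < n; i++)
            v[i] *= k;
    }

    static void scale_clamped(float *v, size_t n, float k)
    {
        for (size_t i = 0; i < n; i++) {
            v[i] *= k;
            if (v[i] > 1.0f)
                v[i] = 1.0f;
        }
    }

    typedef void (*scale_fn)(float *, size_t, float);

    /* The runtime "constructor": pass the setting once, get something executable. */
    scale_fn make_scaler(int clamp)
    {
        return clamp ? scale_clamped : scale_plain;
    }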



I don't know if it's as hard a problem as you suggest; it's pretty close to what a linker does, really. Have you looked at `C?


The problem lies in making things work with optimizations. For example, how do you schedule instructions and allocate registers optimally when you don't know the exact sequence of instructions that will be executed? Not getting that right costs you a bunch of performance. Then there are things like variants of the code that might be vectorized under certain conditions. You would need to find them in the compiler and be able to emit the proper alternative code templates, plus a way for the runtime to pick them up when appropriate. A full JIT could do all that at runtime, but that is also expensive to run and likely too big and buggy to exist in a stable and safe operating system kernel.
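A rough sketch of the "alternative templates" part, with the scalar and vectorized bodies written by hand here (a compiler-driven scheme would emit both from one source function and attach the selection condition as metadata). This assumes an x86 target with SSE, and note that the vector variant changes the summation order:

    #include <stddef.h>
    #include <immintrin.h>

    static float sum_scalar(const float *v, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    static float sum_sse(const float *v, size_t n)   /* precondition: n % 4 == 0 */
    {
        __m128 acc = _mm_setzero_ps();
        for (size_t i = 0; i < n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }

    typedef float (*sum_fn)(const float *, size_t);

    /* Runtime picks the variant whose precondition holds. */
    sum_fn pick_sum(size_t n)
    {
        return (n % 4 == 0) ? sum_sse : sum_scalar;
    }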


It gets harder when you want to do link-time optimization. Without it, the approach is inferior to ahead-of-time optimization with some programmer input... (degeneralization and devirtualization rather than specialization.)


It's fine if it's inferior to ahead-of-time optimization with programmer input, because you don't have time for a programmer to optimize the kernel code anew every time you spawn a process or open a pipe or a socket. It just needs to be faster than what the kernel is doing now when you call read() or write() or whatever, without being too much slower when you open the file (etc.)

Like, ahead-of-time optimization with programmer input might take a few hours to a few days, and that's not really an acceptable execution time for the open() system call.
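Sketching that cost split (all types and names below are hypothetical, not actual kernel interfaces): the expensive decision-making happens once on the open() path, and the read() path only does an indirect call into the result:

    #include <stddef.h>
    #include <sys/types.h>

    struct my_file;
    typedef ssize_t (*read_fn)(struct my_file *, void *, size_t);

    struct my_file {
        int      backing_kind;   /* e.g. 0 = pipe, 1 = socket */
        read_fn  fast_read;      /* specialized once at open time */
    };

    static ssize_t read_pipe(struct my_file *f, void *buf, size_t len)
    {
        (void)f; (void)buf; (void)len;   /* stub body for the sketch */
        return 0;
    }

    static ssize_t read_socket(struct my_file *f, void *buf, size_t len)
    {
        (void)f; (void)buf; (void)len;   /* stub body for the sketch */
        return 0;
    }

    /* "open": slower, one-time specialization work is acceptable here. */
    void specialize_on_open(struct my_file *f)
    {
        f->fast_read = (f->backing_kind == 0) ? read_pipe : read_socket;
    }

    /* "read": no re-decision, just call the specialized path. */
    ssize_t do_read(struct my_file *f, void *buf, size_t len)
    {
        return f->fast_read(f, buf, len);
    }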



