Hacker News

> Silently fail and generate impossible to predict code is a third model that is only of use to compiler writers. Hiding behind the spec benefits no actual user.

A significant issue is that compiler "optimizations" aren't gaining a lot of general benefit anymore, and yet they are imposing a very significant cost on many people.

Lots of people are still working on C/C++ compiler optimizations, but nobody is asking anymore whether that work is worthwhile to end users.

Data suggests that it is not.



What data?


TFA? Quoting:

    Compiler writers measure an "optimization" as successful if they can find any example where the "optimization" saves time. Does this matter for the overall user experience? The typical debate runs as follows:

    In 2000, Todd A. Proebsting introduced "Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years" (emphasis in original) and concluded that "compiler optimization work makes only marginal contributions". Proebsting commented later that "The law probably would have gone unnoticed had it not been for the protests by those receiving funds to do compiler optimization research."

    Arseny Kapoulkine ran various benchmarks in 2022 and concluded that the gains were even smaller: "LLVM 11 tends to take 2x longer to compile code with optimizations, and as a result produces code that runs 10-20% faster (with occasional outliers in either direction), compared to LLVM 2.7 which is more than 10 years old."

    Compiler writers typically respond with arguments like this: "10-20% is gazillions of dollars of computer time saved! What a triumph from a decade of work!"

We are spinning the compilers much harder and imposing changes on end programmers for roughly 10-20% over a decade. That's not a lot of gain in return for the pain being caused.

I suspect most programmers would happily give up 10% performance on their final program if they could halve their compile times.


> We are spinning the compilers much harder and imposing changes on end programmers for roughly 10-20% over a decade. That's not a lot of gain in return for the pain being caused.

> I suspect most programmers would happily give up 10% performance on their final program if they could halve their compile times.

10% at FAANG scale is around a billion dollars per year. There's a reason why FAANG continues to be the largest contributor by far to LLVM and GCC, and it's not because they're full of compiler engineers implementing optimizations for the fun of it.


> There's a reason why FAANG continues to be the largest contributor by far to LLVM and GCC, and it's not because they're full of compiler engineers implementing optimizations for the fun of it.

And yet, Google uses Go, which is glop for performance (Google even withdrew a bunch of people from the C/C++ working groups). Apple funded Clang so they could get around GCC's GPL, and mostly cares about LLVM rather than Clang. Amazon doesn't care much, as their customers pay for the CPU time.

So, yeah, Facebook cares about performance and ... that's about it. Dunno about Netflix who are probably more concerned about bandwidth.


Half of what? I'm not overly concerned about how long a prod build & deploy takes if it's automated. A 10-minute build instead of a 5-minute one for a 10% perf gain is probably worth it, and it becomes more worth it as you scale up: you only need to build once, then you can copy the binary to many machines that all benefit.


Can't you give it a different -O level?

-O0 gives you what you are after.


You would be very wrong on that last point.


Fun fact: you and GP are both right. The goals of the 'local' build a programmer does to check what he wrote are at odds with the goals of the 'build farm' build meant for the end user. The former should be optimized to reduce build time, the latter to reduce run time. In gamedev we separate them as different build configurations.


Right, and if anything, compilers are conservative about the optimization parameters they enable for release builds (i.e. with -O2/-O3). For most kinds of software, even a 10x further increase in compile times could make sense if it meant a couple of percent faster software.



