If I comment out sections of code while debugging or iterating, I don't want a compile error for some unused variable or argument. A warning, fine, but this happens to me so frequently that the idea of unused variables being an error is insane to me.
It is insane and you are completely right. Commenting out code has been part of programming for over 50 years. Unfortunately you aren't going to get anywhere with Zig zealots; they just get mad when confronted with things like this that have no justification, but they don't want to admit it's a mistake.
I think the plan is to make no distinction between errors and warnings, but have trivial errors still build. That said, I wouldn't be surprised if they push that to the end, because it seems like a great ultrafilter for keeping annoying people out so they don't try to influence the language.
They also made a carriage return crash the compiler, so it wouldn't work with any default text files on Windows, and then they blamed the users for using Windows (and their Windows version of the compiler!).
It's not exactly logic land; there is a lot of dogma and ideology instead of pragmatism.
Some people would even reply that they were glad it made life difficult for Windows users. I don't think they had an answer for why there was a Windows version in the first place.
I'm not sure why you wouldn't make your compiler accept CRs (a weird design decision), but fixing it on the user side isn't exactly hard either. I don't know of an editor that doesn't have an option for LF vs CRLF.
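Even without an editor setting, the user-side fix is a one-liner in most scripting languages. A minimal Python sketch (the file name and contents are hypothetical, standing in for any CRLF-saved source file):

```python
from pathlib import Path

# Hypothetical example: a file saved by a Windows editor with CRLF endings.
path = Path("main.zig")
path.write_bytes(b'const std = @import("std");\r\n')

# The user-side fix: rewrite the file with LF-only line endings.
path.write_bytes(path.read_bytes().replace(b"\r\n", b"\n"))
```

Working on bytes rather than text avoids Python's own newline translation getting in the way.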
The unused variable error is legitimately really annoying though, and has me inserting `_ = x;` all over the place and then forgetting to delete it, which is IMO way worse than just... having it be a warning.
> I don't know an editor that doesn't have an option for using LF vs CRLF.
And I don't know of any other language whose compiler can't parse a carriage return.
The point is that it was intentionally done to antagonize Windows users, even though they put out a Windows version. Some people defend this by saying it's easy to work around; others defend it by saying Windows users should be antagonized.
No Zig people ever said this was a mistake; it was all intentional.
I'm never going to put up with behavior like that, with the people making my tools actively working against me.
I think the point was that people care about PPD (pixels per degree), not PPI. 218 PPI would be too low if the screen is 1 inch from your eye, or needlessly high if it's 100 inches from your eye.
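The conversion is straightforward trigonometry: one degree of visual angle at viewing distance d spans 2·d·tan(0.5°) inches, so PPD = PPI × 2·d·tan(0.5°). A small sketch (the viewing distances are hypothetical examples, not from the original comment):

```python
import math

def pixels_per_degree(ppi: float, distance_in: float) -> float:
    """Pixels covered by one degree of visual angle at the given
    viewing distance. One degree at distance d spans 2*d*tan(0.5 deg)."""
    return ppi * 2 * distance_in * math.tan(math.radians(0.5))

# The same 218 PPI panel at two hypothetical viewing distances:
print(round(pixels_per_degree(218, 1), 1))   # 1 inch away  -> 3.8 PPD
print(round(pixels_per_degree(218, 18), 1))  # 18 inches away -> 68.5 PPD
```

So the identical panel is far below typical visual acuity at 1 inch and comfortably above it at a normal laptop distance, which is exactly why PPI alone is meaningless without distance.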
I also recently went through the specs and noticed "with sparsity", but I didn't quite understand what it specifically refers to. The premise is that a lot of weights in matmul operations will be zero or insignificant (i.e., sparse matrices), and in that case the A100/H100 has circuitry that can boost throughput up to 2x, essentially "skipping" half of the FLOPS as you say.
I am not an expert in LLMs, but I don't think you end up with a significant share of zeroed weights (~50%) in a converged network, so I think it is safe to say that for 99% of cases the realistic throughput is ~800 TFLOPS and not the ~1600 TFLOPS advertised.
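For what it's worth, as I understand it the asterisk refers specifically to 2:4 structured sparsity: in every group of 4 consecutive weights, at most 2 may be nonzero, and the sparse tensor cores skip the zeros. A pure-Python sketch of the pruning pattern (illustrative only, not how real pruning frameworks do it; the example weights are made up):

```python
def prune_2_of_4(weights):
    """Zero the 2 smallest-magnitude weights in each group of 4,
    producing the 2:4 structured-sparse pattern the hardware expects."""
    out = list(weights)
    for i in range(0, len(out) - len(out) % 4, 4):
        group = range(i, i + 4)
        # indices of the two smallest magnitudes in this group of 4
        drop = sorted(group, key=lambda j: abs(out[j]))[:2]
        for j in drop:
            out[j] = 0.0
    return out

w = [0.9, -0.1, 0.05, 0.7, -0.3, 0.2, 0.8, -0.01]
print(prune_2_of_4(w))  # exactly half the weights become zero
```

Which is the catch: to get the advertised 2x you have to train or fine-tune the network to tolerate exactly this 50% structured zero pattern, so a generic dense model sees the unstarred number.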
There are two populations of people reading the NVIDIA specs (and now you've switched groups). If NVIDIA ever changes their marketing strategy and the asterisk comes to denote something else, there might be a third population, because I know a lot of people who, I suspect, will keep dividing those starred FLOPS by two :-)