Hacker News | nulltype's comments

It’s hard to say that one needs unused variables.


If I comment out sections of code while debugging or iterating, I don't want a compile error for some unused variable or argument. A warning is fine, but this happens to me so frequently that the idea of unused variables being an error is insane to me.


It is insane and you are completely right. This has been a part of programming for over 50 years. Unfortunately you aren't going to get anywhere with Zig zealots; they just get mad when confronted with things like this that have no justification, but they don't want to admit it's a mistake.


But even the solution would be trivial: have a separate 'prod' compiler flag. With the flag, make these errors; without it, make them warnings.

Problem solved, everyone happy.


I think the plan is to make no distinction between error and warning, but have trivial errors still build. That said, I wouldn't be surprised if they push that to the end, because it seems like a great ultrafilter for keeping annoying people out so they don't try to influence the language.


You are right of course, the solution is trivial.

They also made a carriage return crash the compiler, so it wouldn't work with any default text files on Windows; then they blamed the users for using Windows (and their Windows version of the compiler!).

It's not exactly logic land, there is a lot of dogma and ideology instead of pragmatism.

Some people would even reply about how they were glad it made life difficult for Windows users. I don't think they had an answer for why there was a Windows version in the first place.


I'm not sure why you wouldn't make your compiler accept CRs (weird design decision), but fixing it on the user side isn't exactly hard either. I don't know of an editor that doesn't have an option for using LF vs CRLF.

The unused variable warning is legitimately really annoying though and has me inserting `_ = x;` all over the place and then forgetting to delete it, which is imo way worse than just... having it be a warning.


> I don't know an editor that doesn't have an option for using LF vs CRLF.

And I don't know of any other language that doesn't accept a carriage return.

The point is that it was intentionally done to antagonize Windows users, even though they put out a Windows version. Some people defend this by saying that it's easy to turn off; some defend it by saying Windows users should be antagonized.

No Zig people ever said this was a mistake; it was all intentional.

I'm never going to put up with behavior like that, with the people making my tools actively working against me.


> And I don't know any other languages that don't parse a carriage return.

fair enough.


“The bugs are real. The math is not. All estimates are made up. Your frustration, however, is valid.”

I’m pretty sure it’s way less than 2%, but I definitely notice running into the same bugs many times.


I think the point was that people care about ppd, not ppi. 218 ppi would be too low if the screen is 1 inch from your eye or too high if it’s 100 inches from your eye.

Retina probably means 60 ppd.
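The ppi-to-ppd conversion only depends on viewing distance, so the same panel can be overkill or inadequate. A rough sketch using the 218 ppi figure from the thread (the distances are illustrative, and this uses the exact small-angle geometry, not an approximation):

```python
import math

def ppd(ppi: float, distance_in: float) -> float:
    """Pixels per degree of visual angle for a display of `ppi`
    viewed from `distance_in` inches. One degree of visual angle
    subtends 2 * d * tan(0.5 deg) inches on the screen surface."""
    return ppi * 2 * distance_in * math.tan(math.radians(0.5))

# Same density, wildly different angular resolution:
print(ppd(218, 1))    # held ~1 inch from the eye: far below 60 ppd
print(ppd(218, 20))   # a typical desktop viewing distance
print(ppd(218, 100))  # ~100 inches away: far more than 60 ppd
```

This is why ppd, not ppi, is the number that tracks "Retina"-ness.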


Sure, but I can’t see myself sitting significantly further away from any desktop monitor than I do now.


Isn’t it somewhat common to say something like “slow this down by a factor of 2”?



> What do you mean 10 years?

Didn’t the DGX-1 come out 9 years ago?


Where do they use quotes for the official drivers?


Which H100 and how much over 1500 TFLOP/s?

The datasheet for the H100 SXM seems to indicate that it can only do ~1000 TFLOP/s peak.


I just went to Nvidia’s site and downloaded the data sheet: https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor.... It says 1600/1900 in half precision?


Read the fine print: "With sparsity". They double the claimed throughput by assuming that half of the FLOPs can be skipped.


I also recently went through the specs and noticed "with sparsity", but I didn't quite understand what it specifically refers to. The premise is that a lot of weights in matmul operations will be zero or insignificant (i.e., sparse matrices), and in that case the A100/H100 has circuitry that can boost throughput up to 2x, essentially "skipping" half of the FLOPs, as you say.

I am not an expert in LLMs, but I don't think you end up with a significant fraction of zeroed weights (~50%) in a converged network, so I think it is safe to say that the theoretical throughput for 99% of cases is really ~800 TFLOPS, not the ~1600 TFLOPS advertised.
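Translating the datasheet numbers back to dense throughput is then just a division by two; a trivial sketch (the figures are the ones quoted in this thread):

```python
def dense_tflops(advertised_with_sparsity: float) -> float:
    """NVIDIA's sparsity figures assume half the multiply-accumulates
    can be skipped, so the dense peak is half the starred number."""
    return advertised_with_sparsity / 2

print(dense_tflops(1600))  # -> 800.0 dense half-precision TFLOPS
```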


Oh, that is really annoying. Thanks for catching that!


There are two populations of people reading the NVIDIA specs (and you just switched groups). If NVIDIA ever changes their marketing strategy and the asterisk comes to denote something else, there might be a third population, because there are a lot of people who I suspect will keep dividing those starred FLOP/s figures by two :-)


Looking at https://ant.apache.org/ivy/history/latest-milestone/ivyfile/... I don't see how this is the same as minimal version selection.



I suspect they mean it's a secondary source, not a primary one.

