howenterprisey's comments | Hacker News

Yeah, Wikipedia has its own user script system, and that's what was disabled.

In that case their pay would be 20% less than market rate because the percent change is based on market rate, not the new value.


> This appears to be an AI-generated draft with severe hallucination problems bordering on WP:HOAX. For one, it calls it the "Margo Largo Accord" -- a name that is not and never was real. The draft also claims the accord was already introduced, and goes into detail about its supposed contents, effects, and reactions to it; the problem is, no actual accord was introduced at the time (or now), so pretty much all of that is made up. The sourcing gives an impression of WP:SIGCOV [significant coverage] but almost all of it appears to be background information about various topics that aren't the accord.

From https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio..., the discussion that concluded with the article's deletion


Yeah, I've read that and find it a little sus. The article for me was fine even if not perfect.

It's the first time in my 20+ years on the internet that I've gone to find a Wikipedia article to share with a friend (because the US keeps weakening) and found it gone.

Didn't know it was that easy to remove a Wikipedia article just by hinting it was AI-generated.


In the anime fansubbing community (which this document is likely from), it's very common to hate on VLC for a variety of imagined (and occasionally real but marginal) issues.


Why is that?


At least for the real part, there was the great 10-bit encoding "switchover" around 2012, when it seemed like the whole anime encoding scene decided to move to encoding just about everything with 10-bit h264 in order to preserve more detail at the same bitrate. VLC didn't have support for it, and for a long time (5+ years?) it remained without proper support. Every time you tried playing such files they would exhibit corruption at some interval. It was like watching a scrambled cable channel with brief moments of respite.

The kicker is that many, many other players broke. Very few hardware decoders could deal with this format, so it was fairly common to get dropped frames due to software decoding fallback even if your device or player could play it. And, about devices, if you were previously playing h264 anime stuff on your nice pre-smart tv, forget about doing so with the 10-bit stuff.

Years passed and most players could deal with 10-bit encoding, people bought newer devices that could hardware decode it and so on, but afaik VLC remained incompatible a while longer.

Eventually it all became moot because the anime scene switched to h265...


8-bit and 10-bit almost give digital video too much credit. Because of analog backwards compatibility, 8-bit video only uses values 16-235, so it's actually like… 7.8 bit.

It's nowhere near enough codes, especially in darker regions. That's one reason 10-bit is so important; another is that h264 had unnecessary rounding issues, and the added bit depth hid them.


Mostly that VLC has had noticeable issues with displaying some kinds of subtitles made with Advanced SubStation (especially ones taking up much of the frame, or that pan/zoom), which MPV-based players handle better.

If you want a MPV-based player GUI on macOS, https://github.com/iina/iina is quite good.


Note that, while I haven't had time to investigate them myself yet, IINA is known to have problems with color spaces (and also uses libmpv, which is quite limited at the moment and does not support mpv's new gpu-next renderer). Nowadays mpv has first-party builds for macOS, which work very well in my opinion, so I'd recommend using those directly.


Two different systems; on the mod side there are two different UIs (one to set each) as well. Yeah it's weird.


I'd guess nobody sat down and said "Here's the target demographic profile for the new UI, so let's rework our messaging, people!" It's just a funny accident of maintenance over time that the result looks like that.


Each subreddit's mod team gets to style the subreddit (within some limitations). There's presumably a separate set of style rules for the main and "old" sites; and the latter is legacy that most mods (and most users) have not even thought about for years. (Probably most current users have joined the site after the switch and never seen the "old" domain. I'm honestly surprised it still works at all.)


Pretty much. And that seems to reflect changing sentiment of the mods over time (e.g., no information, no inspiration, no trying to emulate, only emoting).

But what I thought was funny was, if you didn't know that, it would look like the two "experiences" were tailored separately: OG redditors get the constructive messaging in the spirit of RMS's mission, but modern social media redditors get the modern social media simplified passive consumption.


It is funny.

I suppose it's a consequence of the current mods having been immersed in that modern social media environment for longer.


That's what saying "noticed with Gen Z" means.

Reply to edit: generations are sequential; if you've noticed something with one generation it means that you're not accusing the prior generations of the same thing, otherwise you would've used different wording.


You can just as easily add context to the first example or skip the wrapping in the second.


Especially since the second example only gives you a stringly-typed error.

If you want to add 'proper' error types, wrapping them is just as difficult in Go and Rust (needing to implement `error` in Go or `std::error::Error` in Rust). And, while we can argue about macro magic all day, the `thiserror` crate makes said boilerplate a non-issue and allows you to properly propagate strongly-typed errors with context when needed (and if you're not writing library code to be consumed by others, `anyhow` helps a lot too).


fmt.Errorf with the %w verb does in fact wrap an error. It returns a *fmt.wrapError value whose chain can be inspected using `errors.Is` (and unwrapped with `errors.Unwrap`). So it's not stringly typed anymore.


I am fully aware of how fmt.Errorf works as well as what's inside the `errors` package in the Golang stdlib, as I do work with the language regularly.

In practice, this ends up with several issues (and I'm just as guilty of doing a bunch of them when I'm writing code not intended for public consumption, to be completely fair).

fmt.Errorf is stupid easy to use. There's a lot of Go code out there that just doesn't use anything else, and we really want to make sure we wrap errors to provide 'context' since there are no backtraces in errors (and nobody wants to force consuming code to pay that runtime cost for every error, given there's no standard way to indicate you want it).

errors.New can be used to create very basic errors, but since it gives you a single instance of a struct implementing `error` there's not a lot you can do with it.

The signature of a function only indicates that it returns `error`; we have to rely on the docs to tell users what specific errors they should expect. Now, to be fair, this is also an issue for languages that use exceptions -- checked exceptions in Java notwithstanding.

Adding a new error type that should be handled means that consumers need to pay attention to the API docs and/or changelog. The compiler, linters, etc don't do anything to help you.

All of this culminates in an infuriating, inconsistent experience with error handling.


I don't agree. There isn't a standard convention for wrapping errors in Rust, like there is in Go with fmt.Errorf -- largely because ? is so widely-used (precisely because it is so easy to reach for).

The proof is in the pudding, though. In my experience, working across Go codebases in open source and in multiple closed-source organizations, errors are nearly universally wrapped and handled appropriately. The same is not true of Rust, where in my experience ? (and indeed even unwrap) reign supreme.


> There isn't a standard convention for wrapping errors in Rust

I have to say that's the first time I've heard someone say Rust doesn't have enough return types. Idiomatically, possible error conditions would be wrapped in a Result. `foo()?` is fantastic for the cases where you can't do anything about it, like you're trying to deserialize the user's passed-in config file and it's not valid JSON. What are you going to do there that's better than panicking? Or if you're starting up and can't connect to the configured database URL, there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.

For everything else, there're the standard `if foo.is_ok()` or matching on `Ok(value)` idioms, when you want to catch the error and retry, or alert the user, or whatever.

But ? and .unwrap() are wonderful when you know that the thing could possibly fail, and it's out of your hands, so why wrap it in a bunch of boilerplate error handling code that doesn't tell the user much more than a traceback would?


> there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.

`?` (i.e. the try operator) and `.unwrap()` do not do the same thing.


One would still use `?` in rust regardless of adding context, so it would be strange for someone with rust experience to mention it.

As for the example you gave:

    File::create("foo.txt")?;
If one added context, it would be

    File::create("foo.txt").context("failed to create file")?;
This is using eyre or anyhow (common choices for adding free-form context).

If rolling your own error type, then

    File::create("foo.txt").map_err(|e| format!("failed to create file: {e}"))?;
would match the Go code behavior. This would not be preferred though, as using eyre or anyhow or other error context libraries build convenient error context backtraces without needing to format things oneself. Here's what the example I gave above prints if the file is a directory:

    Error: 
       0: failed to create file
       1: Is a directory (os error 21)

    Location:
       src/main.rs:7


My experience aligns with this, although I often find the error being used for non-errors which is somewhat of an overcorrection, i.e. db drivers returning “NoRows” errors when no rows is a perfectly acceptable result of a query.

It’s odd that an .unwrap() caused a huge outage at Cloudflare, and my first reaction was “that couldn’t happen in Go haha” but… it definitely could, because you can just ignore returned values.

But for some reason most people don’t. It’s like the syntax conveys its intent clearly: Handle your damn errors.


I think the standard convention if you just want a stringly-typed error like Go is anyhow?

And maybe not quite as standard, but thiserror if you don’t want a stringly-typed error?


Yeah, but which is faster and easier for a person to look at and understand? Go's intentionally verbose so that more complicated things are easier to understand.


  let mut file = File::create("foo.txt").context("failed to create file")?;
Of all the things I find hard to understand in Rust, this isn't one of them.


Important to note that .context() is something from `anyhow`, not part of the stdlib.


What's the "?" doing? Why doesn't it compile without it? It's there as a shortcut around matching on the error and unwrapping, which makes sense if you know Rust, but the verbosity of Go is its strength, not a weakness. My belief is that it makes things easier to reason about outside of the trivial example here.


The original complaint was only about adding context: https://news.ycombinator.com/item?id=46154373

If you reject the concept of a 'return on error-variant else unwrap' operator, that's fine, I guess. But I don't think most people get especially hung up on that.


> What's the "?" doing? Why doesn't it compile without it?

I don't understand this line of thought at all. "You have to learn the language's syntax to understand it!"...and so what? All programming language syntax needs to be learned to be understood. I for one was certainly not born with C-style syntax rattling around in my brain.

To me, a lot of the discussion about learning/using Rust has always sounded like the consternation of some monolingual English speakers when trying to learn other languages, right down to the "what is this hideous sorcery mark that I have to use to express myself correctly" complaints about things like diacritics.


I don't really see it as any more or less verbose.

If I return Result<T, E> from a function in Rust I have to provide an exhaustive match of all the cases, unless I use `.unwrap()` to get the success value (or panic), or use the `?` operator to return the error value (possibly converting it with an implementation of `std::From`).

No more verbose than Go, from the consumer side. Though, a big difference is that match/if/etc are expressions and I can assign results from them, so it would look more like

    let a = match do_thing(&foo) {
        Ok(res) => res,
        Err(e) => return Err(e),
    };
instead of:

    a, err := do_thing(foo)
    if err != nil {
        return err // (or wrap it with fmt.Errorf and continue the madness
                   // of stringly-typed errors, unless you want to write custom
                   // error types, which is now more verbose and less safe than Rust).
    }
I use Go on a regular basis, error handling works, but quite frankly it's one of the weakest parts of the language. Would I say I appreciate the more explicit handling from both it and Rust? Sure, unchecked exceptions and constant stack unwinding to report recoverable errors wasn't a good idea. But you're not going to have me singing Go's praise when others have done it better.

Do not get me started on actually handling errors in Go, either. errors.As() is a terrible API to work around the lack of pattern matching in Go, and the extra local variables you need to declare to use it just add line noise.


I interpret the sense of "literally" here in the opposite way, i.e. without it the sentence may be taken to mean that the books metaphorically stop mid-sentence, but with it, they're saying that it's non-metaphorical and they really do. It would be bizarre wording otherwise.


Since my work is vaguely related to superconductors, I saw this comment and was excited to dig into all the errors in the article, but actually couldn't find any in the parts discussing the superconductors specifically. (I don't know data centers and can't comment on that bit.) 77 K is indeed an appropriate temperature for LN2 coolant for high-temperature superconductors like they're using. What errors did you see?


The very first sentence is confusing: "Power demands of data centers have grown from tens to 200 kilowatts in just a few years". I assume they're talking about a single rack here, not the power demand of an entire data center.


Well, the third paragraph implies that "low-voltage" is a factor against having lots of heat and size, when the opposite is true.

Otherwise nothing pops out to me.


Maybe Geoguessr players would be good at identifying them as well?

