I'm not sure it's even about ethics; it might simply be that misaligned LLMs give worse outputs, and they don't want to degrade their models. Their models also tend to be the least sycophantic and push back on inane requests; giving in on those would likely make the models worse too.
At the same time given the already terrible reputation of such vanity TLDs, being this hard on abuse might be the only survivable way.
That's not to say there shouldn't be a warning and a recourse, but the time-to-profit for domain abuse is really short, so anti-abuse actions have to be quick.
I'm fairly sure that Safe Browsing's false-positive rate is extremely low; otherwise it'd be unusable in Chrome. That also means acting on positive results is very likely a correct approach.
Safe Browsing is meant for websites, not domain names. Do you really want your registry acting on it and nuking your email services, intranet services, cert-renewal automation, et cetera?
Primarily because OP can't verify that the patch is truly correct. There's also the fact that anything LLM-generated will likely be frowned upon, for the same reason.
With some effort OP could review it manually and then try to submit it though.
But QEMU uses a mailing list for development, which is tedious to set up and then keep track of. I now fundamentally refuse to contribute to projects that develop over mailing lists; the effort involved and the overall experience are just horrible.
Especially if it's a small patch that doesn't concern any big sponsors, you'll probably never get a response. Things get lost easily.
It heavily depends on what you mean by "not the code". If all the code does is implement the necessary steps for the interface, then it's part of the interface: an interpretation of an interpretation of a datasheet.
Yes, but how on earth is their malicious compliance in providing parental controls a good reason to go for a surveillance state that hurts absolutely everyone?
Social media operators love the surveillance-state idea. That's why they aren't pushing back against this.
I even cancelled YT Premium because their "made for kids" system interfered with using my paid adult account. I urge others to do the same when the solutions offered are insufficient.
I have seen LLMs be surprisingly effective at figuring out such oddities. After all, they have ingested knowledge of a myriad of data formats, encryption schemes, and obfuscation methods.
If anything, complex logic is what will defeat an LLM. But a good model will also flag when such logic is intractable.