Most of the problem is the "only been a week" part, likely. Though you're fighting an algorithm that's been patched in inconsistent places for all sorts of weights like "authority" and "quality".
Thousands of little weights driven by obscure attributes of the site that you're not really going to figure out by thrashing and changing stuff.
There's already buprenorphine and methadone. But using either requires some degree of responsibility, punctuality, etc. So unless you mean freely distributing it with very little process, it wouldn't change much.
Those, from what I understand, don’t hit the same, and someone needs to be ready to quit to go on them. They definitely help with withdrawal and so on, but they're not always successful because they don’t scratch the full itch. A bit like nicotine replacement therapy.
But there’s a whole space of harm-reduction before then, which is where things like the Swiss program to provide heroin in controlled circumstances can fit in.
An opioid without respiratory depression problems could fit into that sort of thing pretty well.
> Troubleshooting and fixing the big mess that nobody fully understands
If that's actually the future of humans in software engineering, then that sounds like a nightmare career that I want no part of. Just the same as I don't want anything to do with the gigantic mess of COBOL and Java powering legacy systems today.
And I also push back on the idea that LLMs can't troubleshoot and fix things, and therefore will eventually require humans again. My experience has been the opposite: I've found that LLMs are even better at troubleshooting and fixing an existing code base than they are at writing greenfield code from scratch.
My experience so far has been that they are somewhat good at troubleshooting code, patterns, etc., that exist in the publicly viewable sphere of stuff they're trained on, where common error messages and pitfalls are "google-able".
They are much worse at code/patterns/apis that were locally created, including things created by the same LLM that's trying to fix a problem.
I think LLMs are also causing a decline in the amount of good troubleshooting information being published on the internet, which means less future content to scrape.
That makes sense if it's in moderation. An overzealous asker can disproportionately eat up people's time. Context as to why you're asking helps set priorities.
Yeah ofc. I mean as someone who grew up in Guesser Land and got taught that it’s important to be able to read people’s minds, discovering that I can just, you know, ask, felt like a superpower. I don’t think I’m overdoing it.