> I can't empathize with the complaint that we've "lost something" at all.
I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!
It's not only the golden era of code. It's the golden era of content.
If vibe coding satisfies your need to build, then you're deceiving yourself. I guarantee it's still possible to satisfy one's need to create in the era of AI. Just switch off the autocomplete slot machine.
If your answer to that is "but I'll fall behind! It's not pragmatic to take the slower route!", think about the damage you're inflicting on yourself. You're allowing your neurons to wither while you trick your brain into releasing pleasure chemicals with a simulation of the real activity. Who is really falling behind, here?
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble
When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.
Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human in some way (an ability that we have all possessed for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).
Yeah I think it's a mistake to focus on writing "readable" or even "maintainable" code. We need to let go of these aging paradigms and be open to adopting a new one.
In my experience, LLMs perform significantly better on readable, maintainable code.
It's what they were trained on, after all.
However, what they produce is often highly readable but not very maintainable, due to the verbosity and obvious comments. This seems to pollute codebases over time, and you see AI coding efficiency slowly decline.
> Poe's law is an adage of Internet culture which says that any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views.
The things you mentioned are important but have been on their way out for years now regardless of LLMs. Have my ambivalent upvote regardless.
as depressing as it is to say, i think it's a bit like the year is 1906 and we're complaining that these new tyres being made for cars are bad because they're no longer backwards compatible with the horse-drawn wagons we might want to attach them to in the future.
Isn't that still considered cooking? If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did cook it.
> If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.
The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.
Otherwise you could say you "cooked a meal" every time you went to McDonald's.
I've found the biggest impediment to this strategy is social pressure. The small step methodology goes against the common sense knowledge that the greatest gains come from hard work, so it often receives a lot of push back from friends and family. In my experience, if someone witnesses you taking a small step, they're likely to tell you you're not trying hard enough, or give you some of their own advice on what you should be doing instead.
Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put into it.