Hacker News | new | past | comments | ask | show | jobs | submit | maplethorpe's comments

> I can't empathize with the complaint that we've "lost something" at all.

I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!

It's not only the golden era of code. It's the golden era of content.


> But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none.

Are you for real? Quantity is not equal to quality.

I'll be sure to dump a pile of trash in your living room. There wasn't much there before, but now there's lots of stuff. Better, right?


I hope this is sarcasm. :)

Quality is better than quantity.

We have more words than ever. Nice.

But all the words sound more like each other than ever. It’s not just blah, it’s blah.

And why should I bother reading what someone else “writes”? I can generate the same text myself for free.


If vibe coding satisfies your need to build, then you're performing a deception on yourself. I guarantee it's still possible to satisfy one's need to create in the era of AI. Just switch off the autocomplete slot machine.

If your answer to that is "but I'll fall behind! It's not pragmatic to take the slower route!", think about the damage you're inflicting on yourself. You're allowing your neurons to wither while you trick your brain into releasing pleasure chemicals with a simulation of the real activity. Who is really falling behind, here?


> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.

Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human (an ability that we have all possessed, in some form, for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).


Yeah I think it's a mistake to focus on writing "readable" or even "maintainable" code. We need to let go of these aging paradigms and be open to adopting a new one.


In my experience, LLMs perform significantly better on readable maintainable code.

It's what they were trained on, after all.

However, what they produce is often highly readable but not very maintainable, due to verbosity and obvious comments. This seems to pollute codebases over time, and you see AI coding efficiency slowly decline.


> Poe's law is an adage of Internet culture which says that any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views.

The things you mentioned are important but have been on their way out for years now regardless of LLMs. Have my ambivalent upvote regardless.

[1] https://en.wikipedia.org/wiki/Poe%27s_law


As depressing as it is to say, I think it's a bit like the year is 1906 and we're complaining that these new car tyres are bad because they're no longer backwards compatible with the horse-drawn wagons we might want to attach them to in the future.


Yes, exactly.

This is a completely new thing which will have transformative consequences.

It's not just a way to do what you've always done a bit more quickly.


Do readability and maintainability not matter when AI "reads" and maintains the code? I'm pretty sure they do.


If that were true, you could surely ask an LLM to write apps of the same complexity in Brainfuck, right?


Isn't that still considered cooking? If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did cook it.


Work harder!

Now I’m a life coach because I’m responsible for your promotion.


Ok, maybe my analogy wasn't the best. But the point I was trying to make is that using AI tools to write code doesn't mean you didn't write the code.


Very apt analogy. I'm still waiting for my paycheck.


> If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.

The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.

Otherwise you could say you "cooked a meal" every time you went to McDonald's.


Why is the head chef called the head chef, then? He doesn’t “cook”.


To differentiate him from the "cook", which is what we call those who carry out the actual act of cooking.


Well, don’t go around calling me a compiler!


If that's what you do, then the name is perfectly apt. Why shy away from what you are?


The difference is that the head chef can cook very well and could do a better job of the dish than the trainee.


"head chef" is a managerial position but yes often they can and do cook.


I would argue that you technically did not cook it yourself. You are, however, responsible for having cooked it. You directed the cooking.


Ctrl+Shift+C lets you copy a frame to the clipboard.


I've found the biggest impediment to this strategy is social pressure. The small-step methodology goes against the common-sense belief that the greatest gains come from hard work, so it often receives a lot of pushback from friends and family. In my experience, if someone witnesses you taking a small step, they're likely to tell you you're not trying hard enough, or to give you their own advice on what you should be doing instead.

Small steps are best taken in private.


> will we find new, better ways to find answers to technical questions?

I honestly don't think they need to. As we've seen so far, for most jobs in this world, answers that sound correct are good enough.

Is chasing more accuracy a good use of resources if your audience can't tell the difference anyway?


Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.


If it was me, and I was trying to hide my identity, I'd add those sorts of details to muddy the waters.

