Hacker News | somekyle2's comments

It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.


It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgeable?

There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.


100%. I think there are some clear distinctions between AI training and human learning in practice that compound this. Human learning requires individual investment and doesn't scale that efficiently. If someone invests the time to consume all of my published work and learn from it, I feel good about that. That feels like impact, especially if we interact and even more if I help them. They can perhaps reproduce anything I could've done, and that's cool.

If someone trains a machine on my work and it means you can get the benefit of my labor without knowing me, interacting with my work or understanding it, or really any effort beyond some GPUs, that feels bad. And, it's much more of a risk to me, if that means anything.


> If someone invests the time to consume all of my published work and learn from it, I feel good about that.

Agreed. My goal, my moral compass, is to live in a world populated by thriving happy people. I love teaching people new things and am happy to work hard to that end and sacrifice some amount of financial compensation. (For example, both of my books can be read online for free.)

I couldn't possibly care less about some giant matrix of floats sitting in a GPU somewhere getting tuned to better emulate some desired behavior. I simply have no moral imperative to enrich machines or their billionaire owners.


If it doesn't work, it's an annoyance and you have to argue with it. If it does work, it's one more case where maybe with the right MCP plumbing and/or a slightly better model you might not be needed as part of this process. Feels a bit lose-lose.


I suspect that lots of developers who are sour on relying on AI significantly _would_ agree with most of this, but see the result of that logic leading to (as the article notes) "the skill of writing and reading code is obsolete, and it's our job to make software engineering increasingly entirely automated" and really don't like that outcome so they try to find a way to reject it.

"The skillset you've spent decades developing and expected to continue having a career selling? The parts of it that aren't high level product management and systems architecture are quickly becoming irrelevant, and it's your job to speed that process along" isn't an easy pill to swallow.


You are essentially making a character attack on anyone who disagrees with this article. You dismiss outright reasonable objections you have not heard and instead you presume fear and loathing are the only possible motivations to disagree.


Certainly not my intention. Some of my post is projection: I don't like the implications of the AI enthusiast stance, and I know I want "actually, AI can't fully take over the task of programming" to be true even though my recent experience using it to handle even moderately complex implementation has been quite successful. I've also seen the opposition narrow in scope but not firmness over the last year from some coworkers while watching others outsource nearly all of their actual code interaction, and I think some of the difference is how invested they are in the craft of programming vs being able to ship something. So, if you like the part AI is expected to take over and see it as part of your value, it makes sense that your thresholds are higher for accepting that outcome as accurate. Seems like typical psychology rather than an attack.


> "the skill of writing and reading code is obsolete, and it's our job to make software engineering increasingly entirely automated"

This is simply a mediocre take; sometimes I feel like people never actually coded at all to have such opinions.


> The parts of it that aren't high level product management and systems architecture are quickly becoming irrelevant

Embedded in this is the assumption that many SWEs can actually do those roles better than existing specialists.

If they can't - end of the line


Remains to be seen if that pill needs swallowing at all. At least for reading code.


If one is not writing code, your ability to read code will degrade quickly and be reduced to a basic sanity check as to whether you need to add more constraints (prompts, tests, etc.). Anyone who thinks they can read code without writing code at a level needed to understand what is going on (for anything non-trivial) is fooling themselves.


As if reading books was enough to make you an author.


Yep. I'd say it's an order of magnitude more effort to read code you haven't written too, compared to reading code you wrote. So there is approximately zero chance the people using AI to generate code are reading it at a level where they actually understand it, or else they would lose all of their supposed productivity gains.


Yeah, if people were good at reading code we wouldn't have the whole LGTM meme where the reviewer gives up as soon as the PR is bigger than 500 lines.


I actually don't think they would agree with most of this. Why would you think that?


"force" seems a bit strong, as I remember it.


Yeah, I remember it being a fourth option alongside the others but I quit just before Google lost its serifs and its soul


Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But, enough of the people in tech have their future tied to AI that there are lot of vocal boosters.


It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.


Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.

Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.


The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).

Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.

I think it's just not true that non-tech people are especially opposed to AI.


> The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use

That seems more like a canary than anything. This is the demographic that doesn't even know which tech company they're talking to in congress. That's not the demographic in touch with tech. They have gotten more excited about even dumber stuff.

For people under 50, it's a wildly common insult to say something seems AI generated. They are disillusioned with the content slop filling the internet, the fact that 50% of the internet is bots, and their future job prospects.

The only people I've seen liking AI art, like fake cat videos, are people over 50. Not that they don't matter, but they are not the driver of what's popular or sustainable.


Managers should realize that the thing AI might be best at is to replace them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not to understand the content. AI can do all that shit without a problem.



A Pew Research Center survey found age correlation. But not a generation gap.[1]

[1] https://www.pewresearch.org/science/2025/09/17/ai-in-america...


I live in a medium-sized British town of 100,000 people or so. It may be a slightly more creative town than most — lots of arts and music and a really surprisingly cool music scene — but I can tell you that AI pleases (almost) nobody.

I think actually a lot about it is the sort of crass, unthinking, default-American-college-student manner about the way ChatGPT speaks. It's so American and we can feel it. But AI generated art and music is hugely unpopular, AI chatbots replacing real customer service is something we loathe.

Generally speaking I would say that AI feels like something that is being done to us by a handful of powerful Americans we profoundly distrust (and for good reason: they are untrustworthy and we can see through their bullshit).

I can tell you that this is so different to the way the internet was initially received even by older people. But again, perhaps this is in part due to our changing perspectives on America. It felt like an exciting thing to be part of, and it helped in the media that the Web was the brainchild of a British person (even if twenty years later that same media would have liked to pretend he wasn't at a European research institution when he did it).

The feeling about AI is more like the feeling we have about what the internet eventually did to our culture: destroying our high streets. We know what is coming will not be good for what makes us us.


I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.


I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.


Frankly, tech deserves its bad reputation in SF (and worldwide, really).

One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.

I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.


There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.


It's plenty sane to be angry when the benefits of those technical innovations are not distributed equally.


It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.


The plough also made the rich richer, but in the long run the productivity gains it enabled drove improvements to common living standards.


I don't agree with any of this. I just think it's aggravating to live in a company town.


Non-technical people that I know have rapidly embraced it as "better Google where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."

There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.


Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.

At least, that's my wife's experience working on a contract with a state government at a big tech vendor.


Not talking about government employees, for whatever that's worth.


EDIT: Removed part of my post that pissed people off for some reason. shrug

It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.


The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.


It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population uses it in a Google fashion and moves on. Not everyone is thinking about the AI apocalypse every day.

Personally, I’m in-between the opinions. I hate when I’m consuming AI-generated stuff, but can see the use for myself for work or asking bunch of not-so-important questions to get general idea of stuff.


Most of my FB contacts are not in tech. It is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.


> enough of the people in tech have their future tied to AI that there are lot of vocal boosters

That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.


What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.


Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis; I use them less, in fact, than I use AI. There were people at the car's introduction who made all the points I would make today.

The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.


> health and safety seems irrelevant to me

Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.


Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?

It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to consumer convenience measured progress.


> Do you think the industry will stop because of your concern?

I’m not sure what this question is addressing. I didn’t say it needs to “stop” or the industry has to respond to me.

> It's lovely that you care about health,

1) you should care too, 2) drop the patronizing tone if you are actually serious about having a conversation.


From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and any devil's advocate must be the devil.

The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.


> and any devil's advocate must be the devil.

No not the devil, but years ago I stopped finding it funny or useful when people "played" the part of devil's advocate because we all know that the vast majority of the time it's just a convenient way to be contrarian without ever being held accountable for the opinions espoused in the process. It also tends to distract people from the actual discussion at hand.


People not being assholes and having opinions is not "trolling with virtue signaling". Even where people do virtue signal, it is significant improvement over "vice signaling" which you seem to be doing and expecting others to do.


I for one have no idea what you mean by health and safety with respect to AI. Do you have an OSHA concern?


I have an “enabling suicidal ideation” concern for starters.

To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: There have been plenty of high profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them for you and link them, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.

I am not against the use of LLMs, but like social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.


> To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque

Search for health and safety and see how many results are about work.


You're being needlessly prescriptive with language here. I am talking about health and safety writ large. I don't appreciate the game you're playing, and it's why these discussions rarely go anywhere. It can't all be flippant retorts and needling words. I am clearly saying that we as a society need to be willing to discuss the possible issues with LLMs and make informed decisions about how we want this technology to exist in our lives.

If you don't care about that so be it - just say it out loud then. But I do not feel like getting bogged down in justifying why we should even discuss it as we circle what this is really about.


All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.

If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.


> All the AI companies are taking those concerns seriously, though.

I do not feel they are but also I was primarily talking about the AI-evangelists who shout people asking these questions down as Luddites.


That's literally what the Luddites were doing though. It's a reasonable comparison.


Luddite is usually used as an insult based on a misunderstanding of the Luddites. That’s the definition I’m responding to here.


I would disagree. Luddite, to me, is a negative and pejorative label because history has shown Ned Ludd and his followers to have been a short-sighted, self-sabotaging reactionary movement.

I think the same thing of the precautionary movements today, including the AI skeptic position you are advocating for here. The comparison is valid, and it is negative and pejorative because history is on the side of advancing technology.


You mean "_Most_ people out of tech that write social media posts I read".


That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI


I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences or because they work with it and they begrudgingly see the writing on that wall for what it means for software professionals.

I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.


Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.


Yeah, it makes sense that going from a decade or so where SWE was one of the best possible career paths if you have any aptitude to a period where tech cos were staffing up aggressively (I recall reading ~60% growth), there's gonna be a hangover. The educational pipeline probably still has a few years of oversupply to work through, and all of the people laid off post covid still need to work. Even in a world where AI being able to automate some of the key skills required for SWE has no negative impact on employment, we'd expect a few more years of rough job prospects.


Even 15 years ago or so when Guido was still there I recall being told "we aren't supposed to write any new services in Python. It starts easy, then things get messy and end up needing to be rewritten." I recall it mostly being perf and tooling support, but also lack of typing, which has changed since then, so maybe they've gotten more accepting.


Oh, that explains a thing. A decade or so ago, I was a well regarded engineer at a FAANG who got an offer from a startup. I told my manager I was probably going to take it, as it sounded fun. He and his lead tried to talk me into staying, showed me other departments I might find more fun. Really, they could've offered me a trivial raise and I probably would've stayed, but I was too meek to ask for money, and they didn't bring up money at all.

That always struck me as very strange; I assumed it was either a mistake, or a "if they're going somewhere that is a pay cut, clearly it isn't money, and if you offer them money they'll leave in 6mo anyway". But, if they don't have that level to pull, that's a much simpler answer.


Thanks for Picol! I saw it as a young engineer, and found the simplicity inspiring. It inspired me to write Tcl interpreters as starter projects in the next languages I was picking up, and I learned a lot by trying to push performance, functionality, and correctness. Your little project ended up inspiring cumulative months of joyful hacking.

