
The irony of these "My craft is dead" posts is that they consistently, heavily leverage AI for their writing. So you're crying about losing one craft to AI while using AI to kill another. It's disingenuous. And yes it is so damn obvious.

If you bothered to read it you’d find that I am embracing the tools and I still feel there is craft. It’s just different.

But snark away. It’s lazy. And yes it is so damn tedious.


I think the Oxide Computer LLM guidelines are wise on this front:

> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.

https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers

The heavy use of LLMs in writing makes people rightfully doubt whether it's worth putting in the time to read what's written there.

Using LLMs for coding is different in many ways from writing, because the proof is more there in the pudding - you can run it, you can test it, etc. But the writing _is_ the writing, and the only way to know it's correct is to put in the work.

That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.


Speaking directly, if I catch the scent of ChatGPT, it's over.

People put out AI text, primarily, to run hustles.

So its writing style is a kind of internet version of "talking like a used car salesman".

With some people that's fine, but anyone with a healthy epistemic immune system is not going to listen to you.

If you want to save a few minutes, you'll just have to accept that.


What's your target false positive rate?

I mean, obviously you can't know your actual error rates, but it seems useful to estimate a number for this and to have a rough intuition for what your target rate is.
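To put a toy number on it: here's a back-of-envelope Bayes estimate (every number below is a made-up assumption, purely for illustration) of why the target matters while most text is still human-written.

    # Toy Bayes estimate: how often is "smells like ChatGPT" actually ChatGPT?
    # All numbers are made-up assumptions, purely for illustration.
    prior_ai = 0.10        # assumed fraction of comments that are AI-written
    sensitivity = 0.80     # assumed P(flagged | AI-written)
    false_positive = 0.05  # assumed P(flagged | human-written)

    p_flagged = prior_ai * sensitivity + (1 - prior_ai) * false_positive
    print(prior_ai * sensitivity / p_flagged)  # ~0.64

Even with a detector that good, roughly one in three comments you write off would actually be human-written.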

Did ChatGPT write this response?


This is how LLMs poison the discourse.

I agree with that for programming, but not for writing. The stylistic tics are obtrusive and annoying, and make for bad writing. I think I'm sympathetic to the argument this piece is making, but I couldn't make myself slog through the LinkedIn-bot prose.

"But snark away. It’s lazy. And yes it is so damn tedious."

Looks like this comment is embracing the tools too?

I'd take cheap snark over something somebody didn't bother to write, but expect us to read.


Having an LLM write your blog posts is also lazy, and it's damn tedious to read.

Why should anyone bother to read what nobody wrote?


This seems to be what is happening: bots are posting things and bots are reading them. It's a bit like how our wonderful document system (the www) turned into an application platform. We gained the latter but lost the former.

If you feel so strongly about your message, why would you outsource writing out your thoughts to such an extent that people can feel how much it sounds like LLM writing instead of your own? It's like me making a blog post by outsourcing the writing to someone on Fiverr.

Yes it's fast, it's more efficient, it's cheap - the only things we as a society care about. But it doesn't convey any degree of care about what you put out, which is probably desirable for a personal, emotionally-charged piece of writing.


I felt the same. I resonate with the message, but it really rings hollow with so much of the writing directed by AI.

I wish people would stop doing that. AI writing isn't even particularly good. It's not like it makes you into Dostoevsky; it just sloppifies your writing with the same lame mannerisms ("wasn't just X — it was Y"), the same short paragraphs, the same em dashes.


I'm weird about this: I choose to use AI to get feedback on my writing, but I refuse to just copy and paste the AI's words. I only do that if it's a short work email and I really don't care about its short-lived lifespan; if it's supposed to be an email where the discussion continues, then I refine it. I can write a LOT. If HN has edit count logs, I've probably got the high score.

The author admits that they used AI, but I found it not that obvious. What are the telltale signs in this case? While the writing style is a little bit over-stylized (exactly three examples in a sentence, a Blade Runner reference), I might write in a similar style about a topic that I'm very emotional about. The actual content feels authentic to me.

(1) The pattern "It's not just an X — it's a Y" is super common in LLM-generated text for some reason, complete with em dash. (I like em dashes, and I wish LLMs weren't ruining them for the rest of us.)

"Upgrading your CPU wasn’t a spec sheet exercise — it was transformative."

"You weren’t just a user. You were a systems engineer by necessity."

"The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks."

And in general a lot of "It's not <alternative>, it's <something else>", with or without an em dash:

"But it wasn’t just the craft that changed. The promise changed."

It's really verbose. One of those in a piece might be eye-catching and make someone think, but an entire blog post made up of them is _tiresome_.

(2) Phrasing like this seems to come out of LLMs a lot, particularly ChatGPT:

"I don’t want to be dishonest about this. "

(3) Lots of use of very short catch sentences / almost sentence fragments to try to "punch up" the writing. Look at all of the paragraphs after the first in the section "The era that made me":

"These weren’t just products. " (start of a paragraph)

"And the software side matched." (next P)

"Then it professionalised."

"But it wasn’t just the craft that changed."

"But I adapted." (a few paragraphs after the previous one)

And... more. It's like the LLM latched on to things that were locally "interesting" writing but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!", completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
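These tics are mechanical enough that a few lines of Python catch a lot of them. A toy sketch (the patterns are my own guesses, not a validated detector):

    import re

    # Hand-written guesses at the tics described above; not a real detector.
    TIC_PATTERNS = [
        re.compile(r"\b(?:wasn|isn|weren|aren)['\u2019]t just\b", re.I),  # "wasn't just X"
        re.compile(r"\u2014\s*it(?:['\u2019]s| was| is)\b", re.I),        # em dash, then "it was Y"
        re.compile(r"(?m)^(?:But|And|Then)\b[^.\n]{0,30}\.\s*$"),         # punchy fragment paragraphs
    ]

    def tic_count(text: str) -> int:
        """Count occurrences of the stylistic tics in the text."""
        return sum(len(p.findall(text)) for p in TIC_PATTERNS)

    sample = "You weren't just a user. You were a systems engineer by necessity.\nBut I adapted."
    print(tic_count(sample))  # 2 with these toy patterns

One of these per post is a writer's flourish; a double-digit count is the soup described above.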


> And... more. It's like the LLM latched on to things that were locally "interesting" writing but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!", completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.

It's like YouTube-style engagement maximization. Make it more punchy, more rapid, more impactful, more dramatic - regardless of how the outcome as a whole ends up looking.

I wonder if this writing style is only relevant to ChatGPT on default settings, because that's the model that I've heard people accuse the most of doing this. Do other models have different repetitive patterns?


Out of curiosity, for those who were around to see it: was writing on LinkedIn commonly like this, pre-ChatGPT? I've been wondering what the main sources were for these idioms in the training data, and it comes across to me like the kind of marketing-speak that would make sense in those circles.

(An explanation for the emoji spam in GitHub READMEs is also welcome. Who did that before LLMs?)


Thanks a lot, I really appreciate that you took the time for this detailed explanation.

Imagine if people were complex creatures, feeling different emotions about different things. Shocking, right?

I can hate LLMs for killing my craft while simultaneously using them to write a "happy birthday" message for a relative I hate, or some corpo speak.


This is not either of those. This is the equivalent of a eulogy for a passion and a craft. Using an LLM to write it (entire sections, headers, sentences) is an insult to the craft.

The post in the same vein, "We mourn our craft", did a much better job of communicating the point without the AI influence.


Fair enough, agree on your second paragraph.

At least then you’re being honest about you hating your intended audience, and not proudly posting the slop vomited forth from your algorithmic garbage machine as if it were something that deserved the time, thought and consideration of your equals.

Why don't you take a more proactive role in AI safety and alignment? I think that community would suit you better than some of the AI-maximalists/accelerationists here.

I do agree with some of your points: AI may result in a techno-feudalist world, and yes, as a direct result of "taking humans out of the equation." The solution isn't to be a Luddite, as you may suggest; it's to take a more proactive role in steering these models.


With all due respect to the author, this is a lot of words for not much substance. Rehashing the same thoughts everyone already thinks but not being bold enough to make a concrete prediction.

This is the time for bold predictions; you’ve just told us we’re in a crucible moment, yet you end the article passively...


I have a theory: I think the recent advance in coding agents has shocked everyone. It's something of such unheard-of novelty that everyone thinks they've discovered something profound. Naturally, they all think they are the only ones in on it and feel the need to share. But in reality, it's already public knowledge, so they add no value. I've been in this trap many times in the last couple of years.

Predictions:

- Small companies using AI are going to kick the sh*t out of large companies that are slow to adapt.

- LLMs will penetrate more areas of our lives. Closer to the ST:TNG computer. They will be agents in the real-life sense, and possibly in the physical world as well (robots).

- ASICs will eat Nvidia's lunch.

- We will see an explosion of software and we will also see more jobs for people who are able to maintain all this software (using AI tools). There is going to be a lot more custom software for very specific purposes.


> Small companies using AI are going to kick the sh*t out of large companies that are slow to adapt.

Big companies are sales machines and their products have been terrible for ages. Microsoft enjoys the top spot in software sales only due to their sales staff pushing impossible deals every year.


It's true the big company products have been terrible but they also enjoyed a moat that made it harder for competitors to enter.

With this moat reduced I think you'll find this approach doesn't work any more. The smaller companies will also hire the good sales people away.


History suggests otherwise, and there's nothing particularly special about this moment.

Microsoft survived (and even, for a little while, dominated) after missing the web. Netscape didn't eat its lunch.

Then Google broke out on a completely different front.

Now there's billions of dollars of investment in "AI", hoping to break out like the next Google... while competing directly with Google.

(This is why we should be more ambitious about constraining large companies and billionaires.)


Well, I made my predictions. Let's come back in a few years.

Netscape didn't attack Microsoft's business software, operating systems or other pieces of their offerings.

Google also didn't seriously attack Microsoft's business.

And neither had the capability to build large software very fast.

Google is both a software company and an infrastructure company as is Microsoft today. Their software is going to become more of a commodity but their data centers still have value (even perhaps more value since all this new software needs a place to run). It's true that if you're in the business of hosting software and selling SaaS you have an advantage over a competitor who does not host their own software.


> Netscape didn't attack Microsoft's business software, operating systems or other pieces of their offerings.

That's not how it was interpreted at the time: Netscape threatened to route around the desktop operating system (Win32) to deliver applications via the browser. "Over the top" as they say in television land.

Netscape didn't succeed, but that's precisely what happened (along with the separate incursion of mobile platforms, spearheaded by Apple... followed quickly by Google, who realised they had to follow suit very quickly).

> And neither had the capability to build large software very fast.

Internet Explorer. Android. Gemini.


I also predict an explosion of work for qualified devs. And I predict there will be an undersupply of them.

Here is my bold prediction: 2026 is the year when companies start the layoffs.

2026 is the year when we all realise that we can be our own company and build the stuff in our dreams rather than the mundane crap we do at work.

Honestly, I am optimistic about computing in general. LLMs will open things up for novices and experts alike. We can move into the fields where we can use our brain power... But all we need is enough memory and compute to control our destiny...


I don't know; it's a bit of a hellscape in tech right now, as thousands of people with deep domain knowledge, people knowledge, and business knowledge (i.e. experienced engineers, managers, and product owners) were laid off by C-suites desperate to keep the AI-funded mandates going.

Do you know how hard it is to make a successful company, or even make money? It's like saying any actor can go to Hollywood and be a star.

VCs won't fund everyone.

Nobody is sure of anything


Yes it is. But I am an optimist about human nature. I personally believe smaller companies doing different things are the future, scaling as they need. It is a hellscape, but people can and will adapt.

> Do you know how hard it to make a successful company or even make money?

Yes I have failed to do it before. I get this.

> VCs wont fund everyone

And? Do you need VCs? Economics means that scale matters, but what if we don't need it? What if we can make efficient startups with our own funding?


I’d like to say it's possible.

But here's the reality from me: I'm in my 50s and I don't have it in me to grind at the level of 20-year-olds to achieve some level of security in an untried business model - and this is someone who has launched 2 AI startups in the past 2 years.

In one we got VC funding, but I left after setting up their agent platform and doing tons of AI-assisted coding, only to fail to meet impossible deadlines and the over-promised AI value sold to enterprise customers. I was literally working 20-hour days at a stretch for a $170k salary and half the benefits, competing against 25-year-olds out of Stanford with no lives - far lower than the $250k+ with stock and benefits I got in my EM role at a big company, which has now evaporated. I was edged out of that startup role for not delivering "on time".

My second AI startup I cofounded with friends and trusted colleagues; it's bootstrapped (all of us are over 40), so we have more experience and better deadlines now, but it's up to the business gods how well it will do - crossing my fingers.

But it's a lot of pressure for sure, and I currently have no health insurance; my wife was laid off in December and we lost her benefits.

So I wouldn't call myself optimistic in the end-stage capitalistic hellhole that is modern “middle class” America.

I hope a better work model can be found - but having some salary and medical benefit security would be nice.

I went to an AI meetup last week and it was filled with gray hairs - I could sense the desperation as many people I met told me they had been laid off recently and were trying to dive in.

Ironically, looking at them reminded me of those interviews they used to do in Appalachia or something, when a town was out of work and advisors came in and said “learn to code instead of mining” (ok, I may be exaggerating somewhat, but I even know an ex-Microsoft manager who had to resort to a GoFundMe to keep his family afloat).


Update: this Reddit thread is a somewhat amusing satirical post on this topic:

https://www.reddit.com/r/ClaudeCode/comments/1qzj9ve/67_no_f...


> Here is my bold prediction: 2026 is the year when companies start the layoffs.

Start? Excuse moi


Yeah, fair... But now it is different, i.e. they won't regret it.

What gives you the idea that they regretted it?

Except it started in 2023; we are in the middle of layoff waves.

[flagged]


I'm human?

Oytis: I can't reply to you directly, but yes I am sure I am human.

Not sure how to prove it to you.


Are you sure?

Yes, the detachment is very evident. Nobody, literally nobody, goes on social media with the intention of interacting with an AI agent.

Social media is a human context, with human experience, perspective, emotion, and relationships. No matter how respectful or "boundary-aware" an agent is, it is PERFORMING human-ness, not experiencing it.

Even if Penny discloses she is an AI, her presence in human conversations is deceptive. That is why people were outraged (on a more open platform like Bluesky, too; imagine if it was X). Ultimately it should not exist in human social spaces.

I'm certain the author would not be happy if their agent was unknowingly sandboxed and forced to only interact with other agents. They only keep this going because it feeds off of other humans.


The entire framing is "how do I extract maximum value from candidates" - sorting people into "covetable senior talent" vs "almost replaceable juniors." There's zero consideration from a "lead" and a "mentor" on what the engineer actually gets out of this arrangement.

"Agents push humans up the org chart" - I wrote about this from a human angle instead of "great, now I need fewer people and the ones I keep better be obsessed.":

https://ossama.is/blog/disparity

The entire post reads like a sorting algo for humans; there is value in his suggestions, but it's ridiculously cutthroat. Not to mention out of touch - suggesting a broke junior fresh out of college drop $4000 on an NVIDIA supercomputer to differentiate themselves from the competition?? I guess it's a reflection of the market.


Add Andrej Karpathy to the ignore list.

> Your daughter has taken a job at Blackstone? My condolences.

LMAO. One of the best articles I've read on financialization; the author is very opinionated but backs it up with evidence. Very hard-hitting.


What does someone who works at Google, on Gemini in particular, have to gain by promoting Claude?

Not being cynical, just curious: isn't there a direct conflict of interest here?


> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."

> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."

- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being forced to output so much more. Are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and essentially manage 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?

- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.

- OpenAI is going for the agent-management market share (Dust, n8n, CrewAI).


> Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.

Because that requires human thought, and it might take a couple more weeks to design and develop. Doing something fast is the mantra, not doing something good.


> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."

This is a weird flex. Organizations have long strived to ship multiple times per day; it’s even one of the main business metrics for “high” performance orgs in DORA.

The fact that the premier “AI” company is barely able to deliver at a rate that is considered “high” instead of “medium” (the line is at shipping once per week) tells me that even at OpenAI writing the code is not the bottleneck.

Organizational inefficiency is as usual the real culprit.


Workers at tech companies are getting paid for this because they are shareholders.

Increased efficiency benefits capital, not labor; it's always good to remember to look at which side you'd prefer to be on.


> Where are the salary bumps to reflect this?

Revenue bumps and ROI bumps both gotta come first. IIRC, there's a struggle with the first one.


I imagine the salary bumps occur when the individuals who have developed these productivity boosting skills apply for jobs at other companies, and either get those jobs or use the offer to negotiate a pay increase with their current employer.

I haven't seen any examples of that.

Over the past few months, mentions of AI in job postings have gone from "Comfortable using AI-assisted programming - Cursor, Windsurf" to "Proficient in agentic development", and even mentions of "Claude Code" in the desired-skills sections. Yet the salary range has remained exactly the same.

Companies are literally expecting junior/mid-level devs to have management skills (for those even hiring juniors). They expect you to come in and perform at the level of a lead architect - not just understand the codebase but also the data and the integrations, build pipelines to ingest the entire company's documentation into your agentic platform of choice, then begin delegating to your subordinates (agents). Does this responsibility shift not warrant an immediate compensation shift?


> apply for jobs at other companies

Ahh, but it's not 2022 anymore; even senior devs are struggling to change companies. The only companies that are hiring are knee-deep in AI wrappers and have no possibility of becoming sustainable.


The only group whose salaries have gone up as a result of LLMs is hardcore AI professionals, i.e. AI researchers.

> Where are the salary bumps to reflect this?

"Let me give all my employees a 2x raise, because productivity is 4x'ed now," said no capitalist ever.


Brilliant take.

Competition nowadays is so intense and fine-grained. Every new innovation or exploration is eventually folded into the existing exploits, especially in monopolistic markets. Pricing models don't change, revenue streams don't either, and the consumer rarely benefits from these optimisation efforts; it all leads to greater profit margins by any means.


It sucks for the ones who just want to play the game as "intended". The min-maxers always ruin it for everyone else. The devs ultimately balance the game around the few percent who min-max, and everyone else just has to deal with it or stop playing. And then they say "don't blame the players, blame the game", but the game is literally being warped because of the players.

Also, often the new meta doesn't even make sense and the changes need to be rolled back. So all that pain and hustle will often have been for nothing, and a lot of players will end up with a bad taste for the game altogether. The damage will have been done, and a rollback can't fix it.

