Hacker News | GrinningFool's comments

Before LLMs we had code generators and automation that eliminated a lot of time- and resource-consuming tasks. I think the point still holds.

> Meanwhile although there's no monster trucks on the White House lawn yet,

Not /currently/... https://www.nbcnews.com/tech/elon-musk/trump-musk-tesla-whit...

Alternatively[1], for those of us who have enough clutter: Buying it digitally means you've paid for it. The author gets their cut, and you can now seek out unencumbered formats that best serve your usage with a clear conscience.

[1] this is not legal advice...


"The Silence" in Doctor Who touches on similar themes. https://tardis.fandom.com/wiki/Silent#Amnesia_and_hypnotic_a...

The rules say we should default to assuming good faith in comments. But it's hard when I see this comment in 2026.

“A pensar male degli altri si fa peccato ma spesso ci si indovina.” — Giulio Andreotti

(it's a sin to assume bad intent, but you often get it right)

He was a very controversial Italian politician.


What would the bad-faith motive even be?

$$$, one of the classic bad faith motives. Most of tech nowadays is subsidized by advertising and profiling to some degree, often quite a large degree.

Sooner or later, yes. What stops it, other than layers of imperfect process? And it's the perfect vector to exploit anyone who doesn't review and understand the generated code before running it locally.

But unit and integration tests generally only catch the things you can think of. That leaves a lot of unexplored space in which things can go wrong.

Separately, but related: if you offload writing both the tests and the code, how does anybody know what they have other than green tests and coverage numbers?


I have been seeing this problem building over the last year: LLM-generated logic being tested by massive LLM-generated tests.

Everyone just goes overboard with the tests since you can easily just tell the LLM to expand on the suite. So you end up with a massive test suite that looks very thorough and is less likely to be scrutinized.


I'm burning through pretty fast with context sizes of only 32-64kb. I regularly clear when I change topics.

A simple "how do I do x" question used 2% of my budget.

I paid extra and chewed through $5 in a few minutes of analyzing segments of log files.

At this rate it's not worth the trouble of carefully managing usage to avoid ambiguous limits that disrupt my work.

If that's the way it is in order for them to make money, that's fine - but I need a usable tool that I don't have to micromanage. This product is not worth it ($, time) to me at this rate.

I hope it changes because when it works it's a great addition to my tools.


I just fixed this bug in a summarizer. Reasoning tokens were consuming the entire token budget I gave it (1k), so the response came back blank. (Qwen3.5-35B-A3B)


Most inference engines return the reasoning tokens, though; wouldn't you see that the reasoning_content (or whatever your engine calls it) was filled while content wasn't?


Yeah, I had been ignoring the reasoning tokens for the summarize call.
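The failure mode in this thread can be sketched as a small check on the response payload. A minimal sketch, assuming an OpenAI-compatible chat-completion shape; the reasoning field name varies by engine ("reasoning_content" in some, "reasoning" in others), and the `finish_reason == "length"` convention is an assumption, not any specific API's documented behavior:

```python
def diagnose_empty_response(message: dict, finish_reason: str) -> str:
    """Explain why an assistant message came back blank.

    `message` is the assistant message dict from an OpenAI-compatible
    response. The reasoning field name is engine-specific, so a couple
    of common candidates are checked (assumed names, not a standard).
    """
    content = (message.get("content") or "").strip()
    reasoning = ""
    for key in ("reasoning_content", "reasoning"):
        if message.get(key):
            reasoning = message[key]
            break

    if content:
        return "ok"
    if reasoning and finish_reason == "length":
        # The model spent the whole max_tokens budget thinking and was
        # cut off before emitting any visible answer.
        return "budget consumed by reasoning"
    if reasoning:
        return "reasoning only, no final answer"
    return "empty response"


# Hypothetical payload mimicking the bug described above:
msg = {"content": "", "reasoning_content": "First, the log shows..."}
print(diagnose_empty_response(msg, "length"))  # budget consumed by reasoning
```

Logging the reasoning field alongside content (instead of discarding it) makes this case obvious the first time it happens.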


Not just a paycheck. They had access to some or all of your company's internal systems, code, and data for the duration. That's a much bigger threat.


I wonder how achievable this would be, even with a deepfake filter in place?

A single person does remote interviews all day. The person who turns up is just some body to run the scam.

That said, as the saying goes, that's a lot of hard work to avoid working hard.

