Alternatively[1], for those of us who have enough clutter: Buying it digitally means you've paid for it. The author gets their cut, and you can now seek out unencumbered formats that best serve your usage with a clear conscience.
$$$, one of the classic bad-faith motives. Most of tech nowadays is subsidized by advertising and profiling to some degree, often quite a large degree.
Sooner or later, yes. What stops it, other than layers of imperfect process? And it's the perfect vector for exploiting anyone who doesn't review and understand the generated code before running it locally.
But unit and integration tests generally only catch the things you can think of. That leaves a lot of unexplored space in which things can go wrong.
Separately, but related: if you offload both writing the tests and writing the code, how does anybody know what they have, other than green tests and coverage numbers?
I have been seeing this problem building over the last year: LLM-generated logic being tested by massive LLM-generated tests.
Everyone just goes overboard with the tests, since you can easily tell the LLM to expand the suite. So you end up with a massive test suite that looks very thorough and is less likely to be scrutinized.
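To make the "green tests and coverage numbers" point concrete, here's a hypothetical sketch (the function and test are invented for illustration): a latent bug plus a test that exercises every line, so coverage reads 100% while the bug survives.

```python
# Hypothetical example: 100% line coverage, yet a latent bug survives.

def average(values):
    # Bug: crashes on an empty list -- an input nobody thought to test.
    return sum(values) / len(values)

def test_average():
    # This test touches every line, so coverage reports 100%...
    assert average([2, 4, 6]) == 4

test_average()
# ...but average([]) still raises ZeroDivisionError.
```

A dashboard showing green tests and full coverage says nothing about the inputs the suite never generates, which is exactly the unexplored space where things go wrong.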
I'm burning through pretty fast with context sizes of only 32-64kb. I regularly clear when I change topics.
A simple "how do I do x" question used 2% of my budget.
I paid extra and chewed through $5 in a few minutes of analyzing segments of log files.
At this rate it's not worth the trouble of carefully managing usage to avoid ambiguous limits that disrupt my work.
If that's the way it is in order for them to make money, that's fine - but I need a usable tool that I don't have to micromanage. This product is not worth it ($, time) to me at this rate.
I hope it changes because when it works it's a great addition to my tools.
I just fixed this bug in a summarizer: reasoning tokens were consuming the token budget I gave it (1k), so it returned only a blank response. (Qwen3.5-35B-A3B)
Most inference engines return the reasoning tokens, though. Wouldn't you have seen that reasoning_content (or whatever your engine calls it) was filled while content wasn't?
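A minimal sketch of that failure mode, assuming an OpenAI-style chat response where the engine splits thinking into a separate reasoning_content field (field name and response shape vary by engine; this dict is a mock, not a real API call). With a small max_tokens budget, reasoning can exhaust it before any answer text is emitted, leaving content empty:

```python
# Mock of an OpenAI-style response; real field names vary by inference engine.
response = {
    "choices": [{
        "message": {
            "reasoning_content": "Let me think about how to summarize this...",
            "content": "",  # budget exhausted before the actual answer started
        },
        "finish_reason": "length",  # hit max_tokens mid-reasoning
    }]
}

msg = response["choices"][0]["message"]
if not msg["content"] and msg.get("reasoning_content"):
    # The symptom from the bug above: reasoning ate the whole token budget.
    print("empty answer: raise max_tokens or disable thinking mode")
```

Checking finish_reason == "length" together with a filled reasoning_content and empty content is a quick way to distinguish "budget ran out while thinking" from a genuinely empty model reply.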