skeledrew's comments

There won't even be a quality conversation if a thing isn't built in the first place, which is the tendency when the going is slow and hard. AI makes the highly improbable very probable.

The content of the article seems OK, but I'm still going to generally disagree with it. Maybe my take is just confirming the thesis, but I don't find the larger things I do with AI to be borifying (yes I just invented that word).

I've been ramping my use since the start of this month, and have already made serious progress on a number of projects, many of which have been on ice for years, and have also built out a stable of supporting tools. And I've found it generally exhilarating; making good progress where there was none before is pretty heady.

One of the more memorable experiences I've had was with a problem I'd dealt with for years and thought hard about, which in the end Claude resolved cleanly with about 1/3 of a screen of code (I would share the link, but there's no "share" in the mobile app):

===============
I often use [snoop](https://github.com/alexmojaki/snoop) because I like the features and output formatting, but it doesn't support async. I have found [ASnooper](https://github.com/LL-Ling/ASnooper), but it only supports async. I'd like to add ASnooper's async feature to snoop.

I'm thinking of creating a project that imports both, and patches the relevant parts of snoop with the async functionality from ASnooper. I have some notes from looking into the snoop code:

- A `ConfiguredTracer` object is exposed in `builtins` (configuration.py, line 142).
- Whenever the `ConfiguredTracer` object is called with an async function, it raises (tracer.py, line 166).
- The trace event is passed to a formatter, and then a writer (tracer.py, lines 280-281).

... and in the ASnooper code:

- An `ASnooper` object is exposed (__init__.py).
- Whenever the `ASnooper` object is called on an async function, it creates formatter and writer objects (core.py, lines 69-75).

The approach I'm considering is to wrap `ConfiguredTracer.__call__` with a method that delegates async function tracing to `ASnooper`, and replace the `OutputFormatter` and `OutputWriter` classes with versions that delegate to `snoop`'s implementation. I've prepared a repo for the solution:

"""
.
./README.md
./pyproject.toml
./tests
./tests/__init__.py
./src
./src/async_patched_snoop
./src/async_patched_snoop/__init__.py
"""

I've also extracted excerpts from both projects, as unified diff contexts (attached). Any issues, questions or suggestions? Or a better approach that doesn't involve forking the targeted projects?
===============

I prompted Claude with this along with a file containing the diffs (constructed with the help of another prompted tool). Claude informed me of the issues in my approach and suggested a couple of alternate solutions, I selected the one I preferred, Claude asked for some more context and then did the implementation, and now I have async support in snoop. It probably took me over an hour to construct that initial prompt, and under 5 minutes of conversation to get to a completed solution. I really don't think that's boring.
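For the curious, the gist of the approach I'd sketched in that prompt was a wrapper roughly like the following. This is only a rough sketch: the import paths and names come from my notes and may not match the actual snoop/ASnooper internals, and the solution Claude actually implemented took a different route.

    # Rough, untested sketch of the monkey-patch I had in mind. Import paths
    # and attribute names are assumptions from my notes, not verified API.
    import inspect

    from snoop.configuration import ConfiguredTracer  # assumed location
    from ASnooper import ASnooper                      # assumed location

    _original_call = ConfiguredTracer.__call__

    def _patched_call(self, function, *args, **kwargs):
        # Delegate async functions to ASnooper instead of letting snoop raise.
        if inspect.iscoroutinefunction(function):
            return ASnooper()(function)
        return _original_call(self, function, *args, **kwargs)

    ConfiguredTracer.__call__ = _patched_call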


They also wouldn't be getting any funding for doing such fun demos, even if they wanted to.

Not quite though. You can install Claude's apps wherever they're supported, and maybe even fiddle with the source code (I'm unsure). And you can use any other coding apps that you want. The only real restriction is how those apps are allowed to connect to the providers' services, which are running on their servers, etc. There's a movement from "my local domain" to "their remote domain", and they're allowed to have full control of theirs, just as you - I think - would prefer to have full control of yours.

> OpenAI started nudging OpenClaw to burn even more tokens on Anthropic

Not possible: OpenClaw is run by a foundation, and is open source, which means OpenAI has no leverage to do such a thing.


Because open source has always been completely independent of unrelated corporate entities who employ people to work on it?

Because anyone can actually check the code, which means if there's any funny business, someone will come across it eventually and blow it open.

There probably wouldn’t be anything funny-looking – it might look like a genuine mistake in implementation that burns 2× or 3× tokens somehow (which, considering OpenClaw is vibe coded in the purest sense of this term, would blend right in).

Regardless, such things would eventually be found. Just as OpenClaw was tasked with finding and improving science repos (though unwelcome), it could - and very likely will - be tasked with improving its own codebase.

The bug that was causing the crazy token burn was added on Feb 15. It was claimed to have been fixed on Feb 19 (see https://github.com/openclaw/openclaw/pull/20597 ) but it's unclear to me whether that fix has been rolled out yet or if it completely solved the problem. (see https://github.com/openclaw/openclaw/issues/21785 )

TLDR: the commit broke caching so the entire conversation history was being treated as new input on each call instead of most of the conversation being cached.
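To put rough numbers on why that hurts (all figures below are made up for illustration; they are not Anthropic's actual prices or cache discount):

    # Back-of-the-envelope cost of losing prompt caching. All numbers are
    # hypothetical, chosen only to show the shape of the problem.
    PRICE_PER_INPUT_TOKEN = 3 / 1_000_000  # hypothetical $/token
    CACHED_READ_DISCOUNT = 0.1             # hypothetical: cached reads cost 10%

    history_tokens = 50_000  # accumulated conversation so far
    new_tokens = 1_000       # genuinely new input this turn

    with_cache = (history_tokens * PRICE_PER_INPUT_TOKEN * CACHED_READ_DISCOUNT
                  + new_tokens * PRICE_PER_INPUT_TOKEN)
    without_cache = (history_tokens + new_tokens) * PRICE_PER_INPUT_TOKEN

    print(f"with caching:   ${with_cache:.4f} per call")
    print(f"caching broken: ${without_cache:.4f} per call")
    print(f"multiplier:     {without_cache / with_cache:.1f}x")

With a long conversation, nearly every input token gets re-billed at the full rate on every call, so the burn rate climbs fast.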


You don't control what happens when a request hits their endpoint though.

> it shouldn't cost them more money

As things are currently, better models mean bigger models that take more storage+RAM+CPU, or just spend more time processing a request. All this translates to higher costs, and may be mitigated by particular configs triggered by knowledge that a given client, providing particular guarantees, is on the other side.
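Some napkin math on the "bigger models cost more" point (parameter counts are illustrative, not any provider's actual figures):

    # Memory needed just to hold model weights, ignoring KV cache, activations
    # and batching. Parameter counts below are illustrative only.
    def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
        """fp16/bf16 weights: 2 bytes per parameter."""
        return params_billion * 1e9 * bytes_per_param / 1e9

    for name, params_b in [("small", 8), ("medium", 70), ("large", 400)]:
        print(f"{name:>6}: ~{weight_memory_gb(params_b):.0f} GB of weights")

A bigger model needs more accelerators just to hold the weights, before you even get to serving throughput.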


That’s kind of the point. Even if users can choose which model to use (and apparently the default is the largest one), they could still say (for roughly the same cost): your Opus quota is X, your Haiku quota is Y, go ham. We’ll throttle you when you hit the limit.

But they don't want the subscription to be quota'd like that. The API already does that automatically, since different models use different amounts of tokens when generating responses and the billing is per token. It quite literally has the user account for the actual costs of their usage, which is exactly what said users are trying to avoid; they want the service on their own terms, and get upset when they can't have it.

I just use NewPipe all the way.

It hadn't worked for me for a long time, though I did notice an update recently, so maybe it's good again. I like it better than Grayjay.

> All I care about is the access to the models through the client that I was already using!

But that's not a product that they're offering. That ability was an undesired side effect (from their business perspective) that they're now rectifying.


> But that's not a product that they're offering

Of course it was.

  - It was possible to do it.
  - OpenCode did not break any security protocol in order to integrate with them. 
  - OAuth is *precisely* a system to let third-party applications use their resources.
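To be concrete about that last point: the entire shape of the authorization-code flow is a third-party client trading a user-granted code for a token. A minimal sketch (every URL, client ID and field value here is hypothetical, not Anthropic's actual endpoints):

    # Minimal sketch of an OAuth 2.0 authorization-code exchange performed by
    # a third-party client. Endpoint and identifiers are hypothetical.
    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical endpoint

    def exchange_code_for_token(code: str, client_id: str, redirect_uri: str) -> str:
        # The user already approved the client and was redirected back with `code`.
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "redirect_uri": redirect_uri,
        })
        resp.raise_for_status()
        return resp.json()["access_token"]  # the token the client then uses

The protocol exists precisely so that a client the provider didn't write can obtain and use an access token with the user's consent.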

It's not what they wanted, but it's not my problem. The fact that I was a customer does not mean that I need to be protective of their profits.

> (from their business perspective)

So what?!

Basically, they set up a strategy they thought was going to work in their favor (offer a subsidized service to try to lock in customers), someone else found a way to turn things around, and you believe that we should be okay with this?!

Honestly, I do not understand why so many people here think it is fine to let these huge corporations run the same exploitation playbook over and over again. Basically they set up a mouse trap full of cheese and now that the mice found a way to enjoy the cheese without getting their necks broken, they are crying about it?


> Of course it was.

You'd have to point me to an authoritative source on that (explicitly saying users are allowed to use their models via private APIs in apps of the user's choosing). If something isn't explicitly provided in the contract, then it can be changed at any point in any way without notice.

Honestly, I'm not big on capitalism in general, but I don't understand why people should expect companies to provide things exactly the way they want at exactly the prices they would like to be charged (if at all). That's just not how the world/system works, or should, especially given there are so many alternatives available. If one doesn't like what's happening with some service, then let the wallet do the talking and move to another. Emigration is a far more effective message than complaining.


> I don't understand why people should expect companies to provide things exactly the way they want at exactly the prices they would like to be charged

This is a gross misrepresentation of my argument.

I wouldn't be complaining at all if they went up and said "sorry, we are not going to subsidize anyone anymore, so the prices are going up", and I wouldn't be complaining if they came up and said "sorry, using a third party client incurs an extra cost on our side, so if you want to use that you'd have to pay extra".

What I am against is the anti-competitive practice of price discrimination and the tie-in sale of a service. If they are going to play this game, then they better be ready for the case the strategy backfires. Otherwise it's just a game of "heads I win, tails you lose" where they always get to make up the rules.

> Emigration is a far more effective message than complaining.

Why not both? I cancelled my Pro subscription today. I will stick with just Ollama cloud.


It's not a tie-in. They give users 2 choices: a) use their service via their public API, with the client(s) of their choice, at the regular price point; b) use the apps they provide, which use a private API, at a discounted price point. The apps are technically negative value for them from a purely upfront cost perspective, as their use triggers these discounts and the apps themselves are free.

Good on you re that cancel. May you find greener grass elsewhere.


> They give users 2 choices: a) use their service via their public API, with the client(s) of their choice, at the regular price point; b) use the apps they provide, which use a private API, at a discounted price point.

There was a third choice, which was better than both of the ones presented: use any other client that can talk with our API, at whatever usage rate they deemed acceptable. If the "private API" was accessible via OAuth, then it's hardly "private".

We can argue all day; when I signed up, there was nothing saying that access was exclusively via the tools they provided. They changed the rules not because it was costing them more (or even if it does, they are losing money on Pro customers anyway, so arguing about that is silly) but because they opened themselves up to some valid and fair competition.


There was no third choice if they didn't explicitly state that there was.

> If the "private API" was accessible via OAuth, then it's hardly "private".

If you invite people onto your porch for a party, and someone finds that you left the house key under the mat while you went off to restock, then it's hardly "private". It's perfectly fine for whoever feels like it to take the party indoors without your permission. That's pretty much what you're saying, reframed, but I seriously doubt you'd agree to random people entering parts of your premises to which you didn't explicitly invite them.


Try not making it sound like the company is doing me a favor by letting me access the thing I was paying for. I wasn't "invited to a party", I was sold on an agreement that by paying a guaranteed monthly fee I could have access to the model at a rate that was lower than the pay-as-you-go rate from the API.

The primary offering is access to the models. That's what the subscription is about. They can try as hard as they want to market it as Claude being the product and access to the model being an ancillary service, but to me this is just marketing bs. No one is signing up for Claude because their website is nicer, or because of Claude Code.


> I was sold on an agreement that by paying a guaranteed monthly fee I could have access to the model at a rate that was lower than the pay-as-you-go rate from the API

Yes, that agreement is there, with the condition that their app is used. That's option B. And I'd think it fairly obvious that if one has to go to extraordinary lengths to gain access, like finding a key under a mat, or needing to log in with an official client to gain access to a token for an unofficial client, then - implicitly - it's highly unlikely that that method of access is part of the agreement. And Anthropic has now made it explicitly clear that no, that access method is not part of the agreement.


> that agreement is there, with the condition that their app is used.

And setting this condition is what constitutes a tie-in sale.

> if one has to go to extraordinary lengths to gain access

BS! Sorry, there is nothing extraordinary about using an undocumented API.


Nope, there's no tie-in sale[0] as you do not pay for the apps. And particularly, there's no real competition angle[1] as the market is loaded with LLM service providers, not to mention downloadable options.

There's a reason why those particular APIs aren't documented in this case: they aren't intended for public use. And they've made that crystal clear, so all you have to do now is take your wallet somewhere that offers the access you desire. You have no case here.

[0] https://www.dictionary.com/browse/tie-in

[1] https://www.ftc.gov/advice-guidance/competition-guidance/gui...


> as the market is loaded with LLM service providers

The LLMs are not commodities. The programs that interface with them are.

> they aren't intended for public use.

It was available at first; it made it possible for people to use the model without having to use their specific CLI tool. It's a bait-and-switch.

> You have no case here.

I don't need to have a legal case here to keep thinking it's a morally disgusting practice. What I don't understand is: why do you keep defending it? Is there something in it for you, or are you just trying to rationalize your way into acceptance of their terms?


They're commodities to an appreciable extent. They all do generally the same thing, with the differing factor being output quality.

People can still use their model without using their CLI. Use the API that they've provided for such. They didn't break the agreement that they made; they clarified the terms of their existing agreement.

There's nothing morally disgusting here. They're providing a service that they've poured a lot of effort into, in a way that's (hopefully) sustainable while being valuable to users. There's significant cost involved, which must be footed by those who value and use the service. They found a way to offer a discount for some of that cost, providing even greater value, but it has a condition which is possibly directly connected to their ability to provide that discount. And you want to benefit from that discount and avoid that condition.

I have no horse in this race; heck, I wish they could offer it all completely free. But the reality is that there's ongoing cost to them in research, hardware, electricity, etc. that has to be paid. And unlike many other large companies out there, they're providing something seriously valuable (you wouldn't be complaining so passionately if it wasn't), and they haven't enshittified it (unlike what the other large player is increasingly doing, but that's actually also understandable to a point).

What I see here is you - as in all who want the discount without the condition - acting in a way that, if allowed, will very likely lead to the detriment of the service, which I definitely don't want to happen as that'll leave the market worse off. If you like the value so much that you find it next to impossible to stay away, then you should be happily following their agreement to the letter, and lean toward paying the full amount to help ensure their continued sustainability. It's well worth it.


> Tie-in sales between software and services

Look at it this way: the service that you're accessing is really a (primarily desired) side-effect of the software. So re subscriptions, what they're actually providing are the apps (web, desktop, etc), and the apps use their service to aid the fulfillment of their functionality. Those wanting direct access to the internal service can get an API key for that purpose. That's just how their product offering is structured.

