Hacker News | IanCal's comments

> I don't understand how any human in good faith could look at Iran's government and say they are the evil regime

You seem to be trying to force reality into a “good vs evil” storyline. There does not have to be a good side.


Can you explain the benefits over something like openrouter?

24/7 LLM for $10/month.

Isn't this a bad deal? Or is there an error in my math?

For $40, I'd get 20 tok/s * 2.6M seconds per month = 52M tokens of DeepSeek v3.2 per month if I run it 24/7, which is not realistic for most workloads.

On OpenRouter [1], $40 buys 105M tokens from the same model, which is more than 52M tokens, and I can freely choose when to use them.

[1]: https://openrouter.ai/deepseek/deepseek-v3.2
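The arithmetic in this comment can be checked directly. A quick sketch (prices and rates are the commenter's figures, not authoritative):

```python
# Back-of-envelope check of the comment's numbers.
# 20 tok/s average, run 24/7 for a 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000 s, i.e. ~2.6M
dedicated_rate = 20                          # tok/s (advertised average)
dedicated_tokens = dedicated_rate * SECONDS_PER_MONTH

# Commenter's OpenRouter figure: $40 buys 105M tokens of the same model.
budget = 40
openrouter_tokens_per_dollar = 105_000_000 / 40
openrouter_tokens = openrouter_tokens_per_dollar * budget

print(f"Dedicated, 24/7: {dedicated_tokens:,} tokens")   # ~52M
print(f"OpenRouter:      {int(openrouter_tokens):,} tokens")
```

So even under the unrealistic best case of 100% utilisation, the dedicated plan yields roughly half the tokens per dollar.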


20 tok/s is an average. It can be more, it can be less. If you are running off-peak I'm sure you'd get some crazy number.

That doesn’t matter when you have the average. Even if you are somehow able to get 10,000 tok/s during off-peak times, by virtue of how averages work, you’re still only getting 52M tokens per month (as calculated above).

Why wouldn't developers just do llm arbitrage against openrouter if it is a better deal?

The problem is different. OpenRouter is a router to LLMs. It doesn't solve GPU underutilization.

What I am saying is: if your system lets me pay $x/token and OpenRouter lets me pay $y/token, then if x < y someone could make money just by providing those tokens through the OpenRouter API. That would either drive up demand for your systems, increasing costs, or drive up supply on OpenRouter, decreasing costs. Eventually the costs would converge, no?

For the same reason people don’t do server arbitrage because Hetzner is cheaper than AWS.

These have been AI for longer than most people here have been hearing the term. Neural nets have been AI since before most people here were born.

There’s something called the Gell-Mann amnesia effect, where people spot the errors in coverage of something they know first-hand but then go back to assuming the other stories are all reliable.

I used to love Private Eye and they have done great journalism that’s highly acclaimed, but the only thing they wrote that I really knew about (literally the office I was in) was outrageously wrong and would have been so easy to verify (ask literally anyone in the BBC building we were in to go to that floor, or take a tour or write an email). Can’t read it any more.


Here's Wikipedia's entry on the Gell-Mann Amnesia Effect, because I've found it a very useful concept to know. Despite my media experiences, I still keep falling for it. And I love that we're still referring to it as Gell-Mann Amnesia here:

https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...

In a speech in 2002, Crichton coined the term "Gell-Mann amnesia effect" to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have".


> "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have".

Ahh, yes, the SyneRyder effect.


Everything I've known anything about first hand has been utterly garbled - or was completely made up - when written up in Private Eye.

The results absolutely are interesting - in fact they’re far stronger for the willingness of many to inflict violence than the original description suggested.

> While every obedient participant reliably pressed the shock lever, they regularly neglected or ruined the other steps required to justify the shock.

Procedural violations here include things like asking the question while the person in the other room was still screaming.


> in fact they’re far stronger for the willingness of many to inflict violence than the original description suggested.

In a situation where they know that they are in a controlled lab experiment.


Saying GPT 5.4 is like GPT-2 is wild.

Lol, audibly.

I'm glad AI curmudgeonry on HN has shifted from "it doesn't work, scam, they made the deployed model worse with 0 communication" to something more akin to "why does anyone use mac or windows, nix is peak personal computing"


We’ve been calling neural nets AI for decades.

> 5 years before that, a Big Data algorithm.

The DNN part? Absolutely not.

I don’t know why people feel the need for such revisionism but AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.


> AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.

When I was 13, having just started programming, I picked up a book from a "junk bin" at a book store on Artificial Intelligence. It must have been from the mid-80s if not older.

It had an entire chapter on syllogisms[1] and how to implement a program to spit them out based on user input. As I recall, it basically amounted to some string extraction (assuming the user followed a template) and string concatenation to generate the result. I distinctly recall not being impressed that such a trivial thing was part of a book on AI.

[1]: https://en.wikipedia.org/wiki/Syllogism
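The program the book described was likely little more than this. A guess at what that template-driven "AI" might have looked like (the function and its terms are illustrative, not from the book):

```python
def syllogism(major_term, middle_term, minor_term):
    """Produce a Barbara-form syllogism from three user-supplied terms.

    Pure string substitution and concatenation -- no actual reasoning,
    which is presumably why it seemed so underwhelming for an AI book.
    """
    return (
        f"All {middle_term} are {major_term}. "
        f"All {minor_term} are {middle_term}. "
        f"Therefore, all {minor_term} are {major_term}."
    )

print(syllogism("mortal", "men", "Greeks"))
# All men are mortal. All Greeks are men. Therefore, all Greeks are mortal.
```

That a chapter like this counted as AI in the mid-80s rather supports the point above: the field has always included very basic symbol manipulation.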


Eliza was 1960s.

In the 1990s I remember taking my friend's IRC chat history and running it through a Markov model to generate drivel, which was really entertaining.
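The trick described here is a simple word-level Markov chain: record which words follow which, then take a random walk. A minimal sketch (the sample log text is made up):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain, start, length=20, seed=None):
    """Random-walk the chain to generate drivel in the source's style."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the word only appeared at the end
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

logs = "lol did you see that lol no way did you"  # stand-in for IRC history
chain = build_chain(logs)
print(babble(chain, "did", seed=42))
```

Repeated followers are kept in the list rather than deduplicated, so common transitions are proportionally more likely — which is what makes the output sound eerily like the original author.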


> I don't write code the same as other devs

Most people do; most people don’t have wildly different setups, do they? I’d bet there’s a lot in common between how you write code and how your coworkers do.


I bet there's a lot more consistency now that AI can factor in how things are being done and be guided on top of that too.

The benefit of digital things is that they can be copied much more cheaply than physical things. There’s perhaps migrations and upkeep though.

On the technical side, perhaps the shared nature of this helps - if you can have something replicated so that you and several other members are all running replicas, there’s a level of redundancy built in.

On the non technical side, take some photos and print them on good paper. Print out stories on paper.

That doesn’t cover video and perhaps other things but it’s simple and does actually work for lots and lots of stories and pictures. It’s also immediately doable right now without anything new.


You could write an API, and then document it, and then maybe add useful prompts?

Then you’d need a way of passing all that info on to a model, so something top level.

It’d be useful to do things in the same way as others (so if everyone is adding OpenAPI/Swagger you’d do the same if you didn’t have a reason not to).

And then you’ve just reinvented something like MCP.

It’s just a standardised format.
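The steps above converge on something like a machine-readable tool manifest. A sketch of the kind of hand-rolled format you'd invent along the way (field names here are made up for illustration, not the actual MCP schema):

```python
import json

# A hypothetical hand-rolled tool description: an API, its docs,
# and some suggested prompts, bundled so a model client can consume it.
manifest = {
    "name": "weather",
    "description": "Look up the current weather for a city.",
    "endpoint": "https://example.com/api/weather",  # placeholder URL
    "parameters": {
        "city": {"type": "string", "required": True},
    },
    "prompts": [
        "Use the weather tool when the user asks about current conditions.",
    ],
}

# Serialise it so it can be passed to a model at the top level.
print(json.dumps(manifest, indent=2))
```

Once everyone independently writes roughly this shape, agreeing on one shared schema is the whole value — which is the comment's point about MCP being "just a standardised format".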

