Hacker News | causal's comments

I've gotta say, it shows. Claude Code has a lot of stupid regressions on a regular basis, shit that the most basic test harness should catch.

I feel like our industry goes through these phases where there's an obvious thought leader that everyone's copying because they are revolutionary.

Like Rails/DHH was one phase, Git/GitHub another.

And right now it's kinda Claude Code. But they're so obviously really bad at development that it feels like an MLM scam.

I'm just describing the feeling I'm getting, perhaps badly. I use Claude, and I recommended it at the company I worked at. But by god, they're bloody awful at development.

It feels like we're at the point where someone else steps in with a rock-solid, dependable competitor, and then everyone forgets Claude Code ever existed.


I use Claude Code because Anthropic requires me to in order to get the generous subscription tokens. But better tools exist. If I was allowed to use Cursor with my Claude sub I would in a heartbeat.

There are plenty of competitors! I've been using Copilot, RovoCLI, Gemini, and there's OpenAI's thing.

These aren't competitors, they're clones; it's a different thing.

CC leads and they follow.


I don't think you've proven what you were hoping to prove.

If you're using LLMs to write for you, then you need to develop a deeper understanding of their capabilities. This is required reading for anyone I have using AI: https://www.theregister.com/2026/02/16/semantic_ablation_ai_...


I'm not an expert in the field, but this reads like someone trying to sound impressive by using big words without providing any solid detail.

Petty? He's accusing them of fraud, and if he's right then yeah we should all be disappointed in Eon's deceptive marketing.

Fully simulating a Drosophila has been a high goal for a very long time, and great claims require great proof, but Eon has been stingy with the details (and no, this blog post does not reveal much beyond chaining together lots of impressive-sounding words).

Imagine the skepticism on HN if someone declared they had invented AGI. This is a claim of similar magnitude.


That's about as polite as you can get, and it's still risky: people get defensive, the output might NOT be from an LLM, etc.

That's the asymmetry of the problem: writing with AI delegates the thinking to the reader, along with all the risk of correcting it.



Damn, then we automated bullshit generation.

I like NanoClaw a lot. I found OpenClaw to be a bloated mess; NanoClaw's implementation is so much tighter.

It's also the first project I've used where Claude Code is the setup and configuration interface. It works really well, and it's fun to add new features on a whim.


Amen, my OpenClaw instance broke last week.

Some update broke the OpenRouter integration and I haven't been able to fix the issue. I took a quick look at the code, hoping to narrow it down, and it's pretty much exactly what you would expect: there are hidden configuration files everywhere, and in general it's just a lot of code for what's effectively a for loop with WhatsApp integration (in my case :)).
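To make the "for loop" jab concrete, here's a minimal sketch (my own caricature, not OpenClaw's actual code) of what a chat-to-LLM bridge like this boils down to: pull messages from a chat channel, forward each one to an LLM endpoint, send the reply back. The endpoint URL and model name are assumptions for illustration.

```python
import json
import urllib.request


def ask_llm(prompt: str, api_key: str) -> str:
    """Hypothetical OpenRouter-style chat-completion call (illustrative only)."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps({
            "model": "anthropic/claude-sonnet-4",  # assumed model name
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def run_bridge(incoming, send_reply, ask):
    """The 'for loop': for each incoming chat message, reply with the LLM's answer."""
    for message in incoming:      # e.g. messages polled from WhatsApp
        send_reply(ask(message))  # e.g. posted back to the same chat
```

Everything else in a real deployment (session state, auth, retries, sandboxing) is plumbing around that loop, which is exactly where the hidden config files come from.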

Not to mention that their security model doesn't match my deployment (a rootless, locked-down Kubernetes container), so every OpenClaw update seemed to introduce some "fix" for a security issue that broke something else, solving a problem I do not have in the first place :)

I've switched to https://github.com/nullclaw/nullclaw instead. Mostly because Zig seems very interesting so if I have to debug any issues with Nullclaw at least I'll be learning something new :)


What workflows do you implement in NanoClaw that wouldn't be straightforward to build in Claude?

"Straightforward" is ambiguous. Replicating NanoClaw would probably only take about a day of work, testing, and refining in Claude Code, but that's a day I didn't have to spend to get NanoClaw.

Yes, but then what do you use NanoClaw for that it's a better fit for than Claude Code?

This is true, but the attack surface on your life is decreased by better security around the entire setup.

But I fundamentally agree that there is just too much overlap between what makes claws useful and what makes them insecure.


Wait - what was the AI tool, and how did it have her face to begin with? If small-town police are doing face-matching searches across national databases, then nobody is safe, because the number of false positives is going to be MASSIVE by the sheer number of people being searched every day.

Pretend the tool is 99.999999% specific. If it searches every face in the USA you're still getting about 3 false positives PER SEARCH.
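The arithmetic above can be sketched in a couple of lines (the ~330 million population figure is my rough assumption):

```python
# Back-of-the-envelope base-rate math for a nationwide face search.
# Assumptions (mine, for illustration): ~330 million faces compared per search,
# and a specificity of 99.999999%, i.e. a false-positive rate of 1e-8.

population = 330_000_000
false_positive_rate = 1e-8  # = 1 - 0.99999999

expected_false_positives = population * false_positive_rate
print(expected_false_positives)  # about 3.3 innocent matches per single search
```

And real face-recognition systems are nowhere near that specificity, so the true count is far worse.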

You will never have a criminal-identification AI tool accurate enough to apply at a national scale.


This x1000. We need to suspend this shared fiction that AI has any agency. Only humans can be responsible. Full stop.

[flagged]


This question doesn't even make sense. Why wouldn't humans still be the ones responsible? Bot account?

Doesn't look like it. I've come across this account a few times now. He engages and makes reasonable comments, except on certain politicized issues where he acts like an indoctrinated zealot.

Respectfully, can you elaborate on why the answer would not be yes? Or am I just misreading your comment?

Yep. I actually prefer seeing imperfect writing; there is signal there that AI would erase.
