
Do you really believe killing 175 children[0] will bring peace and prosperity to the Iranian people?

[0]https://www.nytimes.com/2025/03/01/world/middleeast/girls-sc...


That news piece was officially dismissed after investigation by the IDF and CENTCOM. I would point out that you're making an emotional argument with no substance, one that discounts the decades of complex history in the region.

> after investigation by the IDF and CENTCOM

Neither of those can be considered a reliable source. It's possible that it was an Iranian misfire, but it would be a big coincidence that it happened right as we launched an attack on them, and an even bigger coincidence that someone just happened to take a picture of it and post it on the internet to immediately exonerate the IDF and CENTCOM.


The IDF has burned through all credibility during their assault on Gaza. I do not think the US and Israel waging a war on Iran will result in a positive outcome for the Iranian people or the region. The end result will be chaos, misery, and suffering. The latest news is the US attempting to foment some sort of civil war[0]. I sincerely do not understand how anyone could advocate for this.

[0] https://www.itv.com/news/2026-03-03/united-states-seeking-an...


A falsifiable prediction. Thank you.

175 dead children is already far too much suffering and if you're incapable of understanding that you are operating with a fully broken moral compass.

I think it is a hard problem to discuss clearly, but it is not automatically a deal breaker. What about 175 children vs 30,000 protesters? What about 30,000 protesters a year in perpetuity?

Exactly, a real moral calculus needs to be made, not a hysterical "But the IRGC said 175 children died." And a real moral calculus involves weighing the value of the deaths caused by removing the IRGC against the deaths caused by the IRGC.

My antagonist said I have no moral compass. Of course I care about the death of children. But that doesn't mean I swallow IRGC propaganda wholesale, as they apparently do. The IRGC lies constantly; it has provided no evidence that so many children died, and it hasn't brought forth any evidence to indicate the destruction of the school was caused by western munitions as opposed to a failed launch of their own (which we've seen happen).


US and Israel killed more civilians in war last year than Iran in decades. So by that logic, US and Israeli terrorists must be terminated?

Well, just in the past two months, Iran is thought to have killed more than 30,000 of its own citizens, while the whole civilian death toll in Gaza is about 40k or less over more than two years (out of roughly 70k killed), so I'd say you just made that up.

Demographics: Approximately 70% of the 70k verified fatalities are women and children. International observers, including the OHCHR, have noted that children alone account for roughly 33-44% of the death toll.

Your information is false and out of date.

This has strong vibes of "we investigated ourselves and found no wrongdoing".

The IDF has consistently denied its war crimes in Gaza, while independent reporting from multiple sources has found substantial evidence of them.


> investigation by the IDF and CENTCOM

this has to be bait, right?


Perhaps the original comment, putting forth debunked IRGC propaganda, and presenting it as definitely true, was bait.


The main source in that Wikipedia article is "According to the IRGC." Trusting any belligerent in a war is silly, but given its history, trusting the IRGC during wartime is even sillier. No independent body like the Red Crescent (which is counting casualties in Iran) verified this. It's all "trust me, bro."

USCENTCOM and the IAF both rejected these assertions.

You should demand some evidence for the IRGC's claim. If the claim is that the US or Israel did it, why doesn't the IRGC show the munition used? Or any OSINT data, like where the munition was fired from, its trajectory, etc. The IRGC has been firing from the IRGC base where this school was located. It could just as easily have been a failed IRGC munition.

Also, was this "school" by an IRGC base actually a school, or did it serve a military purpose? Surely you can't know the answer to this, so it's tough for you to judge the military necessity of the strike.

Finally, what's the claim, really? That western powers intentionally struck a school and killed these kids to advance their war aims? Or that it was an accident? If the former, an explanation for "how" is required; and if the latter (and if it did indeed happen) it's the kind of collateral damage that occurs in all wars.


This "debunks" nothing, it's merely a demand for more evidence.

Step 1. OP makes a positive claim, repeating an IRGC narrative.

Step 2. I point out there’s no good evidence supporting it.

Step 3. You reframe that as "you’re just demanding more evidence."

That’s backwards. If someone claims something extraordinary happened, the burden is on them to provide evidence. Showing that the current evidence doesn’t support the claim is a perfectly valid rebuttal.

Otherwise we could do this with anything:

kid: "There’s a ghost in my room." dad: "I don't hear a ghost. I don't see one. There’s no heat, sound, footprints..." kid: "That doesn’t mean there's no ghost. You’re just demanding more evidence.”


>> what's the claim, really? That western powers intentionally struck a school and killed these kids

Israel or US or both struck a school and killed these kids. Nobody knows whether it was intentional or not. And this is not the first time Israel bombed schools or hospitals.

The mental gymnastics done to skew facts are amazing.


nprz wrote: "Do you really believe killing 175 children[0] will bring peace and prosperity to the Iranian people?"

The implication is that someone thought that it would. I am saying nobody in the US or Israel thought bombing a children's school would bring peace to the Iranian people. In fact, both the USAF and IAF deny they hit a school. The IRGC has put forward no evidence to support its claim. Without such evidence, it doesn't make sense to believe it.

Also, you talk about mental gymnastics while defending IRGC propaganda and spewing nonsense like "Israel bombed hospitals." If you're so confident that Israel has bombed hospital buildings, can you tell me which they bombed, when they did this, and any OSINT details like the munition used?



Evidence is clear: the people of Iran do the Trump dance, alongside the Jews, and lay flowers by Israelis with tears of thankfulness.

Iranian civilians love the US and Israel for setting them free.

Stop believing terrorist propaganda.


You're just linking me to lists from highly unreliable sources. I'm a simpleton; make a claim like this: "I think Israel bombed this hospital building on this date using this ordnance. Here's the evidence."

I do not know what to say. Just look at the pictures on the Wikipedia page.

Israel left newborns to rot in hospital beds and shot many children in the head & chest. Everyone, including Israelis, knows this. Evil, evil people.


You are being bigoted (“evil, evil people”), and if you believe what you say, you can just answer my question directly. You won’t, because it hasn’t happened.

No amount of proof will change your position, unfortunately.

Actually, a simple statement you can support would change it: Israel bombed this hospital building on this date using this munition. You can’t meet that simple standard because it never happened.

> CENTCOM

I haven't seen anything to that effect yet. They've just said they wouldn't deliberately target a school, which I believe, but that doesn't mean it wasn't an accident based on faulty, likely outdated intelligence.


Where did you get the 175 children number? Even the article does not say that.

And this is the same Howard Lutnick who just last week was caught blatantly lying about his relationship with Epstein?

[0] https://www.theguardian.com/us-news/2026/jan/30/new-epstein-...


You can basically hand it a design, one that might take an FE engineer anywhere from a day to a week to complete, and Codex/Claude will have it coded up in 30 seconds. It might need some tweaks, but it's 80% complete on that first try. I remember stumbling over graphing and charting libraries; it could take weeks to become familiar with all the different components and APIs, but now you can just tell Codex to use this data with this charting library and it'll make it. All you have to do is look at the code. Things have certainly changed.


It might be 80-95% complete, but the remaining 5-20% is either going to take twice the time or be downright impossible.


This is like Tesla's self-driving: 95% complete very early on, still unsuitable for real life many years later.

Not saying that adding a few novel ideas (perhaps working world models) to the current AI toolbox won't make a breakthrough, but LLMs have their limits.


Why do you think that? Do you think human level intelligence is special somehow? Do you have any facts? Or are you just hoping?


It was the same thing with human-built products though.

https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule

Except that either side of it is immensely cheaper now.


I figure it takes me a week to turn the output of AI into acceptable code. Sure, there is a lot of code in 30 seconds, but it shouldn't pass code review (even the AI's own review).


For now. Claude is worse than we are at programming. But it's improving much faster than I am. Opus 4.6 is incredible compared to previous models.

How long before those lines cross? Intuitively it feels like we have about 2-3 years before Claude is better at writing code than most - or all - humans.


I keep seeing this. The "for now" comments, and how much better it's getting with each model.

I don't see it in practice though.

The fundamental problem hasn't changed: these things are not reasoning. They aren't problem solving.

They're pattern matching. That gives the illusion of usefulness for coding when your problem is very similar to others, but falls apart as soon as you need any sort of depth or novelty.

I haven't seen any research or theories on how to address this fundamental limitation.

The pattern matching thing turns out to be very useful for many classes of problems, such as translating speech to a structured JSON format, or OCR, etc... but isn't particularly useful for reasoning problems like math or coding (non-trivial problems, of course).

I'm pretty excited about the applications for AI overall and its potential to reduce human drudgery across many fields; I just think generating code in response to prompts is a poor choice of an LLM application.


> I don't see it in practice though.

Have you actually tried the latest agentic coding models?

Yesterday I asked Claude to implement a working web-based email client from scratch in Rust which can interact with a JMAP-based mail server. It did. It took about 20 minutes. The first version had a few bugs, like polling for mail instead of streaming emails in. But after prompting it to fix some obvious bugs, I now have a working email client.

It's missing lots of important features; for example, it doesn't render HTML emails correctly, and the UI looks incredibly basic. But it wrote the whole thing in 2.5k lines of Rust from scratch and it works.

This wasn't possible at all a couple of years ago. A couple of years ago I couldn't get ChatGPT to port a single source file from Rust to TypeScript without it running out of context space and introducing subtle bugs in my code. And it was rubbish at Rust; it would introduce borrow checker problems and then get stuck, trying and failing to make the code compile. Now Claude can write a whole web-based email client in Rust from scratch, no worries. I did need to manually point out some bugs in the program; Claude didn't test its email client on its own. There's room for improvement for sure. But the progress is shocking.

I don't know how anyone who's actually pushed these models can claim they haven't improved much. They're light-years ahead of where they were a few years ago. Have you actually tried them?


Honestly, I really did do this for a while, mostly in response to comments like this, with some degree of excitement.

I've been disappointed every time.

I do use the LLMs for summarization and "a better google" and am constantly confronted with how inaccurate they are.

I haven't tried with code in the past couple months because to be completely honest, I just don't care.

I enjoy my craft, I enjoy puzzling and thinking through better ways of doing things, I like being confronted with a tedious task because it pushes me towards finding more optimal approaches.

I haven't seen any research that justifies the use of LLMs for code generation, even in the short term, and plenty that supports my concerns about mid to long term impact on quality and skills.

So the TL;DR version is: nah.


It is certainly already better than most humans, even better than most humans who occasionally code. The bar is already quite high, I'd say. You have to be decent in your niche to outcompete frontier LLM Agents in a meaningful way.


I'm only allowed 4.5 at work where I do this (likely to change soon, but bureaucracy...). Still, the resulting code is not at the level I expect.

I told my boss (not fully serious) that we should ban anyone with less than 5 years' experience from using the AI, so they learn to write and recognize good code.


The key difference here is that humans can progress. They can learn reasoning skills, and can develop novel methods.

The LLM is a stochastic parrot. It will never be anything else unless we develop entirely new theories.


And yet, Claude is improving at programming much faster than I am. Maybe its skill will hit a ceiling at some point, but it hasn't happened yet.


> You can basically hand it a design

And, pray tell, how are people going to come up with such a design?


Honestly, you could just come up with a basic wireframe in any design software (MS Paint would work) and a screenshot of a website with a design you like, tell it "apply the aesthetic from the website in this screenshot to the wireframe", and it would probably get 80% (probably more) of the way there. Something that would have taken me more than a day in the past.


I've been in web design since images were first introduced to browsers and modern designs for the majority of sites are more templated than ever. AI can already generate inspiration, prototypes and designs that go a long way to matching these, then juice them with transitions/animations or whatever else you might want.

The other day I tested an AI by giving it a folder of images, each named to describe the content/use/proportions (e.g., drone-overview-hero-landscape.jpg), told it the site it was redesigning, and it did a very serviceable job that would match at least a cheap designer. On the first run, in a few seconds and with a very basic prompt. Obviously with a different AI, it could understand the image contents and skip that step easily enough.


I have never once seen this actually work in a way that produces a product I would use. People keep claiming these one-shot (or nearly one-shot) successes, but in the meantime I ask it to modify a simple CSS rule and it rewrites the entire file, breaks the site, and then can't seem to figure out what it did wrong.

It's kind of telling that the number of apps on Apple's App Store has been decreasing in recent years. Same thing on the Android store too. Where are the successful insta-apps? I really don't believe it's happening.

https://www.appbrain.com/stats/number-of-android-apps

I've recently tried using all of the popular LLMs to generate DSP code in C++ and they're utterly terrible at it, to the point that the output almost never even makes it through compilation and linking.

Can you show me the library of apps you've launched in the last few years? Surely you've made at least a few million in revenue with the ease with which you are able to launch products.


AI is typically better at working with AI-generated code than human-authored. AI on AI tends to work great.


This, of course, is the problem.

There's a really painful Dunning-Kruger process with LLMs, coupled with brutal confirmation bias that seems to have the industry and many intelligent developers totally hoodwinked.

I went through it too. I'm pretty embarrassed at the AI slop I dumped on my team, thinking the whole time how amazingly productive I was being.

I'm back to writing code by hand now. Of course I use tools to accelerate development, but it's classic stuff like macros and good code completion.

Sure, an LLM can vomit up a form faster than I can type (well, sometimes; the devil is always in the details), but it completely falls apart when trying to do something the least bit interesting or novel.


Absolutely. I also think there's a huge number of wannabe developers who don't have the patience to actually learn development. Those people desperately want this AI development dream to be true so they pretend and convince themselves that it is. They talk about how well it works on internet forums, but you ask for the product and it's crickets. It's all wishful thinking.


The number of non-technical people in my orbit who could successfully pull up Claude Code and one-shot a basic todo app is zero. They couldn’t do it before and won’t be able to now.

They wouldn’t even know where to begin!


You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app" and then put that output into Claude Code, and you now have a todo app.


This is still a stumbling block for a lot of people. Plenty of people could've found an answer to a problem they had if they had just googled it, but they never did. Or they did, but they googled something weird and gave up. AI use is absolutely going to be similar to that.


Step one: you have to know to ask that. Nobody in that orbit knows how to do that. And these aren’t dumb people. They just aren’t devs.


Maybe I’m biased working in insurance software, but I don’t get the feeling much programming happens where the code can be completely stochastically generated, never reviewed, and still be okay with users/customers/governments/etc.

Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.


Insurance is complicated, not frequently discussed online, and all code depends on a ton of domain knowledge and proprietary information.

I'm in a similar domain; the AI is like a very energetic intern. For me to get a good result requires a prompt clear and detailed enough that I could probably write an expression to turn it into code. Even so, after a little back and forth it loses the plot and starts producing gibberish.

But in simpler domains, or ones with lots of examples online (for instance, I had an image recognition problem that looked a lot like a typical machine learning contest), it really can rattle stuff off in seconds that would take a mid-level engineer weeks or months to do, and often be higher quality.

Right in the chat, from a vague prompt.


You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.


Not really. What the FE engineer will produce in a week will be vastly different from what the AI will produce. That's like saying restaurants are dead because it takes a minute to heat up a microwave meal.


It does make the lowest common denominator easier to reach though. By which I mean your local takeaway shop can have a professional looking website for next to nothing, where before they just wouldn't have had one at all.

I think exceptional work, AI tools or not, still takes exceptional people with experience and skill. But I do feel like a certain level of access to technology has been unlocked for people who are smart enough but lack the time to dive into the real industry tools (Figma, code, data tools, etc.).


The local takeaway shop could have had a professional looking website for years with Wix, Squarespace, etc. There are restaurant specific solutions as well. Any of these would be better than vibe coding for a non-tech person. No-code has existed for years and there hasn't been a flood of bespoke software coming from end users. I find it hard to believe that vibe-coding is easier or more intuitive than GUI tooling designed for non-experts...

I think the idea that LLMs will usher in some new era where everyone and their mom is building software is a fantasy.


I more or less agree, specifically on the angle that no-code has existed, yet non-technical people still aren't executing on technical products. But I don't think vibe-coding is where we'll see this happening; it will be in chat interfaces or GUIs, as the "scaffolding" or "harnesses" mature and someone can just type what they want, then get a deployed product within the day after some back and forth.

I am usually a bit of an AI skeptic but I can already see that this is within the realm of possibility, even if models stopped improving today. I think we underestimate how technical things like WIX or Squarespace are, to a non-technical person, but many are skilled business people who could probably work with an LLM agent to get a simple product together.

People keep saying code was never the real skill of an engineer, but rather solving business logic issues and codifying them. Well people running a business can probably do that too, and it would be interesting to see them work with an LLM to produce a product.


> I think we underestimate how technical things like WIX or Squarespace are, to a non-technical person, but many are skilled business people who could probably work with an LLM agent to get a simple product together.

In the same vein, I think you underestimate how much "hidden" technical knowledge must be there to actually build software that works most of the time (not asking for a bug-free program). To design such a program with current LLM coding agents you need to be at the very least a power user, probably a very powerful one, in the domain of the program you want to build and also in the domain of general software. Maybe things will improve with LLMs and agents, and "make it work" will be enough for the agent to create tests, try the program extensively, find and squash bugs, and do all the extra work needed; who knows. But we are definitely not there today.


Yeah I've thought for a while that the ideal interface for non-tech users would be these no-code tools but with an AI interface. Kinda dumb to generate code that they can't make sense of, with no guard rails etc.


Wouldn’t we have more restaurants if there were no microwave ovens? But the microwave oven also gave rise to the frozen food industry. Overall, more industrialization.


There were some good and some pretty terrible FE devs though, and it's not clear which ones prevailed.


The last 20% is usually what takes 80% of the time

I think there is a more existential fear that is left unaddressed.

Most commenters in this thread seem to be under the impression that where the agents are right now is where they will be for a while, but will they? And for how long?

$660 billion is expected to be spent on AI infrastructure this year. If the AI agents are already pretty good, what will the models trained in these facilities be capable of?


When the VC money runs out, the AI will have to get twice as good in order to make the price work out to be the same. Or they'll keep the price and enshittify the results.



There is research[0] currently being done on how to divide tasks among LLMs and combine their answers. This approach allows LLMs to reach outcomes (solving a problem that requires 1 million steps) which would be impossible otherwise.

[0]https://arxiv.org/abs/2511.09030


All they did was prompt an LLM over and over again to execute one iteration of a Tower of Hanoi algorithm. Literally just using it as a glorified scripting language:

```

Rules:
- Only one disk can be moved at a time.
- Only the top disk from any stack can be moved.
- A larger disk may not be placed on top of a smaller disk.

For all moves, follow the standard Tower of Hanoi procedure: If the previous move did not move disk 1, move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0). If the previous move did move disk 1, make the only legal move that does not involve moving disk 1.

Use these clear steps to find the next move given the previous move and current state.

Previous move: {previous_move}
Current State: {current_state}

Based on the previous move and current state, find the single next move that follows the procedure and the resulting next state.

```

This is buried down in the appendix, while the main paper is full of agentic swarms this and millions of agents that, plus plenty of fancy math symbols and graphs. Maybe there is more to it, but the fact that they decided to publish with such a trivial task, which could be much more easily accomplished by having an LLM write a simple Python script, is concerning.
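For what it's worth, that "simple Python script" really is simple. Here's a minimal sketch of the procedure quoted above (the function and variable names are my own, not from the paper):

```
def next_move(pegs, prev_moved):
    """One step of the iterative Tower of Hanoi procedure:
    if the previous move didn't move disk 1, move disk 1 clockwise
    (peg 0 -> 1 -> 2 -> 0); otherwise make the only legal move
    that doesn't involve disk 1."""
    if prev_moved != 1:
        # Disk 1 is the smallest, so it's always on top of some peg.
        src = next(i for i in range(3) if pegs[i] and pegs[i][-1] == 1)
        dst = (src + 1) % 3  # clockwise
    else:
        # The two pegs not topped by disk 1: move the smaller top
        # disk onto the other peg (or onto the empty one).
        a, b = (i for i in range(3) if not pegs[i] or pegs[i][-1] != 1)
        if not pegs[a]:
            src, dst = b, a
        elif not pegs[b] or pegs[a][-1] < pegs[b][-1]:
            src, dst = a, b
        else:
            src, dst = b, a
    disk = pegs[src].pop()
    pegs[dst].append(disk)
    return disk, src, dst

# Solve a 5-disk tower starting on peg 0 (bottom disk listed first).
pegs = [[5, 4, 3, 2, 1], [], []]
moved, steps = None, 0
while not any(len(p) == 5 for p in pegs[1:]):
    moved, src, dst = next_move(pegs, moved)
    steps += 1
    print(f"step {steps}: move disk {moved} from peg {src} to peg {dst}")
# Finishes in the optimal 2^5 - 1 = 31 steps.
```

Each call to next_move here is what the paper spends an entire LLM invocation on.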


Good lord, I can only imagine the wasted electricity.


No offense to the academic profession, but they're not a good source of advice for best practices in commercial software development. They don't have the experience or the knowledge sufficient to understand my workplace and tasks. Their skill set and job are orthogonal to the corporate world.


Yes, the problem solved in the paper (Tower of Hanoi) is far more easily defined than 99% of actual problems you would find in commercial software development. Still, it's proof of "theoretically possible" and seems like an interesting area of research.


I was just reading about Steve Yegge's Gas Town[0]; it sounds like agent orchestration is now integrated into Claude Code?

[0]https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...


Casual, informal, friendly, hip, young, etc.

It can make sense on Twitter to convey personality, but an entire blog post written in lowercase is a bit much.


I used to not capitalize "I" in my own writing because it seemed a bit silly to do that, though some years later, making it visually distinct seems okay to me now.

At the same time, in my language (Latvian), you/yours should also be capitalized in polite text correspondence, like formal letters and such. Odd.


> but they’re much more common than enthusiastic internet commenters would suggest

How do you know this?


Lots of remote opportunities at both https://www.workatastartup.com/ and https://angel.co/. I imagine a mid-to-senior dev would be able to find work at a company listed there fairly quickly.


These are the same kind of listings you can find on LinkedIn or many other job sites, so they don't really answer the question of where to find especially available jobs.

