Hacker News | bmitc's comments

Basically, nothing has changed except the increase in noise. So all the suits who refuse to understand what software is have yet again decided to make things worse for professionals and for people who actually know what they're doing.

The departments and roles that LLMs most urgently need to be pointed at - business development, contracts, requirements, procurement - are the places least likely to get augmented, because of how technology decisions are made, both structurally and socially.

I've already heard - many times - that the place that needs the LLMs isn't really inside the code. It's the requirements.

History has a ton of examples of a new technology that gets pushed but doesn't displace the culture of the movers and shakers, even though it is more than capable of doing so and probably should.


Can you expand on what you think software is in this context?

Why do you think that the suits refuse to understand what it is?


To the "suits" AI means "efficiency".

To them, efficiency also means "lower costs," and when they talk about "costs" they mean "headcount," which is to say employees.

Put it together and the suits want to reduce headcount using AI.

To them, "clean code" is a scam and a waste of time that doesn't yield quick returns, just a weak excuse for software engineers to justify their roles.


There are not many good resources on Kalman filters. In fact, I have found a single one that I'd consider good. I say this as someone who has spent a lot of time trying to understand Kalman filters from scratch.

Link to that good one?

It was a typo. I meant to say I haven't found a good one yet.

There are also settings available in some offerings and not in others. For example, the Anthropic Claude API supports setting model temperature, but the Claude Agent SDK doesn't.
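For illustration, here's roughly what setting temperature looks like when calling the Messages API directly. This is a sketch, not a live call: it just builds the request body, and the model name is illustrative - check Anthropic's docs for current model IDs and field details.

```python
import json

# Sketch of an Anthropic Messages API request body with temperature set.
# Field names follow the public Messages API; the model ID is illustrative.
body = {
    "model": "claude-3-5-sonnet-20241022",  # illustrative model ID
    "max_tokens": 256,
    "temperature": 0.2,  # lower values = more deterministic sampling
    "messages": [
        {"role": "user", "content": "Summarize this bug report."}
    ],
}

# This is what would be POSTed to /v1/messages (with auth headers).
payload = json.dumps(body)
print("temperature" in payload)
```

The Agent SDK, by contrast, exposes no equivalent knob as far as I can tell, so you're stuck with whatever sampling defaults it uses.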

> This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: A realization that AI coding tools can be a large accelerator but you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time. It’s a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.

I appreciate the balanced takes and also the notion that one can use these AI tools to build software with principled use.

However, what I am still failing to see is concrete evidence that this is all faster and cheaper than a human just learning and doing everything themselves, or with a small team. The cat is out of the bag, so to speak, but I think it's still correct to question these things. I am putting in a _lot_ of work to reach a principled status quo with these tools, and it is still quite unclear whether that's actually an improvement or just a side quest to wrangle tools that everyone else is abusing.


American citizens have willingly given up their freedom and allowed themselves to be captured by corporate control.

I hope to god this comes into vogue in the U.S.

Big f'ing surprise.

The source is linked in this thread. Is that not the source code?

What possible purpose does this man's DNA serve except framing him for crimes later?

Take a look at Anthropic's repo. They auto-close issues after just a few weeks.

I don't think I've seen an issue of theirs that wasn't auto-closed.


Wait, isn’t software engineering a solved problem?


Yes, that’s why they have such great up time. They don’t go down multiple times per day.


Yes


I just plugged in random numbers to look up Claude issues on GitHub, and out of the 20 I checked, only one was closed as fixed. :-(
