Basically, nothing has changed except the increase in noise. So all the suits who refuse to understand what software is have yet again decided to make things worse for professionals and for people who actually know what they're doing.
The departments and roles that LLMs most need to be pointed at - business development, contracts, requirements, procurement - are the places least likely to get augmented, because of how technology decisions are made, structurally and socially.
I've already heard - many times - that the place that needs the LLMs isn't really inside the code. It's the requirements.
History has a ton of examples of a new technology that gets pushed but doesn't displace the culture of the movers and shakers, even though it is more than capable of doing so and probably should.
Efficiency also means, to them, "lower costs", and when they talk about "costs" they mean "headcount", which means employees.
Put it together and the suits want to reduce headcount using AI.
To them, "clean code" is a scam and a waste of time that doesn't yield quick returns - just a weak excuse for software engineers to justify their roles.
There are not many good resources on Kalman filters. In fact, I have found only a single one that I'd consider good, and it was written by someone who spent a lot of time learning Kalman filters from scratch.
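For anyone unfamiliar with what's being discussed, the core of a Kalman filter fits in a few lines. Here's a minimal 1-D sketch (my own illustration, not from any resource the comment refers to; the noise parameters are arbitrary) that estimates a constant value from noisy measurements via the usual predict/update cycle:

```python
def kalman_1d(measurements, process_var=1e-5, meas_var=0.1):
    """Estimate a scalar constant from noisy readings (illustrative sketch)."""
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the state model is "nothing changes", so only
        # the uncertainty grows, by the process noise.
        p += process_var
        # Update: the Kalman gain k blends prediction and measurement
        # in proportion to their variances.
        k = p / (p + meas_var)
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

With readings scattered around 1.0, the estimate converges toward 1.0 while the variance `p` shrinks - which is the whole trick the good resources spend their time building intuition for.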
There are also settings available in some offerings and not in others. For example, the Anthropic Claude API supports setting model temperature, but the Claude Agent SDK doesn't.
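Concretely, on the Messages API the knob is just a `temperature` field in the request body (0 is most deterministic, 1 is the default sampling behavior). A sketch of what that request looks like - the model id and prompt here are illustrative, and per the comment the Agent SDK exposes no equivalent field:

```python
# Sketch of an Anthropic Messages API request body with temperature set.
# Model id and prompt are placeholders, not an endorsement of specific values.
request = {
    "model": "claude-sonnet-4-20250514",  # illustrative model id
    "max_tokens": 256,
    "temperature": 0.2,  # in [0.0, 1.0]; lower = more deterministic output
    "messages": [
        {"role": "user", "content": "Summarize this diff."},
    ],
}
```

This payload is what a `client.messages.create(**request)` call would send; the point is only that the knob exists at this layer and not in the Agent SDK.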
> This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: a realization that AI coding tools can be a large accelerator, but that you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It's not as clickbaity as the extreme takes that get posted all the time. It's a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.
I appreciate the balanced takes and also the notion that one can use these AI tools to build software with principled use.
However, what I am still failing to see is concrete evidence that this is all faster and cheaper than a human just learning and doing everything themselves, or with a small team. The cat is out of the bag, so to speak, but I think it's still correct to question these things. I am putting in a _lot_ of work to reach a principled status quo with these tools, and it is still quite unclear whether that's an actual improvement or just a side quest to wrangle tools that everyone else is abusing.