Hacker News | skeeter2020's comments

This would be terrible for McKinsey, as they sell exclusively through executives who then punch all their wisdom down on the plebs.

So it would be great for the rest of mankind.

Have you seen current AI deals? This IS the future, but so much more efficient than requiring OpenAI, NVidia, MS, Amazon, etc. all be involved.

Is this true for you? How often do you get 99% of a complete, valuable thought?

My experience is that it is quite rare. Occasionally high 90's for simple things of low value, 60's or less for things that approximate "thinking". At best it feels like a new search channel that amalgamates data better, and hasn't been thoroughly polluted by ads and SEO - yet.


an LLM making this up would be much closer to AGI than anything else I've seen

that would at least be defensible, but unfortunately it's really just the hype & headline.

>> but also make it available for others in case other people want to use it for the extra features and polish as well.

this feels like the place where your approach breaks down. I have had very poor results trying to build a foundation that CAN be polished, or where features don't quickly feel like a jenga tower. I'm wondering if the success we've seen is because AI is building on top of existing foundations, or whether we're in the early days of "foundational" work? Is anyone aware of studies comparing longer-term structural aspects? Is it too early?


I've been able to make very clear, modular, well put together architectural foundations for my greenfield projects with AI. We don't have studies, of course, so it is only your anecdote versus mine.

even if this were true or someday will be (big IF), is it worth looking for valid counter-workflows? Example: in many parts of the US and Canada the Mennonites are incredibly productive farmers and massive adopters of technology, while also keeping very strict limits on where, how, and when it is used. If we had the same motivations and discipline in software, could we walk a line that both benefited from and controlled AI? I don't know the answer.

Good one, I had not made the connection, but yes. Tech is here to serve, at our pleasure, not to be forcibly consumed.

if it wasn't so maddening it would be funny when you literally have to tell it to slow down, focus and think. My tinfoil hat suggests this is intentional to make me treat it like a real, live junior dev!

"you literally have to tell it to slow down, focus and think" - This soo much! When I get an unexpected result from claude, I ask it why - what caused it to do such-and-such. After one back and forth session like this putting up tons of guardrails on a prompt, claude literally said "you shouldn't have to teach me to think every session" !!

> When I get an unexpected result from claude, I ask it why - what caused it to do such-and-such.

No LLM can answer this question for you; it has no insight into how or why it produced the output it did. The reasons it gives might sound plausible, but they aren't real.


let me translate this for the GP: "you're doing it wrong".

>> Are you completely missing the point of the submission

no, and that's what people are noting: the headline deliberately tries to blow this up into a big deal. When did you last see the HN post about Amazon's mandatory meeting to discuss a human-caused outage, or a post mortem? It's not because they don't happen...


Amazon has had a really bad string of various outages recently. Assuming they're internally treating this as business as usual in post-mortems then perhaps the newsworthy thing is actually that they aren't taking their outages seriously enough.

> the headline deliberately tries to blow this up into a big deal

I do not understand how “company that runs half the internet has had major recent outages and now explicitly names lax/non-existent LLM usage guidelines as a major reason” can possibly not be a big deal in the midst of an industry-wide hype wave over how the world’s biggest companies now run agent teams shipping 150 pull requests an hour.

The chain of events is “AWS has been having a pretty awful time as far as outages go”, and now “result of an operational meeting is that the company will cut down on the use of autonomous AI.” You don’t need CoT-level reasoning to come to the natural conclusion here.

If we could, as a species, collectively, stop measuring the relevance of a piece of news proportionally by how much we like hearing it, please?


The defensiveness is almost as interesting as the meeting itself.

Way too many people have tied their egos to the success of AI.

And too many people have their egos tied to its failure, too.

I'm a massive AI skeptic. If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it'd be me. But I also work at Amazon. I got the email. I attended the meeting. I can personally attest that there are no new requirements for AI-generated code. The articles about this meeting are extremely misleading, if not outright wrong. But instead of believing the person who was actually there in the room, this thread is full of people dismissing my first-hand account of the situation because it doesn't align with the "haha AI failed" viewpoint.


Not just their egos, but their paychecks. This place is either going to get very quiet or really weird when the hype train derails and the AI bubble bursts.

The subject of the media coverage is not AWS, it is a peer organization to AWS that runs using significant amounts of non-AWS infrastructure. They are both part of an umbrella called Amazon but are not at all the same thing.

Maybe your CoT-level reasoning isn’t so robust.


It's hard to take this objection seriously. The publication is literally called the Financial Times. It's not exactly crazy for them to think that their readers might care about the entity that shows up in the stock ticker rather than how the company happens to divide things up internally.

Even if it weren't a finance publication, I have trouble imagining you making this argument if a headline said something like "Google deals with outages in the cloud" because of the idea that it's misleading to refer to it as anything other than GCP. I think you're fundamentally not understanding how people communicate about this sort of thing if you actually think that someone saying "Amazon" is misleading in any meaningful way.


You’re describing reasonable misunderstandings, but they are still misunderstandings.

The cause and effect statements just don’t correspond to reality.

I guess I’m stuck on the idea that the actual facts are relevant. If the question instead is how the dance of optics and PR is going in the minds of people who don’t know enough to doubt what they read, I don’t know what to say about that.


The message and meeting being discussed here have nothing to do with AWS or any outages AWS has faced recently. I think you’re missing the point of the discussion.

I don’t blame you, because this is just bad reporting (and potentially intentionally malicious to make you think it’s about AWS). But the meeting and discussion was with the Amazon retail teams, talking about Amazon retail processes, and Amazon retail services. The teams and processes that handle this are entirely separate from any AWS outages you are thinking of.

The outages that Amazon retail has faced also have nothing to do with AI, and there was no “explicit call out” about AI causing anything.

