Hacker News | AstroBen's comments

Not everyone would starve within a few months. There is more than enough food, and I have faith it would be given out. The starvation we see today, in a world where most people genuinely have a chance to escape it, is nothing like a world in which people can't earn an income.

The government only has as much power as it is given and can defend, and the only way I could see that happening is via automated weapons controlled by a few, which at this point aren't enough to stop everyone. What army is going to purge its own people? Most humans aren't psychopaths.

I think it'd end in a painful transition period of "take care of the people in a just system or we'll destroy your infrastructure".


> The government only has as much power as it is given and can defend, and the only way I could see that happening is via automated weapons controlled by a few, which at this point aren't enough to stop everyone. What army is going to purge its own people? Most humans aren't psychopaths.

I think you're right for the immediate future.

I suspect while we're still employing large numbers of humans to fight wars and to maintain peace on the streets it would be difficult for a government to implement deeply harmful policies without risking a credible revolt.

However, we should remember the military is probably one of the first places human labour will be largely mechanised.

Similarly, maintaining order in the future will probably be less about recruiting human police officers and more about surveillance and data. Although I suppose the good news there is that the US is something of an outlier in resisting this trend.

But regardless, the trend is ultimately the same... If we assume that AI and robotics will reach a point where most humans are unable to find productive work, such that we need UBI, then we should also assume that the need for humans in the military and police will be limited. To put it another way: either UBI isn't needed and this isn't a problem, or it is and this is a problem.

I also don't think democracy would collapse immediately either way, but I'd be pretty confident that in a world where fewer than 10% of people are in employment and 99%+ of the wealth is being created by the government or a handful of companies, it would be extremely hard to avoid corruption over the span of decades. Arguably, increasing wealth concentration in the US is already corrupting democratic processes today, and this can only worsen as AI continues to exacerbate the trend.


It seems inevitable that costs will come down over time. Expensive models today will be cheap models in a few years.

Of course it's what they're going for. If they could do it they'd replace all human labor - unfortunately it's looking like SWE might be the easiest of the bunch.

The weirdest thing to me is how many working SWEs are actively supporting them in the mission.


The day I start freaking out about my job is the day my non-engineer friend turned vibe coder understands how or why the thing the AI wrote works. Or why something doesn't work exactly the way he envisioned, and what it takes to get it there.

If it can replace SWEs, then there's no reason why it can't replace say, a lawyer, or any other job for that matter. If it can't, then SWE is fine. If it can - well, we're all fucked either way.


> If it can replace SWEs, then there's no reason why it can't replace say, a lawyer

SWE is unique in that for part of the job it's possible to set up automated verification for correct output - so you can train a model to be better at it. I don't think that exists in law or even most other work.
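As a toy illustration of that point (a hypothetical sketch, not any lab's actual training pipeline): a code-generation candidate can be scored mechanically by executing tests against it, yielding exactly the kind of reward signal that's hard to construct for legal advice.

```python
# Toy sketch of automated verification for generated code: run the
# candidate plus its tests in a subprocess and turn pass/fail into a
# scalar reward. All names here are invented for illustration.
import os
import subprocess
import sys
import tempfile

def reward_for_candidate(candidate_code: str, test_code: str) -> float:
    """Return 1.0 if the candidate passes its tests, else 0.0."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "candidate.py")
        with open(path, "w") as f:
            f.write(candidate_code + "\n" + test_code + "\n")
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=10)
        return 1.0 if result.returncode == 0 else 0.0

# A "model output" plus a mechanical check of it:
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"
print(reward_for_candidate(good, tests))  # 1.0
print(reward_for_candidate(bad, tests))   # 0.0
```

A score computed this way needs no human in the loop, which is what makes it usable at training scale.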


Enthusiastically supporting them. It’s quite depressing to watch over the last few years. It’s not like they’re being coy about their aim…

Agree. Anthropic in particular have been quite clear about what they are trying to do. Every blog post about every new model leads with coding; every other use case is almost a footnote in their communication.

Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.

Pre-training is not a good term if you are trying to compare it to LLM pre-training. A closer analogue would be the model's architecture and learning algorithms, which have been designed through decades of PhD research; my point is that the differences are still much greater than the similarities.

The difference here is that everyone else in this product category is also sprinting full steam ahead, trying to get as many users as they can.

If they DIDN'T heavily vibe-code it they might fall behind. Short-term speed of implementation might beat out the long-term maintenance and iteration they'd get from quality code.

They're just taking on massive tech debt


> If they DIDN'T heavily vibe-code it they might fall behind

For you and me, sure - sprint as fast as we can using whatever means we can find. But when you have infinite money, hiring a solid team of traditional/acoustic/human devs is a negligible cost in money and time.

Especially if you give those devs enough agency that they can build on the product in interesting and novel ways that the ai isn’t going to suggest.

Everything is becoming slop now, and it almost always shows. I get why when you’re resource constrained. I don’t get why when you’re not.


> Everything is becoming slop now, and it almost always shows. I get why when you’re resource constrained. I don’t get why when you’re not

Every dollar spent is a dollar that shareholders can't have and executives can't hope for in their bonuses


> it doesn't really matter in the end

if you have one of the top models in a disruptive new product category where everyone else is sprinting also, sure..


Code quality never really mattered to users of the software. You can have the most <whatever metric you care about> code and still have zero users, or high frustration among the users you do have.

Code quality only matters to developers, for maintainability. IMO it's a very subjective metric.


It's not subjective at all. It's not art.

Code quality = fewer bugs long term.

Code quality = faster iteration and easier maintenance.

If things are bad enough it becomes borderline impossible to add features.

Users absolutely care about these things.


Okay, but I meant that how you measure it is subjective.

How do you measure code quality?

> Users absolutely care about these things.

No, users care about you adding new features, not about your ability to add new features or how much it costs you to add them.


99.999999% of products can't get away with what Anthropic can. This is a one-in-a-billion disruptive product with minimal competition, and its success so far is mostly due to Claude the model, not the agent harness.

Strange, even

Do you really think developers are going through the hellish pain of dealing with Google and Apple for no reason? Real world users prefer and expect apps as opposed to web versions for many product categories.

Kimi K2.5 (as an example) is an open model with 1T params. I don't see a reason it has to be local for most use cases; the fact that it's open is what's important.

That is just idealism. Being "open" doesn't get you any advantage in the real world. You're not going to meaningfully compete in the new economy using "lesser" models. The economy does not care about principles or ethics. No one is going to build a long-term business that provides actual value on open models. They can try. They can hype. And they can swindle and grift and scalp some profit before they become irrelevant. But it will not last.

Why? Because whatever was built with an open model can be sneezed into existence by a frontier model run via a first-party API, with the best-practice configurations the providers publish in usage guides that no one seems to know exist.

The difference between the best frontier model (gpt-5.4-xhigh or opus 4.6) and the best open model is vast.

But that is only obvious when your use case is actually pushing the frontier.

If you're building a CRUD app, or the modern equivalent of a TODO app, even a lemon can produce that nowadays, so you will assume open has caught up to closed because your use case never required frontier intelligence.


A model with open weights gives you a huge advantage in the real world.

You can run it on your own hardware, with perfectly predictable costs and predictable quality. There is no worrying about how many tokens you use, whether your subscription limits will be hit at the most inconvenient moment and force you to wait until they reset, whether the token price will rise, whether your limits will shrink, or whether your AI provider will quietly swap the model for a worse one.
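The cost-predictability point can be made concrete with back-of-the-envelope arithmetic. Every figure below is an invented assumption, not a real price; the point is that all the inputs are fixed once you own the box.

```python
# Back-of-the-envelope amortized cost of self-hosted inference.
# All numbers are illustrative assumptions, not real prices.

def self_hosted_cost_per_mtok(hw_cost_usd: float, lifetime_years: float,
                              power_kw: float, usd_per_kwh: float,
                              tokens_per_sec: float) -> float:
    """Amortized USD per million tokens on owned hardware, assuming
    the machine runs flat out for its whole lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_mtok = tokens_per_sec * seconds / 1e6
    energy_usd = power_kw * (seconds / 3600) * usd_per_kwh
    return (hw_cost_usd + energy_usd) / total_mtok

# Hypothetical rig: $20k server, 3-year life, 1.5 kW draw,
# $0.15/kWh electricity, 200 tokens/sec throughput.
cost = self_hosted_cost_per_mtok(20_000, 3, 1.5, 0.15, 200)
print(f"${cost:.2f} per million tokens")
```

Unlike API pricing, none of these inputs can be repriced by a third party mid-project.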

Moreover, no matter how good a "frontier model" may be, it can still produce worse results than a lesser model when the programmer driving it does not also have "frontier intelligence". Freed from the constraints of a paid API, you may be able to use an AI coding assistant in much more efficient ways, exactly as when time-shared access to powerful mainframes was replaced by the unconstrained use of personal computers.

When I was very young I lived through the transition from using a mainframe remotely to using my own computer. I certainly do not want to return to that straitjacket style of work.


The vision has been that open and/or small models, while 8-16 months behind, would eventually reach sufficient capability. In this vision we not only get freedom of compute, we also get lower electricity usage. I suspect that long term the frontier mega-models will mainly be used for distillation, as we see from Gemini 3 to Gemma 4.
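As a toy illustration of distillation (a generic sketch, not Google's actual Gemma recipe): the small model is trained to match the large model's temperature-softened output distribution, typically via a KL-divergence term shaped like this.

```python
# Minimal knowledge-distillation loss in pure Python. Real pipelines use
# batched tensors, but the objective has this shape: the student is pushed
# toward the teacher's softened probabilities, not just the hard labels.
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions; zero when the
    student already matches the teacher exactly."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Minimizing this loss over the teacher's outputs is what lets a small model inherit capability without redoing the frontier-scale pre-training.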
