
Not to stretch the metaphor too far, but those workbenches require understanding (and hammers) to set up.

Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?


> Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

The same way any open-source infrastructure finds widespread use, I’d say. If you’re willing to put in the elbow grease, you can probably set it up yourself (maybe even with the help of one of the frontier, uh, hammers, in its free tier). Or there might be services that act as middlemen to make it all more convenient and cheaper. But the difference is that if Service X pisses you off, then there will be Services Y, Z, A, and B who sell the same service using the same open-source infrastructure, so you always have a choice.

If you don’t like GitHub, try Gitlab, Codeberg, Gitea, and so forth. Or Bitbucket or Azure DevOps. (Don’t actually, though.)


> the idea that this category of people will never be able to improve into the first category of people

The fundamental difference between the categories is that the first is filled with people who put the effort into learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Changing from the second category to the first is something that would require already being in the first.


> The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Exactly! That’s my entire point. Because now you’re separating the categories by “is willing to put in effort” and “is not willing to put in effort” rather than by “has done the thing” and “hasn’t done the thing”.

I think the disagreement doesn't lie in this concept, but rather in whether an LLM can be used by someone who's willing to put in effort, to assist them rather than just doing the work for them. But as long as you understand what the thing you're using is for, you don't have to understand exactly how it works. You can shift gears in a car without a physics degree.


> I think the disagreement doesn’t lie in this concept, but rather in whether an LLM can be used by someone who’s willing to put in effort to assist them in doing so, rather than just having it do it for them

No, you misunderstood here. People aren't saying "it is harder to learn in the future", the issue is "it will be harder to make sure that someone will learn in the future".

Currently you need an engineering degree and experience to do engineering work. But if, in the future, a lot of people get their degree and experience just by calling an LLM for every problem, those engineers will not understand what they are doing at all. Previously, someone with that experience would have solved a lot of problems manually on the job, and that is what made them an expert. The same person solving those problems by calling an LLM and pasting in the answer will be just as ignorant as someone with no experience.

Most such people today didn't want to learn to be engineers out of curiosity; they just wanted a job. In the future, all such people will use LLMs and never learn. They make up the bulk of our workforce, so it is a scary prospect that we will no longer be able to force them to learn things properly, since an LLM lets them do basic tasks without learning.

If you argue there are plenty of people who learn for fun, you would be wrong. Extremely few people learn enough in their own time to contribute meaningfully to, for example, mathematics; it isn't enough to matter. People learn those fundamentals primarily because they are forced to for a degree they need for a job. If they weren't forced to learn and pass tests, they would happily go do the job without any knowledge or skills.


How do you know you have?

I have been using it to learn Chinese along with other standard resources. My reading comprehension has improved a lot after I started to use LLMs to understand sentence structures and grammar.

Actually, I think this is a case where LLMs _can_ be useful. If we're prompting for small enough outputs, for example around things we can already sort of reason about, we're able to judge whether or not what's presented to us makes sense.

Presumably you're also reading some kind of learning text about the Chinese language, so the sole source isn't just the LLM?

In my experience, asking an LLM to produce small examples of well-known things (or rather, things that are talked about frequently in the training data, so generally basic or fundamental topics) tends to work fine, and the output is at a level where you yourself can judge what's presented.

I think the real danger is when a person is prompting for things they don't know how to verify for themselves, since then we're basically just rolling dice and hoping.


> I'm not suggesting that Iran shouldn't have a military, but instead questioning the purposes for which it would have one.

Well, they're currently being attacked. "Defending against attackers" is a pretty important purpose for a military.




> Notice how it's just Iran that's being attacked

https://en.wikipedia.org/wiki/2026_Lebanon_war


Yes, Hezbollah is an Iranian proxy that has, in violation of UN resolutions and against the Lebanese government's wishes, seized and held territory in Lebanon from which to launch rockets into Israel.

If you're going to use that loose a category, then the list of countries that have been attacked expands quite a bit. Israel has attacked Iran, while Iran has attacked Israel, Turkey, Azerbaijan, Iraq, Oman, Kuwait, Saudi Arabia, the UAE, the USA, and maybe one or two others that I'm not thinking of.


Iran hasn't attacked the USA or Israel. The USA and Israel are the invaders that attacked Iran.

Do we now start listing American proxies and their terrorism? The Contras alone should make the USA deserving of several nukes dropped on its lands by that measure.

Like, this very second?

It’s been ones of months since the USA attacked Venezuela. We are openly musing about invading Greenland. We are actively embargoing Cuba and threatening to invade it. We are the unhinged aggressor in all of this.


There is no civilization on the planet that would accept full disarmament under the logic that they should just trust that you won’t attack them if they weren’t armed.

Let's be fair: if someone bombed Trump right now, most of the world would be happy, including a lot of Americans.

Does that mean someone should bomb the US because of your regime? I mean... you have more homeless people living in tents than most cities have after a natural disaster, your people can't afford education, healthcare, or (as above) homes, and you're spending money to bomb a place half a planet away that is in no way endangering you... and that after you've bombed it once before and "completely destroyed the nuclear program"... and before that, and before that.

I mean... I understand Americans are, well... Americans, but you can't even imprison the pedophiles running your country, so why should you decide who to bomb?

I mean... what's next? Iranian special forces eventually start destroying stuff in the US, and you claim "terrorism" or something again... well, it's not terrorism if you're at war.


Not being punished for something doesn't mean it isn't a crime, and doesn't mean it isn't wrong.

Children have a more developed sense of ethics than that.


> Majority of AI text, music, images, videos and code is indistinguishable and you use it every day.

I really don't think this is true. If it were, we'd be able to point to countless examples of things assumed not to be AI that actually were, but there's a dearth of such examples.


Examples _are_ countless. Look around you - it's simply indistinguishable. Video isn't quite there yet; the biggest telltale is how short the clips are, but we are very, very close.

> Examples are countless

Then you should be able to name at least one.


It's worth emphasizing here that you haven't changed it to iambic pentameter with an easy button. Your A lines are in pentameter, but the B lines butcher it.

> There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/ themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.

You already have this. Control over your writing is the default position.


Not stupid, but I think it's fair to say "careless about/unaware of the wider impact of their work".

What do you mean by wider impact? Model collapse would be the opposite of a wider impact: it's an immediate one, and I'm fairly sure the people training these models have strong incentives to avoid it.

E.g. by filtering data, by procuring better data, or by applying techniques for making do with more limited data (we used to have a lot of those, and they are still known); you can also adapt your training process to be less vulnerable to model collapse. Just because some researchers have shown this happened for the models they tested doesn't mean it has to be a universal thing.
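To make the first mitigation concrete, here's a minimal sketch of filtering suspected model-generated text out of a corpus before the next training run. The `looks_generated` detector below is a hypothetical toy stand-in (real pipelines would use a trained classifier); the uniform-sentence-length heuristic is just a placeholder signal, not a real detection method.

```python
def looks_generated(text: str) -> float:
    """Toy heuristic: return a fake 'synthetic-ness' score in [0, 1].

    A real system would use a trained classifier; this placeholder just
    flags suspiciously uniform sentence lengths.
    """
    sentences = [s for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return 1.0 if variance < 4 else 0.0


def filter_corpus(docs, threshold=0.5):
    """Keep only documents the detector considers likely human-written."""
    return [d for d in docs if looks_generated(d) < threshold]
```

The point isn't that this heuristic works; it's that the training pipeline gets a filtering stage at all, so model output doesn't feed straight back into the next model's data.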


You could, you'd just license them at creation time for X years. It would stop large corporations hoarding everything.

A licensing agreement is not work for hire. Work for hire means the person doing the hiring owns the copyright, not the person who did the work.

That's how it works now, but we're talking about changing it. That's the context of the conversation.
