
Not a lawyer, but Article 12a of the German constitution speaks of men above 18, not of citizens or even residents of Germany.

https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.h...

So that article can in theory be used to conscript any man, citizen or not, living in Germany or not.

The Wehrpflichtgesetz, an ordinary law that needs only a simple Bundestag majority to change, narrows this very wide constitutional power in its article 1 to men above 18 who hold German citizenship.

https://www.gesetze-im-internet.de/wehrpflg/BJNR006510956.ht...

Article 3 narrows it even further, to men below 45 or 60, depending on the severity of the situation.

But yes, in theory it could be changed to include any man without German citizenship, people aged 80, men who have lived in Germany for a while or who have never been there at all, or just random men who happen to change flights at FRA.


This one might last longer. The AI race is on, and the US is trying its best to make participating as expensive as possible for China. Every dollar China spends on GPUs bought at a markup is one not spent on building navy ships.

If there is an escalation over Taiwan, it will cause the loss of most of the world's high-grade chip manufacturing capacity. TSMC is busy transferring technology to the US, but that is going to take time, those fabs won't have capacity for the whole world, and they still depend heavily on Taiwan-based engineers when something goes wrong.

Just like with COVID, you don't know how long this shortage will last.


Hypothetically, what would happen if China took over Taiwan and TSMC?

It would be incredibly hard for China to conquer Taiwan. More than a hundred kilometers of open water across the strait is a brutal geographic hurdle. If anything, the fabs would probably be severely damaged in the war. Plus, most senior execs and elite engineers would be moved to US offices in Arizona.

I'm not a military expert, but I'll bet my left nut that if push comes to shove, Taiwan will go scorched earth and just blow up the chip factories.

Which would set the whole world back a decade while we rebuild the factories from scratch.


All modern technology becomes unobtainable.

We are going to get that in a couple of months regardless. So it won't matter if Taiwan's manufacturing base gets disrupted; the hardware supply will have effectively stopped already.

Wow, I wasn't aware Samsung, Intel, and SMIC were unable to produce "modern technology." Not everything needs to be on a 3nm TSMC process, believe it or not.

TSMC makes a lot of stuff besides the EUV-scale parts that all the YouTube videos talk about.

Almost everything you own that runs on electricity has some parts from Taiwan in it. TSMC alone makes MEMS components, CMOS image sensors, NVRAM, and mixed-signal/RF/analog parts, to name a few.

Also, people seem to assume that TSMC is an autonomous entity that receives sand at one loading dock and ships wafers out at another. That's not how fabs work. Their processes depend on a continuous supply of exotic materials and proprietary maintenance support from other countries, many of them US-aligned. There is no need to booby-trap any equipment at TSMC; it will grind to an unrecoverable halt soon after the first Chinese soldier fires a rifle or launches a missile.

Hopefully Xi understands that. But some say it's a personal beef/legacy thing with him, and that he doesn't even care about TSMC.


There would be an outage, brief or possibly extended depending on how much damage the fabs took, and then it'd be back to business as usual.

Russia wasn't able to take Ukraine even when they could just drive their tanks right up to Kyiv. Modern warfare tech simply favors the defender too much. China has more than a hundred kilometers of sea to cross before they even get to Taiwan. Missiles and drones have already taken out much of the Russian naval fleet in the Black Sea. China will lose a lot in the same way if they ever attempt the crossing.

It is public knowledge that the critical equipment has "kill switches".

I wouldn't be surprised if there was enough damage that building a new fab from scratch would be easier.


To the people who downvoted my comment: Are you doing that because you know it's not correct or because you really hope and wish it wasn't correct?

It's a loss leader, but this is normal. The same happened with Uber, Airbnb, Amazon, etc.: using VC money to buy market share, and once you have it, you can milk it.

The question is more about the moats these companies have, and it seems to me that while their models are amazing technology, they don't really have a moat. The open/Chinese models continuously catch up to the American ones.


And what possible moat? It isn't hard to foresee that in just a couple of years, models outpacing the latest frontier tech we have today will run on consumer hardware. With open-source workflows that anyone can pull down and run, providers won't see a penny.

Another scenario is that dense models get replaced entirely, in which case the likelihood of OpenAI and co pioneering the new concept is pretty slim. They will be left with billions' worth of infrastructure that cost them 10 times that 2 years earlier, faced with the reality touched on by the article: liquidate.


If I look around in the FLOSS communities, I see a lot of skepticism towards LLMs. The main concerns are:

1. they were trained on FLOSS repositories without consent of the authors, including GPL and AGPL repos

2. the best models are proprietary

3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).

I agree those are legitimate problems, but LLMs are the new reality, they are not going to go away. Much more powerful lobbies than the OSS ones are losing fights against the LLM companies (the big copyright holders in media).

But while companies can use LLMs to build replacements for GPL-licensed code (where those LLMs probably have that GPL code in their training set), the reverse can also be done: one can break monopolies open using LLMs and build a great deal of open source software with them.

In the end, the GPL is only a means to an end.


> LLMs are the new reality, they are not going to go away

That's the conventional wisdom, but it isn't a given. A lot of financial wizardry is taking place to prop up the best of these things, and even their most ardent proponents are starting to recognize their futility once a certain complexity level is reached. The open-weight models are the stalking horse that gives this proposition the most legs, but it's not a given that Anthropic and OpenAI will exist as anything more than shells of their current selves in 5 years.


But LLMs themselves are literally not going away, I think that's the point. Once a model is trained and let out into the open for free download, it's there and can be used by anyone, and it's only going to get cheaper and easier.

Yeah, Kimi for example is good enough: if there were some kind of LLM fire and all the closed-source models suddenly burnt down and could never be remade, Kimi 2.5 would already be good enough forever.

"Good enough" is probably an understatement; it's amazing compared to last year's models.


> 3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).

Meanwhile, people sleep on LLMs that could help them audit their code for security holes, or on any code security auditing tools at all. Script kiddies don't care whether you think AI is ready; they'll use AI models to scrape your website for security gaps. They'll use LLMs to figure out how to hack your employees and steal your data. We already saw hackers break into Mexican government servers, basically scraping every document of every Mexican citizen. Now is the time to start investing in security auditing, before you become the next news headline.

AI isn't the future, it's already here, and hackers will use it against you.


This reads like a "You wouldn't download a car!" ad but it's trying to scare you into using AI instead.

More like: you're still using horses to move your product, while thieves and your competitors use trucks to outpace you. A truck can cut off your horse carriage, and then they can rob you easily and take all your cargo. Yes, you can still get your cargo from point A to point B, but you're going to be targeted by bad actors in vehicles.

> one can break monopolies open using LLMs

Let me know when you succeed.

> the GPL is only a means to an end

And how is this end closer with LLMs?


> And how is this end closer with LLMs?

The blog post this thread is about argues that even average users now have the ability to modify GPL'd code, thanks to LLMs. The bigger advantage, though, is that one can use them to break open software monopolies in the first place.

A lot of such monopolies are based on proprietary formats.

If LLM swarms can build a browser (not from scratch) and a C compiler (from scratch), they can also build an LLVM backend for a bespoke architecture that only has a proprietary C compiler. They can also build replacements for Adobe software and PDF editors, debug and fix Linux driver issues, etc.


Not interested in what they can build. Show me the fruits, not an image of fruits.

LLMs can be used, for example, for faster reverse engineering, turning proprietary content into free content.

I am not asking what they can be used for. Tell me what they are actually being used for.

The end game is a resource-based economy, as all sorts of labor become cheap.

Think of Saudi Arabia, Iran, Putin's Russia, or Norway. I.e., a risk of highly nepotistic dictatorships, with the potential that it might end up well despite the odds (Norway).

Before, if you made a product that improved the lives of everyone, say you invented Google or Heinz ketchup, you could make a lot of money through it, and you did a good deed and became rich at the same time. The masses of humans would reward you for delivering the benefits of your invention to them by giving you a piece of their work output.

As their work becomes worth less and less, why focus on those humans, though? I am asking rhetorically, of course.

An economy that thrives on innovation enriches the innovators, making them powerful. A brute in power causes the innovators to leave, or in the worst case mass-executes them outright (think of what Stalin did in Russia). With AI, you can have a brute in power anyway, as an oil rig or a datacenter can be protected by a bunch of machine guns.

An economy with AI everywhere will, after a short and very innovative period, just be about who controls which resource: water for a datacenter, production lines for robots, mining rights, operational control of robot fleets, etc.

The working 95% will probably experience a sharp decrease in purchasing power, making a lot of products unaffordable to them, so consumption-wise we'll see a further shift towards plutonomics. The owning top 10% will probably be affected by this major shift in consumption as well; e.g., a tower full of condos becomes worthless if the tenants can't pay rent because they got laid off.

The need for robots and AI will increase further. Eventually most economic activity will revolve around those robots. It's a bit like the paperclip optimizer here: whether those robots protect gay luxury space communism from counterrevolutionaries or project the will of a Davos council of the Forbes 400, economically it will be quite similar.

There will still be human societies, and humans will still talk to other humans; I doubt we will all be exclusively conversing with LLMs. There will still be social mobility, but it will revolve around nepotism, lying, and the various escalation steps of war.

We might end up in different scenarios depending on the country, but some countries like Germany might lose relevance, as most of their value lies in things that are going to be replaced by AI; i.e., they have few natural resources, or those have been depleted already.

We might also see companies that automate everything end to end, from mining to producing and running weaponized robot fleets. Shareholders of those companies will do great too, if the companies' leadership respects minority shareholder rights, that is (why should they, though, when they will outgun any law enforcement?).

Do I like this future? I don't think so. We will probably have solved cancer, communicable diseases, and aging in the next 30 years if AI continues on its successful trajectory, but I'm not sure it will be accessible to 8 billion humans.


You have a lot of control over LLM quality. There are different models available, and even the different effort settings of those models produce different outcomes.

E.g., look at the "SWE-Bench Pro (public)" heading on this page: https://openai.com/index/introducing-gpt-5-4/ , which shows reasoning efforts from none to high.

Of course, they don't learn like humans, so you can't do the trick of hiring someone less senior but with great potential and then mentoring them. Instead, it's more of an up-front price you have to pay. The top models at the highest settings obviously form a ceiling, though.


You also have control over the workflow they follow and the standards you expect them to stick to, through multiple layers of context. Expecting a model to understand your workflow and standards without doing the effort of writing them down is like expecting a new hire to know them without any onboarding. Letting bad AI code into your production pipeline is a skill issue.


Imagine you opened a job posting and had all applicants complete SWE-bench.

Ignoring the useless/unqualified candidates and models, human applicants have a much wider range of talent for you to choose from than the top models + tooling.

The frontier models + tooling are, in the grand scheme of things, basically equivalent at any given moment.

Humans can be just as bad as the worst models, but models are nowhere near as good as the best humans.


AI etiquette is a great term. AI is useful in general, but some patterns of AI usage are annoying. Especially when the other side spent 10 seconds on something and expects you to treat it seriously.

Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules for how to use AI.


I'm hearing nightmare stories from my friends in retail and healthcare where someone walks in holding a phone and asks you to talk to them through the chatbot on their phone. A friend had a person walk in last week and ask him to explain what he does to Grok, and then ask Grok what questions they should ask him.


What shocks me is the complete lack of self-awareness of the person holding the phone. People have been incapable of independent thought for a while, but to hold up a flag and announce it to your surroundings is really something else.


I think the older AI users are even held back, because they might be doing things that are not necessary any more: explaining basic things like "please don't bring in random dependencies, prefer the ones that are there already", or the classic "think really hard and make a plan", or trying to use a prestigious register of the language in an attempt to make it think harder.

Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do and where it originated, and looks into the causes. Oftentimes I come back to the chat and see a working explanation together with a fix.

Before, I had to actually explain why I wanted it to change some implementation in some direction; otherwise it would refuse: "no, I won't do that because abc". Nowadays I can just give the raw instruction, "please move this into its own function", etc., and it follows.

So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one constantly needs to revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still in precisely the same place or further out.


I've no idea what you're using, but "I just paste" isn't giving me good results at all.

An hour ago Gemini decided it needed to scan my entire home folder to find the test file I asked it to look into. Sonnet will definitely try to install new dependencies, even though I’m doing SDD and have a clear AGENTS.md.

I’m always baffled at people’s magic results with LLMs. I’m impressed by the new tools, but lots of comments here would suggest my Gemini/Sonnet/Opus are much worse than yours.


> I think the older AI users are even held back, because they might be doing things that are not necessary any more

Being the same age as Linus Torvalds, I'd say it can be the opposite.

We are so used to "leaky abstractions" that we have just accepted this as another imperfect new tech stack.

Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.

What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?

Once you learn enough about how the underlying layers work, you'll get far fewer errors because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.

For LLMs, it's certainly a challenge.

The basic low-level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.
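
To make that concrete, here is a toy sketch of the central computation in Python/numpy (a minimal sketch of single-head causal self-attention; all names and shapes are illustrative, not any particular model's). A real engine is mostly this, repeated per layer, plus embeddings, an MLP block, and a sampling loop:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def causal_attention(x, wq, wk, wv):
        # x: (seq_len, d_model); wq/wk/wv: (d_model, d_head) weight matrices
        q, k, v = x @ wq, x @ wk, x @ wv
        scores = q @ k.T / np.sqrt(k.shape[-1])
        # causal mask: a token may only attend to itself and earlier tokens
        scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
        return softmax(scores) @ v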

But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.

So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.

But you still form a mental image of what the rough layers are, and what you can expect in their behavior given different inputs.

For LLMs, a critical piece is the context window. It has to be understood and managed to get good results: feed it the right amount of the right data, and the output gets much better.
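
As an illustration, here is a naive sketch of one management strategy in Python (fit_context and count_tokens are hypothetical names, and real agent frameworks do something far more sophisticated): always keep the system prompt, then admit messages newest-first until the token budget is spent.

    def fit_context(system_prompt, messages, budget, count_tokens):
        # Always keep the system prompt, then the newest messages that fit.
        used = count_tokens(system_prompt)
        kept = []
        for msg in reversed(messages):  # walk from newest to oldest
            cost = count_tokens(msg)
            if used + cost > budget:
                break
            kept.append(msg)
            used += cost
        return [system_prompt] + kept[::-1]  # restore chronological order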

> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do

That's exactly the right thing to do given the right circumstances.

But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.


I think GP meant 'longer time users of AI', not 'older aged users of AI'.

Their point being that it's not really an advantage to have learnt the tricks and ways of dealing with it a year or two ago, when it's so much better now and those tricks are either no longer necessary or have changed.


Yeah, I meant it in the context of the comment I was replying to; to be precise, in the context of the comment that one was replying to, i.e. "10 years of certified Claude Code experience required".

The technology is moving so fast that the tricks you learned a year ago might not be relevant any more.


Thanks, I agree 100% with that.


I still see people doing the "you are a world-class distributed systems engineer" thing. Never fails to make me chuckle.


I've been on this site for 8 years, and I have 8 favorite comments. This comment just made it into a very exclusive club.


Have you tried the latest models at best settings?

I've been writing software for 20 years, Rust for 10. I don't consider myself a median coder, but quite above average.

For the last 2 years or so, I've been trying out AI models every couple of months, and they have been consistently disappointing. Sure, with edits and many prompts I could get something useful out of them, but it often took the same amount of time or more than manual coding would have.

So yes, while I love technology, I had been an LLM skeptic for a long time, and for good reason: the models just hadn't been good. While many of my colleagues used AI, I didn't see the appeal. It took more time, and I still had to think just as much, while it made so many mistakes everywhere that I had to constantly ask it to correct things.

Then, 5 months or so ago, this changed: the models actually figured it out. The February releases of the models sealed it for me.

The models still make mistakes, but their number and severity are lower, and the output fits the specific coding patterns in that file or area. It won't import a random library, but uses the one that is already imported. If I ask it not to do something, it complies (earlier iterations just ignored me, which was frustrating).

At least for the software development areas I'm touching (writing databases in Rust), LLMs have turned into a genuinely useful tool, where I am now able to use the fundamental advantages the technology offers, i.e. writing 500 lines of code in 10 minutes, reducing something that would have taken me two to three days before to half a day (as of course I still need to review it and fix the mistakes/wrong choices the tool made).

Of course this doesn't mean that I am now 6x faster at all coding tasks, because sometimes I need to figure out the best design and such first.

I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings, and not about the tab autocompletion or the quick-edit features of the IDEs, but about the agentic feature where the IDE can actually spend some effort thinking about what I, the user, meant with my less specific prompt.


> I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings

So you have to burn tokens at the highest available settings to even have a chance of ending up with code that's not completely terrible (and then only in very specific domains), but of course you then have to review it all and fix all the mistakes it made. So where's the gain, exactly? The proper goal is for those 500 lines to be almost always truly comparable to what a human would've written, and not turn into an unmaintainable mess. And AIs aren't there yet.


You really do need to try the latest ones. You can’t extrapolate from your previous experiences.


I do not think they are impartial; all I can see is lots of angst.


I feel like we're talking about different things. You seem to be describing a mode of working that produces output that's good enough to warrant the token cost. That's fine, and I have use cases where I do the same. My gripe was with the parent poster's quote:

> Claude and GPT regularly write programs that are way better than what I would’ve written

What you're describing doesn't sound "way better" than what you would have written by hand, except possibly in terms of the speed at which it was written.


Yeah, it writing stuff that's way better than mine is not the case for me, at least in areas I'm familiar with. In areas I'm not familiar with, it's way better than what I could have produced.

