The people in these industries are collectively responsible for millions of preventable deaths, and they, their families, and generations of their offspring are and will be living the best lives money can buy.
And yet one person kills a CEO, and they're a terrorist.
Large and complex systems are fundamentally unpredictable and have tradeoffs and consequences that can't be foreseen by anybody. Error rates are never zero. So basically anything large enough is going to kill people in one way or another. There are intelligent ways to deal with this, and then there is shooting the CEO, which will change nothing because the next CEO faces the exact same set of choices and incentives as the last one.
Well, given what you said, one obvious mechanism is to cap the sizes of these organizations so that any errors are less impactful. Break up every single company into little pieces.
That doesn't really help, because the complexity isn't just internal to the companies; it also exists in the network between the entities that make up the industry. It may well even make it worse, because it is much harder to coordinate. E.g. if I run into a bug caused by another team at work, it's massively easier to get that fixed than if the bug is in vendor software.
In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.
We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it, even though the latter scenario is more reliable than the former.
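To put rough numbers on that (an illustrative sketch only; the 1%-daily and 100%-yearly rates are the hypothetical figures above, not real uptime data):

```python
# Expected outage events per user per year under the two hypothetical scenarios.
DAYS_PER_YEAR = 365

# Decentralized: on any given day, 1% of users hit some outage somewhere.
decentralized = 0.01 * DAYS_PER_YEAR   # ~3.65 outages per user per year

# Centralized: one big outage per year that hits 100% of users.
centralized = 1.0 * 1                  # 1 outage per user per year

print(decentralized, centralized)      # 3.65 vs 1 -> the "quiet" failures add up to more
```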
I'm not talking about unpredictable tradeoffs and consequences.
I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.
Well, those get handled. Purdue was sued into bankruptcy, and the Tobacco Institute was shut down when the industry was forced to settle for $200 billion in damages.
But do they need it? How do you know? And don't say because the doctor said so, because doctors disagree all the time. When my grandfather was dying in his late 80s, the doctor said there was nothing he could do. So his children took him to another doctor, who said the same. And then another doctor, who agreed with the first two. But then they took him to a 4th doctor, who agreed to do open heart surgery, which didn't work, and if anything hastened his inevitable death due to the massive stress. The surgery cost something like 70 grand and they eventually got the insurance company to pay for it. But the insurance company should not have paid for it because it was a completely unnecessary waste of money. And of course there will be mistakes in the other direction because this just isn't an exact science.
Why is it on me to come up with a new model for healthcare? I can acknowledge shortcomings of the present system without having to come up with solutions for them.
> Pretty predictable what happens when you deny coverage for a treatment someone needs
Other poster demonstrated that you have no idea what "need" is. So you also have no idea what a "shortcoming of the present system" is either, because how the hell would you even know?
Why does it matter if it personally occurred to him or someone related to him? It happens to plenty of people. You can have empathy for people not bound by blood.
There is a great deal of injustice in the world. Psychologically healthy adults have learned to add a reflection step between anger and action.
By all evidence, Luigi is a smart guy. So one can only speculate on his psychological health, or whether he believed that there was an effective response to the problem which included murdering an abstract impersonal enemy.
I'm stumped, honestly. The simplest explanations are mental illness, or a hero complex (but I repeat myself). Maybe we'll learn someday.
He could die quietly making no impact on the issue. Or he could sacrifice the rest of his free life to put a spotlight on the issue. That is what he chose to do. Not an easy decision I'm sure.
You say “a CEO” like it’s just a fungible human unit. In reality, a CEO is much much more valuable than a median human. Think of how many shareholders are impacted, many little old grey haired grannies, dependent on their investments for food, shelter and medical expenses. When you think of the fuller context, surely you see how sociopathic it is to shrug at the killing of a CEO, let alone a CEO of a major corporation. Or maybe sociopathy is the norm these days, for the heavily online guys.
A message is certainly sent in the process that previously was going unheard.
"Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”
Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals."
IMO the ThinkPad brand died after IBM sold its PC division. Lenovo has kept it alive because it is a cash cow, but the machines have shoddy build quality and are ludicrously overpriced.
My previous X1 Extreme Gen 1 (2018) had annoying coil whine and screen backlight bleed. One of the key caps broke off after a couple of years. Eventually I ended up doing a full keyboard and battery replacement.
My current X1 Carbon Gen 13 is nice and light, has no coil whine, but it's still made from cheap plastic. Considering it's a $2k+ machine, it sure doesn't feel like it.
In comparison, ThinkPads from the IBM era were built like tanks. Still plastic, sure, but they felt solid, and were reliable workhorses for years.
At this point the only thing keeping me on ThinkPads is the TrackPoint, but since trackpads are decent on Linux nowadays, I think I'm ready to finally ditch the brand. Some of the new Dell and HP machines look interesting. Framework laptops seem nice, but I've read about plenty of build quality issues, and they're not cheap either.
ThinkPad T and X series have always been the best Linux laptops I've ever had. All of the hardware has always Just Worked, they have a nice selection of BIOS-level security features, and the build quality has always been just fine. Cases / keyboards / TrackPoint / touchpads never failed after 6-8 years of owning each.
I'll add that my P14s (Gen 3) keyboard sticks up in one place. It's not the biggest problem, but it's an odd fit issue I never saw on my older ones. Typing on it isn't as great as on my other ThinkPads or an external keyboard, which at least have longer travel. (The old layout was OK, the key feel was nice, and the T430s felt like the best laptop keyboard I had used.)
I've had the same experience. My IBM ThinkPads had great keyboards. Lenovo modernized them, but introduced a bunch of quality issues.
Oh, and I forgot to mention the most annoying issue of all: the TrackPoint still drifts on every modern ThinkPad! This was a big issue on IBM ThinkPads, and Lenovo hasn't bothered to fix it in all this time. I've had it on my 2018 X1 Extreme, and now on this 2025 X1 Carbon. It's infuriating that such a high-profile feature has had such a glaring issue for such a long time. It just shows Lenovo's lack of care and attention for this brand.
I got mine just for the TrackPoint too :D I just use it at my desk, though. But I am waiting for the new Shinobi in the mail. There are very few other keyboards on the market with this feature. There is the HHKB Studio, but it's much more expensive and not full-size. But I didn't look far either. I am not obsessed with mechanical keyboards, so the only decision for me was which switch to use (Cherry MX Silent Red, they are fine enough :) ). I just hope for no more hacking around to get a stable keyboard. It's been a nuisance.
There's no question that "AI" is the next advertising frontier. I've been saying this for years[1][2][3]. It is going to be the most lucrative form of it yet, and no "AI" company will be able to resist it. Given the exorbitant amount of resources required for this technology, advertising will probably be the only viable business model that can sustain it at scale.
The exorbitant costs could actually save us from ads. Instead, everyone will have to pay subscriptions, like for mobile phone plans.
Haven't heard of the ad-based Rolls-Royce yet.
> “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
>Wait, so we can infer the future from “trendlines”, but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias…
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra complex, domain specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
Respectfully, you seem to love the sound of your writing so much you forget what you are arguing about. The topic (at least for the rest of the people in this thread) seems to be whether AI assistance can truly eliminate programmers.
There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.
I'm not an AI skeptic by any means, and use it everyday at my job where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.
You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has been in tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), while the tools that have positioned themselves to replace them have failed (Devin).
There is no respectful way of telling someone they like the sound of their own voice. Let’s be real, you were objectively and deliberately disrespectful. Own it if you are going to break the rules of conduct. I hate this sneaky shit. Also I’m not off topic, you’re just missing the point.
I responded to another person in this thread and it’s the same response I would throw at you. You can read that as well.
Your “historical trend” is just applying an analogy and thinking that an analogy can take the place of reasoning. There are about a thousand examples of careers where automation technology increased the need for human operators and thousands of examples where automation eliminated human operators. Take pilots, for example. Automation didn’t lower the need for pilots. Take IntelliSense and autocomplete… that didn’t lower the demand for programmers.
But then take a look at Waymo. You have to be next-level stupid to think: OK, cruise control in cars raised automation but didn’t lower the demand for drivers… therefore all car-related businesses, including Waymo, will always need physical drivers.
As anyone is aware… this idea of using analogy as reasoning fails here. Waymo needs zero physical drivers thanks to automation. There is zero demand here and your methodology of reasoning fails.
Analogies are a form of manipulation. They only help you elucidate and understand things via some thread of connection. You understand A, therefore understanding A can help you understand B. But you can’t use analogies as the basis for forecasting or reasoning, because although A can be similar to B, A is not in actuality B.
For AI coders it’s the same thing. You just need to use your common sense rather than rely on some inaccurate crutch of analogies and hoping everything will play out in the same way.
If AI becomes as good and as intelligent as a human SWE, then your job is going out the fucking window, replaced by a single prompter. That’s common sense.
Look at the actual trendline of the actual topic: AI taking over our jobs and not automation in other sectors of engineering or other types of automation in software. What happened with AI in the last decade? We went from zero to movies, music and coding.
What does your common sense tell you the next decade will bring?
If the improvement of AI from the last decade keeps going or keeps accelerating, the conclusion is obvious.
Sometimes the delusion a lot of SWEs have is jarring. Like, literally, if AGI existed, thousands of jobs would be displaced. That’s common sense, but you still see tons of people clinging to some irrelevant analogy as if that exact analogy will play out against common sense.
How ironic of you to call my argument an analogy while it isn't an analogy, yet all you have to offer is exactly that - analogies. Analogies to pilots, drivers, "a thousand examples of careers".
My argument isn't an analogy - it's an observation based on the trajectory of SWE employment specifically. It's you who's trying to reason about what's going to happen with software based on what happened to three-field crop rotation or whatever, not me.
I argued that a developer today is 1000x more effective than in the days of punch cards, yet we have 1000x more developers today. Not only that, this correlation tracked fairly linearly throughout the last many decades.
I would also argue that the productivity improvement between FORTRAN and C, or between C and Python was much, much more impactful than going from JavaScript to JavaScript with ChatGPT.
Software jobs will be redefined, they will require different skill sets, they may even be called something else - but they will still be there.
>How ironic of you to call my argument an analogy while it isn't an analogy, yet all you have to offer is exactly that
Bro, I offered you analogies to show you how it's IRRELEVANT. The point was to show you how it's an ineffective form of reasoning by demonstrating its ineffectiveness FOR YOUR conclusion, because using this reasoning can allow you to conclude the OPPOSITE. Assuming this type of reasoning is effective means BOTH what I say is true and what you say is true, which leads to a logical contradiction.
There is no irony, only misunderstanding from you.
>I argued that a developer today is 1000x more effective than in the days of punch cards, yet we have 1000x more developers today. Not only that, this correlation tracked fairly linearly throughout the last many decades.
See here, you're using an analogy and claiming it's effective. To which I would typically offer you another analogy that shows the opposite effect, but I feel it would only confuse you further.
>Software jobs will be redefined, they will require different skill sets, they may even be called something else - but they will still be there.
Again, you believe this because of analogies. I recommend you take a stab at my way of reasoning. Try to arrive at your own conclusion without using analogies.
Look at the past decade. Zero AI to AI that codes and makes movies, in an inferior way when matched against humans.
What does common sense tell you the next decade will bring? Does the trendline predict a flatline, that LLMs or AI in general won’t improve? Or will the trendline continue, as trendlines typically do? What is the most logical conclusion?
> You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
> When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?
(Honestly, your comments read suspiciously like they were LLM-generated, as others have mentioned. It's like you're jumping on specific keywords and producing the most probable tokens without any thought about what you're saying. I'll give you the benefit of the doubt for one more reply, though.)
To be fair, I think this new technology is fundamentally different from all previous attempts at abstracting software development. And I agree with you that past failures are not necessarily indicative that this one will fail as well. But it would be foolish to conclude anything about the value of this technology from the current state of the industry, when it should be obvious to anyone that we're in a bull market fueled by hype and speculation.
What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, actual value of the technology became evident, and it turns out that none of the promises of social media became true. Quite the opposite, in fact. That early optimism vanished along the way.
The same thing happened with skepticism about the internet being a fad, that e-commerce would never work, and so on. Both groups were wrong.
> What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation. At that point, the disagreement stops being about evidence and starts looking like bias.
Skepticism and belief are not binary states, but a spectrum. At extreme ends there are people who dismiss the technology altogether, and there are people who claim that the technology will cure diseases, end poverty, and bring world prosperity[1].
I think neither of these viewpoints is worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.
>I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?
Let me help you untangle the confusion. Historical data on other phenomena is not a trendline for AI taking over your job. It's a typical logical mistake people make: reasoning via analogy. Because this trend happened for A, and A fits B like an analogy, therefore what happened to A must happen to B.
Why is that stupid logic? Because there are thousands of things that fit B as an analogy. And out of those thousands of things that fit, some failed and some succeeded. What you're doing, and not realizing, is that you are SELECTIVELY picking the analogy you like to use as evidence.
When I speak of a trendline, it's dead simple. Literally look at AI as it is now and as it was in the past, and use that to project into the future. Look at exact data on the very thing you are measuring, rather than trying to graft some analogous thing onto the current thing and make a claim from that.
>What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, actual value of the technology became evident, and it turns out that none of the promises of social media became true. Quite the opposite, in fact. That early optimism vanished along the way.
Again, same thing. The early days of the internet are not what's happening with AI currently. You need to look at what happened to AI and software from the beginning to now. Observe the trendline of the topic being examined.
>I think neither of these viewpoints are worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.
Well, if you look at the pace and progress of AI, the quantitative evidence points against your middle-ground opinion here. It's fashionable to take the middle ground because moderates and grey areas seem more level-headed and reasonable than extremism. But this isn't really applicable to reality, is it? Extreme events that overload systems happen in nature all the time; taking the middle ground without evidence pointing to the middle ground is pure stupidity.
So all you need to look at is this: the progress we've made in the past decade. A decade ago, AI via ML was non-existent. Now AI generates movies, music and code, and unlike AI in music and movies, the code is actually being used by engineers.
That's ZERO to coding in a decade. What do you think the next decade will bring? Coding to what? That is reality and the most logical analysis. Sure, it's OK to be a skeptic, but to ignore the trendline is ignorance.
> You need to know what competitors shipped, what research dropped, what patterns are emerging across 50+ sources continuously. Generic ChatGPT can't do that.
You're saying that a pattern recognition tool that can access the web can't do all of this better than a human? This is quintessentially what they're good at.
> The real question is how do you build personal AI that learns YOUR priorities and filters the noise? That's where the leverage is now.
Sounds like another Markdown document—sorry, "skill"—to me.
It's interesting to see people praising this technology and enjoying this new "high-level" labor, without realizing that the goal of these companies is to replace all cognitive labor. I strongly doubt that they will actually succeed at that, and I don't even think they've managed to replace "low-level" labor, but pretending that some cognitive labor is safe in a world where they do succeed is wishful thinking.
Did you actually learn C? Be thankful nothing like this existed in 1997.
A machine generating code you don't understand is not the way to learn a programming language. It's a way to create software without programming.
These tools can be used as learning assistants, but the vast majority of people don't use them as such. This will lead to a collective degradation of knowledge and skills, and the proliferation of shoddily built software with more issues than anyone relying on these tools will know how to fix. At least people who can actually program will be in demand to fix this mess for years to come.
It would’ve been nice to have a system that I could just ask questions of, to teach me how it works, instead of having to pore through the few books on C that were actually accessible to a teenager learning on their own.
Going to arcane websites and forums full of neckbeards who expect you to already understand everything isn't exactly a great way to learn.
The early Internet was unbelievably hostile to people trying to learn genuinely
I had the books (from the library) but never managed to get a compiler for many years! Was quite confusing trying to understand all the unix references when my only experience with a computer was the Atari ST.
I don't understand how OP thinks that being oblivious to how anything works underneath is a good thing. There is a threshold of abstraction below which you must know how things work to effectively fix them when they break.
You can be a super productive Python coder without any clue how assembly works. Vibe coding is just one more level of abstraction.
Just like how we still need assembly and C programmers for the most critical use cases, we'll still need Python and Golang programmers for things that need to be more efficient than what was vibe coded.
But do you really need your $whatever to be super efficient, or is it good enough if it just works?
Humans writing code are also nondeterministic. When you vibe code, you're basically a product owner / manager. Vibe coding isn't a higher-level programming language, it's an abstraction over a software engineer / engineering team.
That's not what determinism means though. A human coding something, irrespective of whether the code is right or wrong, is deterministic. We have a well defined cause and effect pathway. If I write bad code, I will have a bug - deterministic. If I write good code, my code compiles - still deterministic. If the coder is sick, he can't write code - deterministic again. You can determine the cause from the effect.
Every behavior in the physical World has a cause and effect chain.
On the other hand, you cannot determine why a LLM hallucinated. There is no way to retrace the path taken from input parameters to generated output. At least as of now. Maybe it will change in the future where we have tools that can retrace the path taken.
You misunderstand. A coder will write different code for the same problem each time unless they have the solution 100% memorised. And even then a huge number of factors can influence them not being able to remember 100% of the memorised code, or opt for different variations.
People are inherently nondeterministic.
The code they (and AI) write, once written, executes deterministically.
> A coder will write... or opt for different variations.
Agreed.
> People are inherently nondeterministic.
We are getting into the realm of philosophy here. I, for one, believe in the idea of living organisms having no free will (or limited will, to be more precise; one could even go so far as to say "dependent will"). So one can philosophically explain that people are deterministic, via concepts of Karma and rebirth. Of course, none of this can be proven. So your argument can be true too.
> The code they (and AI) writes, once written, executes deterministically.
Yes, execution is deterministic. I am, however, talking only about determinism in terms of being able to know the entire path from input to output. Not just the output's characteristics (which are always going to be deterministic). It is the path from input to output that is not deterministic, due to the presence of a black box: the model.
I mostly agree with you, but I see what afro88 is saying as well.
If you consider a human programmer as a "black box", in the sense that you feed it a set of inputs—the problem that needs to be solved, vague requirements, etc.—and expect a functioning program as output that solves the problem, then that process is similarly nondeterministic as an LLM. Ensuring that the process is reliable in both scenarios boils down to creating detailed specifications, removing ambiguity, and iterating on the product until the acceptance tests pass.
Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs. First of all, they have an understanding of human psychology, and can actually reason about semantics in ways that a pattern matching and token generation tool cannot. And in the best case scenario of experienced programmers, they have an intuitive grasp of the problem domain, and know how to resolve ambiguities in meatspace. LLMs at their current stage can at best approximate these capabilities by integrating with other systems and data sources, so their nondeterminism is a much bigger problem. We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.
Agree with most of what you say. The only reason I say humans are different from LLMs when it comes to being a "black box" is because you can probe humans. For instance, I can ask a human to explain how he/she came to the conclusion and retrace the path taken to come to said conclusion from known inputs. And this can also be correlated with say brainwave imaging by mapping thoughts to neurons being triggered in that portion of the brain. So you can have a fairly accurate understanding of the path taken. I cannot probe the LLM however. At least not with the tools we have today.
> Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs.
Yes, true. Another thought that comes to mind: I feel it might also have to do with us recognizing other humans as not as alien to us as LLMs are. So there is an inherent trust deficit when it comes to LLMs versus humans. Inherent trust in human beings, despite them being less capable, is what makes the difference. In everything else we inherently want proper determinism, and trust is built on that. I am more forgiving if a child computes 2 + 1 = 4, and will find it in me to correct the child. I won't consider it a defect. But if a calculator computes 2 + 1 = 4 even once, I would immediately discard it and never trust it again.
> We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.
Perhaps there is no need to actually understand assembly, but if you don't understand certain basic concepts actually deploying any software you wrote to production would be a lottery with some rather poor prizes. Regardless of how "productive" you were.
Somebody needs to understand, to the standard of "well enough".
The investors who paid for the CEO who hired your project manager to hire you to figure that out, didn't.
I think in this analogy, vibe coders are project managers, who may indeed still benefit from understanding computers, but when they don't the odds aren't anywhere near as poor as a lottery. Ignorance still blows up in people's faces. I'd say the analogy here with humans would be a stereotypical PHB who can't tell what support the dev needs to do their job and then puts them on a PIP the moment any unclear requirement blows up in anyone's face.
> There was a time when you had to know ‘as’, ‘ld’ and maybe even ‘ar’ to get an executable.
No, there wasn't: you could just run the shell script, or (a bit later) the makefile. But there were benefits to knowing as, ld and ar, and there still are today.
> But there were benefits to knowing as, ld and ar, and there still are today.
This is trivially true. The constraint for anything you do in your life is the time it takes to know something.
So the far more interesting question is: At what level do you want to solve problems – and is it likely that you need knowledge of as, ld and ar over anything else, that you could learn instead?
Knowledge of as, ld, ar, cc, etc is only needed when setting up (or modifying) your build toolchain, and in practice you can just copy-paste the build script from some other, similar project. Knowledge of these tools has never been needed.
Knowledge of cc has never been needed? What an optimist! You must never have had headers installed in a place where the compiler (or Makefile author) didn’t expect them. Same problems with the libraries. Worse when the routine you needed to link was in a different library (maybe an arch-specific optimized lib).
The library problems you described are nothing that can't be solved using symlinks. A bad solution? Sure, but it works, and doesn't require me to understand cc. (Though when I needed to solve this problem, it only took me about 15 minutes and a man page to learn how to do it. `gcc -v --help` is, however, unhelpful.)
"A similar project" as in: this isn't the first piece of software ever written, and many previous examples can be found on the computer you're currently using. Skim through them until you find one with a source file structure you like, then ruthlessly cannibalise its build script.
If you don't see a difference between a compiler and a probabilistic token generator, I don't know what to tell you.
And, yes, I'm aware that most compilers are not entirely deterministic either, but LLMs are inherently nondeterministic. And I'm also aware that you can tweak LLMs to be more deterministic, but in practice they're never deployed like that.
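(By "tweak" I just mean pinning the sampling knobs. A minimal sketch using the OpenAI Python SDK, where the model name and prompt are placeholder assumptions, and even this only reduces variance rather than eliminating it:)

```python
# Minimal sketch (not a recommendation): making an LLM call *more* reproducible.
# Model name and prompt are placeholders; temperature=0 plus a fixed seed reduce
# variance but do not guarantee identical output across runs or backend versions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    messages=[{"role": "user",
               "content": "Write a function that reverses a string."}],
    temperature=0,                             # greedy-ish decoding
    seed=42,                                   # best-effort reproducibility
)
print(response.choices[0].message.content)
```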
Besides, creating software via natural language is an entirely different exercise than using a structured language purposely built for that.
We're talking about two entirely different ways of creating software, and any comparison between them is completely absurd.
They can function kind-of-the-same in the sense that they can both change things written in a higher level language into a lower level language.
100% different in every other way, but for coding, in some circumstances, if we treat them as a black box, LLMs can turn higher-level pseudocode into lower-level code (inaccurately), or even transpile.
Kind of like how email and the postal service can be kind of the same if you look at it from a certain angle.
> Kind of like how email and the postal service can be kind of the same if you look at it from a certain angle.
But they're not the same at all, except somewhat by their end result, in that they are both ways of transmitting information. That similarity is so vague that comparing them doesn't make sense for any practical purpose. You might as well compare them to smoke signals at that point.
It's the same with LLMs and programming. They're both ways of producing software, but the process of doing that and even the end result is completely different. This entire argument that LLMs are just another level of abstraction is absurd. Low-Code/No-Code tools, traditional code generators, meta programming, etc., are another level of abstraction on top of programming. LLMs generate code via pattern matching and statistics. It couldn't be more different.
People downvoting your comment are just "engineers" doomed to fail sooner or later.
Meanwhile, 9front users have read at least the plan9 intro and know about nm, 1-9c, 1-9l and the like. Vibe coders will be put in their place sooner or later. It's just a matter of time.
That's not true. All of the examples you mentioned are possible without Big Tech. There are F/LOSS and community supported alternatives for all of them. Big Tech might've contributed to parts of the technology that make these alternatives possible, but that could've been done by anyone else, and they are certainly not required to keep the technology functional today.
Relying on Big Tech is a personal choice. None of these companies are essential to humanity.
> none of them is based on surveillance, ads or social media.
That's not true either. All Alphabet and Meta products are tied to and supported in some way by advertising. All of these companies were/are part of government surveillance programs.
So you're highly overestimating the value of Big Tech, and highly underestimating the negative effects they've had, have, and will continue to have on humanity.
>Big Tech might've contributed to parts of the technology that make these alternatives possible, but that could've been done by anyone else, and they are certainly not required to keep the technology functional today.
Not only that, but big tech proprietary products have depended and depend heavily on F/LOSS and community supported code.
I partly agree, but where this analogy breaks down, and what the neofeudal rulers are too shortsighted to realize, is that if lower classes don't have income to spend, there will be nobody to buy their products, and the entire economy collapses. Proposals to address this like UBI are a pipe dream.
The idea is that neofeudal lords would own fully automated systems to produce everything they need. They wouldn't need to make stuff to sell to us at all, they could just pursue their own goals. We would be irrelevant to them, other than a nuisance if we tried to get some of their resources for ourselves.
Imagine if you owned a million humanoid robots and a data center with your own super-intelligence. Would you produce doodads to sell to people? Or would you build your own rockets to mine asteroids, fortresses and weapons systems to protect yourself, and palaces for you to live in?
I don't agree that this is where we are headed, but that is the idea. Thinking about this in relation to our current economy is missing the point.
Aha, so late-game Factorio. It's a nice fantasy, but I don't think the rest of humanity will stand by and allow the entire system to function autonomously. It's more likely that heads will roll far before such a system is in place.