
Did you actually learn C? Be thankful nothing like this existed in 1997.

A machine generating code you don't understand is not the way to learn a programming language. It's a way to create software without programming.

These tools can be used as learning assistants, but the vast majority of people don't use them as such. This will lead to a collective degradation of knowledge and skills, and the proliferation of shoddily built software with more issues than anyone relying on these tools will know how to fix. At least people who can actually program will be in demand to fix this mess for years to come.



It would’ve been nice to have a system that I could just ask questions to teach me how it works, instead of having to pour through the few books that existed on C that were actually accessible to a teenager learning on their own

Going to arcane websites and forums full of neckbeards who expect you to already understand everything isn’t exactly a great way to learn

The early Internet was unbelievably hostile to people trying to learn genuinely


*pore through

(not a judgment, just mentioning in case the distinction is interesting to anyone)


I had the books (from the library) but never managed to get a compiler for many years! Was quite confusing trying to understand all the unix references when my only experience with a computer was the Atari ST.


I don't understand how OP thinks that being oblivious to how anything works underneath is a good thing. There is a threshold of abstraction below which you must know how things work to effectively fix them when they break.


You can be a super productive Python coder without any clue how assembly works. Vibe coding is just one more level of abstraction.

Just like how we still need assembly and C programmers for the most critical use cases, we'll still need Python and Golang programmers for things that need to be more efficient than what was vibe coded.

But do you really need your $whatever to be super efficient, or is it good enough if it just works?


One is deterministic, the other is not. I leave it to you to determine which is which in this scenario.


Humans writing code are also non deterministic. When you vibe code you're basically a product owner / manager. Vibe coding isn't a higher level programming language, it's an abstraction over a software engineer / engineering team.


> Humans writing code are also non deterministic

That's not what determinism means though. A human coding something, irrespective of whether the code is right or wrong, is deterministic. We have a well defined cause and effect pathway. If I write bad code, I will have a bug - deterministic. If I write good code, my code compiles - still deterministic. If the coder is sick, he can't write code - deterministic again. You can determine the cause from the effect.

Every behavior in the physical world has a cause and effect chain.

On the other hand, you cannot determine why an LLM hallucinated. There is no way to retrace the path taken from input parameters to generated output. At least as of now. Maybe it will change in the future when we have tools that can retrace the path taken.


You misunderstand. A coder will write different code for the same problem each time unless they have the solution 100% memorised. And even then a huge number of factors can influence them not being able to remember 100% of the memorised code, or opt for different variations.

People are inherently nondeterministic.

The code they (and AI) writes, once written, executes deterministically.


> The code they (and AI) writes, once written, executes deterministically.

very rarely :)


> A coder will write... or opt for different variations.

Agreed.

> People are inherently nondeterministic.

We are getting into the realm of philosophy here. I, for one, believe that living organisms have no free will (or limited will, to be more precise; one could even go so far as to say "dependent will"). So one can philosophically argue that people are deterministic, via concepts of Karma and rebirth. Of course, none of this can be proven. So your argument can be true too.

> The code they (and AI) writes, once written, executes deterministically.

Yes. Execution is deterministic. I am however talking only about determinism in terms of being able to know the entire path: input to output. Not just the output's characteristics (which are always going to be deterministic). It is the path from input to output that is not deterministic due to the presence of a black box - the model.


I mostly agree with you, but I see what afro88 is saying as well.

If you consider a human programmer as a "black box", in the sense that you feed it a set of inputs—the problem that needs to be solved, vague requirements, etc.—and expect a functioning program as output that solves the problem, then that process is just as nondeterministic as an LLM. Ensuring that the process is reliable in both scenarios boils down to creating detailed specifications, removing ambiguity, and iterating on the product until the acceptance tests pass.

Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs. First of all, they have an understanding of human psychology, and can actually reason about semantics in ways that a pattern matching and token generation tool cannot. And in the best case scenario of experienced programmers, they have an intuitive grasp of the problem domain, and know how to resolve ambiguities in meatspace. LLMs at their current stage can at best approximate these capabilities by integrating with other systems and data sources, so their nondeterminism is a much bigger problem. We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.


Agree with most of what you say. The only reason I say humans are different from LLMs when it comes to being a "black box" is because you can probe humans. For instance, I can ask a human to explain how he/she came to the conclusion and retrace the path taken to come to said conclusion from known inputs. And this can also be correlated with say brainwave imaging by mapping thoughts to neurons being triggered in that portion of the brain. So you can have a fairly accurate understanding of the path taken. I cannot probe the LLM however. At least not with the tools we have today.

> Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs.

Yes true. Another thought that comes to my mind is I feel it might also have to do with us recognizing other humans as not as alien to us as LLMs are. So there is an inherent trust deficit when it comes to LLMs vs when it comes to humans. Inherent trust in human beings, despite being less capable, is what makes the difference. In everything else we inherently want proper determinism and trust is built on that. I am more forgiving if a child computes 2 + 1 = 4, and will find it in me to correct the child. I won't consider it a defect. But if a calculator computes 2 + 1 = 4 even once, I would immediately discard it and never trust it again.

> We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.

Agreed.


This is true. What are the implications of that?


Perhaps there is no need to actually understand assembly, but if you don't understand certain basic concepts actually deploying any software you wrote to production would be a lottery with some rather poor prizes. Regardless of how "productive" you were.


Somebody needs to understand, to the standard of "well enough".

The investors who paid for the CEO who hired your project manager to hire you to figure that out, didn't.

I think in this analogy, vibe coders are project managers, who may indeed still benefit from understanding computers, but when they don't the odds aren't anywhere near as poor as a lottery. Ignorance still blows up in people's faces. I'd say the analogy here with humans would be a stereotypical PHB who can't tell what support the dev needs to do their job and then puts them on a PIP the moment any unclear requirement blows up in anyone's face.


I’m vaguely aware that transistors are like electronic switches, and if I search my memory I could build an and/or/not gate.

I have no idea how an i386 works, let alone a modern CPU. Sure, there are registers and different levels of cache before you get to memory.

My lack of knowledge of all this doesn’t prevent me from creating useful programs using higher abstraction layers like C.


That’s what a C compiler does when generating a binary.

There was a time when you had to know ‘as’, ‘ld’ and maybe even ‘ar’ to get an executable.

In the early days of g++, there was no guarantee the object code worked as intended. But it was fun working that out and filing the bug reports.

This new tool is just a different sort of transpiler and optimiser.

Treat it as such.


> There was a time when you had to know ‘as’, ‘ld’ and maybe even ‘ar’ to get an executable.

No, there wasn't: you could just run the shell script, or (a bit later) the makefile. But there were benefits to knowing as, ld and ar, and there still are today.


> But there were benefits to knowing as, ld and ar, and there still are today.

This is trivially true. The constraint on anything you do in your life is the time it takes to learn it.

So the far more interesting question is: At what level do you want to solve problems – and is it likely that you need knowledge of as, ld and ar over anything else, that you could learn instead?


Knowledge of as, ld, ar, cc, etc is only needed when setting up (or modifying) your build toolchain, and in practice you can just copy-paste the build script from some other, similar project. Knowledge of these tools has never been needed.


Knowledge of cc has never been needed? What an optimist! You must never have had headers installed in a place where the compiler (or Makefile author) didn’t expect them. Same problems with the libraries. Worse when the routine you needed to link was in a different library (maybe an arch-specific optimized lib).

That post is only true in the most vacuous sense.

“A similar project” discovered where, on BITNET?


The library problems you described are nothing that can't be solved using symlinks. A bad solution? Sure, but it works, and doesn't require me to understand cc. (Though when I needed to solve this problem, it only took me about 15 minutes and a man page to learn how to do it. `gcc -v --help` is, however, unhelpful.)

"A similar project" as in: this isn't the first piece of software ever written, and many previous examples can be found on the computer you're currently using. Skim through them until you find one with a source file structure you like, then ruthlessly cannibalise its build script.
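A sandboxed sketch of the symlink trick, with invented paths and file contents so it doesn't touch the real system (in real life the target would be somewhere like /usr/local/lib):

```shell
set -e
cd "$(mktemp -d)"
mkdir -p vendor/lib expected/lib

# Pretend the library got installed somewhere the build doesn't look
echo "fake library bits" > vendor/lib/libfoo.a

# Symlink it into the directory the Makefile/compiler expects to search
ln -s "$PWD/vendor/lib/libfoo.a" expected/lib/libfoo.a

cat expected/lib/libfoo.a   # reads through the symlink
```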


I feel like this really just says your tools are bad and leaky?


If you don't see a difference between a compiler and a probabilistic token generator, I don't know what to tell you.

And, yes, I'm aware that most compilers are not entirely deterministic either, but LLMs are inherently nondeterministic. And I'm also aware that you can tweak LLMs to be more deterministic, but in practice they're never deployed like that.

Besides, creating software via natural language is an entirely different exercise than using a structured language purposely built for that.

We're talking about two entirely different ways of creating software, and any comparison between them is completely absurd.


They are 100% different and yet kind-of-the-same.

They can function kind-of-the-same in the sense that they can both change things written in a higher level language into a lower level language.

100% different in every other way, but for coding in some circumstances if we treat it as a black box, LLMs can turn higher level pseudocode into lower level code (inaccurately), or even transpile.

Kind of like how email and the postal service can be kind of the same if you look at it from a certain angle.


> Kind of like how email and the postal service can be kind of the same if you look at it from a certain angle.

But they're not the same at all, except somewhat by their end result, in that they are both ways of transmitting information. That similarity is so vague that comparing them doesn't make sense for any practical purpose. You might as well compare them to smoke signals at that point.

It's the same with LLMs and programming. They're both ways of producing software, but the process of doing that and even the end result is completely different. This entire argument that LLMs are just another level of abstraction is absurd. Low-Code/No-Code tools, traditional code generators, meta programming, etc., are another level of abstraction on top of programming. LLMs generate code via pattern matching and statistics. It couldn't be more different.


People downvoting your comment are just "engineers" doomed to fail sooner or later.

Meanwhile, 9front users have read at least the plan9 intro and know about nm, 1-9c, 1-9l and the like. Vibe coders will be put in their place sooner or later. It's just a matter of time.


Competent C programmers know about nm, as, ld and a bunch of other binary tools in order to understand issues and do proper debugging.

Everyone else is deluding themselves. Even the 9front intro requires you to at least know the basics of nm and friends.


It's just another layer.

Assembly programmers from years gone by would likely be equally dismissive of the self-aggrandizing code block stitchers of today.

(on topic, RCT (RollerCoaster Tycoon) was coded entirely in assembly, quite the achievement)



