
"since code is just text" . . .

No. It really isn't. I haven't gotten to play around with GPT-3 yet and I am sure it is very advanced, but code is extremely fragile in a way human language is not. I only say this as someone who started a company trying to use AI to generate code and banged my head against the wall until credit limits and negative bank balances forced me to quit.

I estimated we could have written 7 PhD theses on the technical hurdles we would have had to solve to get code good enough for a product someone would pay for.



"code is extremely fragile in a way human language is not"

Very well put. Change a single character in a working complex program and it may start doing something completely different, or, much worse, subtly different.
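A made-up Python illustration (not the parent's example): a single-character edit that raises no error but quietly changes the result.

    # Sum of the first n positive integers.
    def total(n):
        return sum(range(1, n + 1))    # total(3) == 6

    # One character changed: "+" became "-". No crash, just quietly wrong.
    def total_subtle(n):
        return sum(range(1, n - 1))    # total_subtle(3) == 1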


Another thing we all learned the hard way when people tried "model-driven development" (that is, tools that generated code from UML diagrams and flowcharts) is that writing code the first time is one thing. Modifying it later is something else entirely.


In fairness, this does apply to natural language too.

"Let's eat, grandma!"

"Let's eat grandma!"

The difference is that we have no notion of getting people to blindly and literally follow instructions generated by AI. Having people in the execution loop creates an implicit layer of sanity-checking, plus language is inherently ambiguous and the reader will tend to interpret things in sensible ways even if the writer didn't fully understand them.


Isn't that because compilers aren't written to cope with variations, though? That rigour is necessary because humans can't deal with ambiguity. A compiler written using AI could happily understand what 'int', 'itn', 'it', 'integer', 'IntyMcintyFace', and every conceivable variation mean, and still compile them all to the same machine code. Humans don't want that in a language because it makes it hard to use. AI doesn't care.
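A rough sketch of that idea (just fuzzy string matching in Python against a made-up keyword list, nothing like a real AI compiler):

    import difflib

    KEYWORDS = ["int", "float", "char", "return"]

    def normalize(token):
        # Map a possibly misspelled token to the closest known keyword;
        # the 0.6 cutoff is arbitrary, anything less similar is left alone.
        match = difflib.get_close_matches(token.lower(), KEYWORDS, n=1, cutoff=0.6)
        return match[0] if match else token

    print(normalize("itn"))     # -> "int"
    print(normalize("retrun"))  # -> "return"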


I disagree with this. I think humans excel at ambiguity (they also excel at getting the intended meaning wrong, of course). Computers on the other hand take instructions literally. You could train them to probabilistically guess what the misspelling means, but whether they'll be better than a human remains to be seen (I personally doubt it but this can be tested).

What irks me about the assertion that "code is text" is that it's false. Code has a textual representation, which some people (not me!) argue is not even the best one; what's clear is that text is just one representation, not the only one, and it is not itself the code. To have an AI learn to "type" code as a string of words and characters seems obtuse if the goal is to have AI generated software. AI could operate at a different level; why bother with typing characters? It seems to me the wrong level of abstraction, akin to designing a robot hand and driving it with an AI to physically use a keyboard as a way to write code.
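To make the "text is just one representation" point concrete, a small Python example (assumes Python 3.9+ for ast.unparse): the same code exists as a string, as a syntax tree, and the string can be regenerated from the tree.

    import ast

    source = "def double(x):\n    return x * 2\n"
    tree = ast.parse(source)

    # The code is now a structured object, not a string of characters.
    func = tree.body[0]
    print(type(func).__name__)   # FunctionDef
    print(func.name)             # double

    # Text can be regenerated from the tree; it's just one serialization.
    print(ast.unparse(tree))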


> To have an AI learn to "type" code as a string of words and characters seems obtuse if the goal is to have AI generated software.

It sure does seem obtuse. An AI generated computer architecture sounds more reasonable. Why stop at AST or byte code?


>AI could operate at a different level, why bother with typing characters?

Because if it does something wrong, you have to be able to find out what it is.


Actually, I think this illustrates what is wrong with the idea of AI-generated code.

If you feel uneasy about AI-generated binary code ("I want to be able to debug it if something goes wrong!") you should feel equally uneasy about AI-generated high-level language. The chances that it'll be broken in subtle ways are likely to be very similar, and I don't see a good reason to believe that debugging AI-generated Haskell is going to be that much easier than debugging AI-generated executables.


You can generate human-readable data out of whatever the AI generates. As an input representation, text made of words doesn't seem optimal for an AI.


You can just decompile whatever it is the AI writes.
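For instance, sticking to the Python standard library as an illustration: even if a model emitted a compiled function object rather than source text, you can still get a human-readable listing out of it.

    import dis

    def double(x):          # stand-in for whatever the AI produced
        return x * 2

    dis.dis(double)         # prints a readable bytecode disassembly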


Yeah, keep hearing about how AI will make programmers obsolete. Good luck teaching an AI how to interpret the product managers' super vague requests and understand the context in one sentence.


> Yeah, keep hearing about how AI will make programmers obsolete. Good luck teaching an AI how to interpret the product managers' super vague requests and understand the context in one sentence.

This is the point the article was trying to make! Coding is not the same thing as understanding requests and translating them into software. Someone who is doing that (e.g. a software engineer) will be able to make use of a bot that can "code".


For what it’s worth, people said the same thing about Chess, then Go, then Starcraft. In these battles, AI seems to win with enough time.


These are all games with fixed rules and perfect information. Product managers who can't express themselves are a different problem.

Making an AI that can generate code from requirements is probably difficult, but manageable. Making an AI that asks the right questions, gets stakeholders to agree on something reasonable, and creates solid specifications from that is probably a long way off.


The perfect information requirement got dropped when the AI started playing Starcraft with fog of war. It then has to decide how and when to scout, which is pretty cool.


I only disagree in that "fragile" isn't a strong enough word to describe what you're talking about.

Plus, correct me if I'm wrong, but AI still isn't at problem-solving capacity yet. It's still just a fast-acting statistical machine that fits "round enough" square pegs into round holes.


Agree. It may happen soon, but GPT-3 isn't it. One of the biggest problems is that it doesn't have any idea when it is wrong. This is a big problem even in the human domain, but especially with AI.

https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.h...


It can easily do copy-paste programming in interpreted languages.


Would be curious what it comes up with when writing J :p


This is an interesting nuance, and you make a great point, but I actually just meant this as literally as possible and wasn't trying to get into the details of GPT-3 specifically - I just referenced it as an introduction to the topic.

That's actually kind of the point - I think AI writing code is something that will actually add value to the skills software engineers have (TL;DR: those skills are much more than writing code), and the article is a discussion of why that is, treating technologies like this as a matter of when they work rather than if.

Whether they can be made to work is also definitely an interesting discussion, though, and it's somewhat a prerequisite for the stuff I talk about in this article.



