
Personally I'm with the parent poster - I use LLMs all the time to help me understand intent in codebases that are new to me, and empirically they seem to grasp it pretty well. Useful, especially when you don't have good documentation on hand.
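For what it's worth, the workflow is trivial to sketch. Here's a minimal example of the kind of "explain the intent" prompt I mean, assuming the OpenAI Python SDK with an API key in the environment; the model name and file path are placeholders, and any chat-capable model works the same way:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical file from an unfamiliar codebase
    snippet = open("unfamiliar_module.py").read()

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model you use
        messages=[
            {"role": "system",
             "content": "Explain the intent of code, not line-by-line mechanics."},
            {"role": "user",
             "content": f"What is this module trying to accomplish?\n\n{snippet}"},
        ],
    )
    print(resp.choices[0].message.content)

The interesting part isn't the API call, it's the prompt: asking "why" rather than "what" is what gets you intent, even on code with bad names and no comments.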


Boy it sure is useful that you can trust code you don’t understand to have clear variable names and comments for the LLM to key off of.


Clear variable names and comments aren't a requirement at all.

It sounds to me like you have a philosophical problem with LLMs, which is something I don't think we can debate in good faith. I can just share my experience, which is that they are excellent tools for this kind of thing. Obvious caveats apply - only a fool would rely entirely on an LLM without giving it any thought of their own.


I don’t have a philosophical problem with LLMs. I have a problem with treating LLMs as something other than what they are: predictive text generators. There’s no understanding informing the generation, just compression that arises as a byproduct of training. Thus I wouldn’t trust them for anything except churning out plausibly-structured text.
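To make "predictive text generator" concrete: at every step the model produces nothing but a probability distribution over the next token. A minimal sketch using Hugging Face transformers (gpt2 chosen only because it's small; any causal LM behaves the same):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The quick brown fox", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # one score per vocab token, per position

    # A distribution over the next token is the model's entire output
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}: {p:.3f}")

Everything else (chat, code explanation, apparent "reasoning") is this step run in a loop.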



