
It's helpful to treat working solutions and quality code as two separate things where an LLM is concerned.

* If you ask it to solve a problem and nothing more, chances are the code won't be the best, as it will default to the most common solutions in its training data.

* If you ask it to refactor some code idiomatically, it will apply the most common idiomatic patterns found in its training data.

* If you ask it to do both at the same time, you're more likely to get higher-quality but incorrect code.

It's better to get a working solution first, then ask it to improve that solution, and rinse/repeat in smallish chunks of 50-100 LOC at a time. This is kinda why reasoning models are of some benefit: they allow a certain amount of reflection, tying together disparate portions of the training data into more cohesive, higher-quality responses.
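The workflow above can be sketched as a two-phase loop: one prompt for correctness only, then separate refactoring passes over small chunks. This is only an illustration of the idea, not a real implementation; `ask_llm` is a hypothetical placeholder (it just echoes the code back so the sketch runs without any API), and the 75-line chunk size is an arbitrary pick from the 50-100 LOC range mentioned.

```python
def chunks(lines, size=75):
    """Split source lines into chunks of roughly `size` lines
    (an arbitrary value from the 50-100 LOC range above)."""
    return [lines[i:i + size] for i in range(0, len(lines), size)]


def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model call. Here it just
    # echoes back the code portion after the instruction line, so
    # the sketch runs standalone.
    return prompt.split("\n", 1)[1] if "\n" in prompt else prompt


def solve_then_refine(task: str, size: int = 75) -> str:
    # Phase 1: ask only for a working solution, nothing more.
    code = ask_llm(f"Solve this problem; correctness only:\n{task}")

    # Phase 2: refactor idiomatically in small chunks, one pass per
    # chunk, so each request stays focused and reviewable.
    improved = []
    for chunk in chunks(code.splitlines(), size):
        block = "\n".join(chunk)
        improved.append(
            ask_llm(f"Refactor idiomatically, keep behavior identical:\n{block}")
        )
    return "\n".join(improved)
```

With a real client swapped into `ask_llm`, you would also want to run tests between the two phases, so an incorrect "idiomatic" rewrite can be caught and re-prompted before moving to the next chunk.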


