Hacker News

I'm going the exact opposite way: I provide all the important details in the prompt, and when I see that the LLM has understood something wrong, I start over and add the needed information to the prompt. So either the LLM gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "Now I see ..." crap, I throw everything away, because I know the LLM will only produce shit "solutions" from there.
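A minimal sketch of that "restart, don't correct" loop, assuming a generic chat-completion-style message format (the function names here are hypothetical, not any particular API):

```python
def build_prompt(base_prompt, clarifications):
    """Rebuild the full prompt from scratch, folding in every
    detail learned from earlier failed attempts."""
    parts = [base_prompt] + [f"Constraint: {c}" for c in clarifications]
    return "\n\n".join(parts)


def fresh_conversation(base_prompt, clarifications):
    """Each retry is a brand-new one-message conversation, so the
    context never contains the model's earlier wrong turns."""
    return [{"role": "user", "content": build_prompt(base_prompt, clarifications)}]
```

The point is that every retry sends exactly one user message, rather than appending corrections to a conversation that already contains the model's mistakes.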


This is actually a great approach. Essentially you're using time travel to avoid misunderstandings, which keeps the context from getting clogged up with garbage.


This is the best approach; it avoids the long context windows that get the LLM confused.



