
Your objective has an explicit instruction that the car has to be present for a wash. Quite a difference from the original phrasing, where the model has to figure that out itself.


That's the answer from his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.


> I think you didn't get it.

I did get it, and in my view my point still stands. If I need to use special prompts to ask such a simple question, then what are we doing here? The LLMs should be able to figure out a simple contradiction in the question the same way we (humans) do.


Not really a special prompt. It's basically my custom instruction to ChatGPT; the purpose of that instruction is to disambiguate my ramblings. It's pretty effective. I always use speech-to-text, so my input is messy and this cleanup really helps.
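
For illustration, something along these lines; the exact wording, model name, and example input are placeholders, not my actual instruction:

    # Hypothetical sketch: a "disambiguate my ramblings" custom instruction,
    # sent as a system message via the OpenAI Python SDK (openai>=1.0).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CUSTOM_INSTRUCTION = (
        "Before answering, restate my request in one clear sentence, "
        "point out any implicit assumptions or contradictions, and ask "
        "for clarification if the request is ambiguous."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTION},
            # Messy speech-to-text input goes in verbatim:
            {"role": "user", "content": "ok so the car wash takes ten minutes "
                                        "but uh the car isn't even there yet"},
        ],
    )
    print(response.choices[0].message.content)

The point is just that the cleanup/disambiguation step lives in the standing instruction, so I don't have to craft a careful prompt every time.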


> Your objective has explicit instruction that car has to be present for a wash.

Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?


In this case, with such a simple task, why even bother to prompt it?

The whole point of this question is to show that, quite often, the LLM fails to discover implicit assumptions on its own.



