
Nota bene - there is a fair amount of research indicating that a model's outputs and actual 'thoughts' do not necessarily align with its chain-of-reasoning output.

You can validate this pretty easily by asking some logic or coding questions: you will likely notice that the final output is not always the logical conclusion of the end of the thinking; sometimes it is significantly orthogonal to it, or the model returns to reasoning in the middle of the answer.
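One quick way to eyeball the mismatch is to dump the trace and the answer side by side. A minimal sketch, assuming DeepSeek's OpenAI-compatible API (the deepseek-reasoner model returns its trace in a reasoning_content field; the API key is a placeholder):

    # Minimal sketch: compare a model's chain of thought to its final
    # answer. Assumes DeepSeek's OpenAI-compatible API, where
    # deepseek-reasoner returns the trace in message.reasoning_content.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

    resp = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{"role": "user", "content":
                   "Is 9.11 greater than 9.9? Answer yes or no."}],
    )
    msg = resp.choices[0].message
    print("--- reasoning trace ---")
    print(msg.reasoning_content)
    print("--- final answer ---")
    print(msg.content)  # check by hand: does this follow from the trace?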

All that to say - it's a good idea to read it, but stay vigilant about the final outputs.



That's a good note. I use DeepSeek for early planning of a project because of how valuable its reasoning output can be. It's common that I'll describe my problem and first-draft architecture and see something in the output like "Since this has to be mobile optimized..." Then I'll stop generation, edit the original prompt to specify that I don't have to worry about mobile, and run it again.
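A rough sketch of that loop, under the same API assumption as above (the prompt text is just a placeholder): stream the reasoning deltas so you can spot a bad assumption early, kill the run, and edit the prompt:

    # Sketch of the "watch the reasoning, bail early" workflow. Assumes
    # DeepSeek's OpenAI-compatible streaming API, where chunks carry the
    # trace in delta.reasoning_content before the answer in delta.content.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

    stream = client.chat.completions.create(
        model="deepseek-reasoner",
        stream=True,
        messages=[{"role": "user", "content":
                   "Plan the architecture for <your project description>."}],
    )
    for chunk in stream:
        delta = chunk.choices[0].delta
        reasoning = getattr(delta, "reasoning_content", None)
        if reasoning:
            print(reasoning, end="", flush=True)
            # Spot a wrong assumption here ("Since this has to be mobile
            # optimized...")? Ctrl-C, fix the prompt, rerun.
        elif delta.content:
            print(delta.content, end="", flush=True)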



