
Perhaps so. On the other hand, there's probably a lot of low-hanging fruit they could pick just by reading the article, reading the cited sources, and making corrections. Humans can do this, but rarely do, because it's so tedious.

I don't know how it will turn out. I don't have very high hopes, but I'm not certain it will all get worse either.



The entire point of the article is that LLMs cannot produce accurate text, so ironically, your claim that LLMs can produce accurate text illustrates your own point about human reliability perfectly.

I guess the conclusion is that there simply are no avenues to gain knowledge.



