> He's been saying something like this for a long time [...] it's increasingly looking like LeCun is right.
No? LLMs are getting smarter and smarter. Only three years have passed since ChatGPT was released, and we already have models generating whole apps, competently working on complex features, solving math problems at a level only a small percentage of the population can reach, and much more. The progress is constant and the results are stunning. It really makes me wonder what sort of denial people are in when they claim this has been proven to be a dead end.
If you call that AGI, as many do, or ASI, then we are not talking about the same thing. I'm talking about conversing with an AI and being unable to tell whether it's human, in a kind of "Turing Plus" test: Turing Plus 9 would mean 90% of humans can't tell whether it's human. We're at Turing Plus 1. I can easily tell Claude Opus 4.5 is a machine by the mistakes it makes. It's dumb as a box of rocks. That's how I define AGI, and beyond it, ASI.