Sure, and that's fair. Extreme viewpoints are seldom the likely scenarios anyway, but my disagreement with him stems from his unwarranted confidence in his own ability to predict the future when he has already been wrong about LLMs.
He has zero epistemic humility.
We don't know the nature of intelligence. His difficulties in scaling up his own research are a testament to this fact. That means we have no real theoretical basis on which to rest the claim that superintelligence cannot in principle emerge from LLM-adjacent architectures. How can we make such a statement when we don't even know what such a thing looks like?
We could be staring at an imperative definition of superintelligence and not know it, never mind that approximations to such a function could in principle be learned by LLMs (universal approximation theorem). It sounds exceedingly unlikely, but would you rather be comforted by false confidence, or be told the honest truth about what our current scientific understanding can actually tell us?
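To be clear about what I'm leaning on there: the classical universal approximation theorem (Cybenko 1989, Hornik 1991) only says that a feedforward network with one hidden layer and a suitable (e.g. sigmoidal) activation can approximate any continuous function on a compact domain to arbitrary precision. Roughly:

\[
\forall f \in C(K),\ K \subset \mathbb{R}^n \text{ compact},\ \forall \varepsilon > 0:\quad
\exists\, N(x) = \sum_{i=1}^{m} \alpha_i\, \sigma\!\left(w_i^\top x + b_i\right)
\ \text{ such that }\
\sup_{x \in K} \lvert f(x) - N(x) \rvert < \varepsilon
\]

where \(\sigma\) is the activation and \(m\) is the hidden width. It says nothing about whether training on next-token prediction would ever actually find such an approximation, so treat it as an in-principle existence claim, not a forecast.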