A leading AI contrarian says he's been proved right that LLMs and scaling won't lead to AGI, and the AI bubble is about to burst.

32 comments
  • It seems difficult for anyone to make that claim with any confidence until we have quantum computers in our pockets. The real scaling bottleneck is the binary digital computing paradigm, not any inherent limit on AI/neural-network intelligence. It also depends on how you define "intelligence": my own research toward a unified "theory of everything" suggests that human intelligence is fundamentally repetitive imitation (mimicry). That's no different from AI learning algorithms, just more advanced: we have far more neurons than any AI model.