
Super-recursive fractal of not-even-wrongness

skyview.social

A BlueSky thread by Mat G on Skyview

I went down a rabbit hole yesterday, looking into the super-recursive Wikipedia article we first sneered at here and then revisited in Tech Takes. I will regret it forever.

You can view my live descent into madness, but I reformatted the thread and added some content in a more reasonable form below. Note that there is plenty I didn't have the time/wine to sneer at, both in the article itself and in the amazing arxiv-published "paper" that serves as its underpinning.

Content warning: if you know anything about Turing machines you will want to punch Darko Roglic straight in the control unit near the end.

20 comments
  • please check, I have taken out and shot a lot of it

  • holy fuck this is the kind of craziness I was hoping someone would dig up (rants about the orthodoxy and all) when I realized the Wikipedia articles had some flat earth level shit in them. thank you for the great read! if there’s ever a sneercon, I owe you a few bottles of wine (or your choice of stronger alcohol)

    sans the evolutionary computing and other nonsense, the theoretical core (if you can call it that) of this bullshit seems to be that you can ignore the halting problem if you don’t halt — that implementing (practically) a step limit for your Turing machine is some kind of revolutionary step. I’m not much of a CS practitioner outside of my hobbies, but even I know that solves nothing, for all the reasons you explained eloquently in your post. so it’s kind of fucking amazing to me how frequently I see the opinion on the orange site, among the Rationalists, and even from the Urbit fascists (check out our urbit threads if you haven’t already — those folks go deep on CS crankery, including the idea that their bullshit lambda calculus variant is somehow capable of modeling problems the original can’t) that the halting problem is easily solved via workarounds and tricks like that. it’s actually kinda scary how hard the Rationalists in particular try to reject the basics of CS (because they easily disprove their religious beliefs) and replace them with pseudoscience, and how much of this bullshit places like the orange site echo just because someone cited a crank paper or wikipedia article
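
    to make that concrete, here’s a minimal sketch (mine, not anything from the article, using a toy `step` interface I made up): bolt a step limit onto a machine and all you ever learn is “halted” or “no idea”, because whatever limit you pick, some program halts one step past it.

        # toy step-limited runner: `step` maps a state to (new_state, halted?)
        def run_with_limit(step, state, step_limit):
            for _ in range(step_limit):
                state, halted = step(state)
                if halted:
                    return True       # halted within the limit
            return None               # inconclusive, NOT "it never halts"

        # toy machine that counts down from n and halts at 0
        countdown = lambda n: (n - 1, n - 1 == 0)

        print(run_with_limit(countdown, 5, 100))     # True: halts in 5 steps
        print(run_with_limit(countdown, 10**9, 10))  # None: undecided, not "no"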

    • "The halting problem is easily solved via workarounds" sounds like an Elon Musk tweet trying to reassure investors that self-driving is just around the corner you guys.

      • “these scientist eggheads told me it was impossible and a bad idea but I did it anyway” is the fantasy scenario for so much of the capitalist class; it’s fucking bizarre how much these folks hate CS theory, but they still require their workers to have a BS in CS or a similar field because they feel it increases their prestige (though going after anything higher than a BS is usually very heavily discouraged — I’ve had managers shrug and go “if you want to waste your money” when I mentioned wanting to further my education, and a lot of these bullshit moonshot projects tend to prefer fresh college grads who don’t know how to say no for all positions)

    • I'm glad I found at least one person who enjoyed the rant, makes me feel much better about wasting my braincells on this nonsense.

    • those folks go deep on CS crankery, including the idea that their bullshit lambda calculus variant is somehow capable of modeling problems the original can’t

      what. where did i miss this.

      • it’s been a minute, but I believe that was one of the replies you got in one of the orange site urbit threads after it was pointed out that Nock is just lambda calculus with a bunch of bits glued on. that’s not an uncommon way to derive a functional language (it’s the rough origin of the ML family of languages), but yarvin claimed that Nock is much more efficient than lambda calculus (absolutely not and that’s not even a high bar) and somehow revolutionary. when challenged on the latter point, the urbit fans in the thread started claiming that Nock is capable of solving problems that lambda calculus can’t and gave a very similar abuse of the CTT to what we’ve seen in this thread. it was pure crankery, but it being the orange site I remember the crankery seemed to get a bunch of upvotes
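
        for the record, a minimal sketch (mine, hedged): Church numerals written as plain Python lambdas, just to underline what “solves problems lambda calculus can’t” is actually claiming to beat, namely a system that already encodes arbitrary computation.

            # Church numerals: numbers as pure functions, straight out of the
            # untyped lambda calculus
            zero = lambda f: lambda x: x
            succ = lambda n: lambda f: lambda x: f(n(f)(x))
            add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

            to_int = lambda n: n(lambda k: k + 1)(0)  # decode to a Python int
            two = succ(succ(zero))
            three = succ(two)
            print(to_int(add(two)(three)))            # 5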

  • This is a quality sneer, bravo 👍 . I had randomly encountered this super-recursive article in the wild after unfortunately stumbling upon some 🐀 blog post about how either the CTT lives in your heart or it doesn't (NOT A CULT).

    Speaking of hypercomputation, reminds me of how ol' Ilya was saying 'obviously' the only way NNs' success could be explained is that they're minimizing Kolmogorov complexity, a literally uncomputable quantity. Try teaching a NN to learn 4-digit addition and tell me if the result looks Kolmogorov-optimal.
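
    to put a number on that, a minimal sketch (my own illustration, not Ilya's argument): the near-Kolmogorov-optimal description of an addition table is the tiny program that generates it, and nothing about gradient descent searches for that program.

        import zlib

        # a 100x100 slice of the 4-digit addition table, written out as raw data
        table = "\n".join(f"{a}+{b}={a+b}" for a in range(1000, 1100)
                                           for b in range(1000, 1100))

        # zlib finds statistical redundancy, not the ~80-character generator
        # expression above, which is much closer to the shortest description
        print(len(table), "->", len(zlib.compress(table.encode())))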

  • hey speaking of Turing crankery, here’s a reply to a Mastodon post in this thread about how LLMs are a million monkeys producing garbage:

    the “they’re indistinguishable from Turing machines” bullshit is something we’ve seen straight from Ilya; the simulation of a universe garbage is new to me, though it reeks of Rationalist dogma

    • LLMs aren’t nearly random enough to ever produce the entire works of Shakespeare, no matter how much infinite time you give them (though I’m sure they are capable of abominable stitchings of regurgitated quotes/snippets).

      It’s always baffling when people (who’ve given it adequate thought) take Library of Babel-type things seriously while ignoring the overwhelming amount of nonsense, which would be hard to separate out unless all you’re looking for is an exact echo of your query.
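
      for a sense of scale, some quick arithmetic (mine, with a deliberately generous 27-symbol alphabet): even one famous 39-character line is astronomically unlikely from uniform random typing, and an LLM isn’t even sampling uniformly.

          # probability that one uniform random attempt matches the phrase exactly
          phrase = "to be or not to be that is the question"
          alphabet_size = 27                      # a-z plus space, simplified
          p = (1 / alphabet_size) ** len(phrase)
          print(f"{p:.1e}")                       # ~1.5e-56 per 39-char attempt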

  • Reading the OP wiki article sounds exhausting. Is it just the ontological argument but for juicy computing? As in, juicy enough for brain simulation or AGI or whatever.

    Not willing to look into the abyss tonight, basically.

    • tl;dr it would be SO cool if computers were magical and if i do fancy enough maths i might make them magical and you can't prove i can't unless you assume such tawdry details as "the Church-Turing thesis".
