
  • it’s likely that the user will spend at least 30 minutes to an hour unable to articulate language.

    This presumes Johnson was able to articulate language in the first place, which, given that his brain has melted at an incredible pace since 2020, may be a bit of a stretch.

  • will these guys ever get to rainbow-chart levels of galaxy brain, or will they just be content to fudge some numbers on regular visuals?

    This is reminiscent of memestock/buttcoin charts, where a new asymptotic curve is dropped onto the same rather flat graph over and over again.

  • the mention in QAA came during that episode, and I think there it was more illustrative of how a person can progress to conspiratorial thinking about AI. The mention in Panic World was from an interview with Ed Zitron's biggest fan, Casey Newton, if I recall correctly.

  • One thing I've heard repeated about OpenAI is that "the engineers don't even know how it works!" and I'm wondering what the rebuttal to that point is.

    While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I've heard this repeated at least twice (once on the Panic World pod, once on QAA).

    I would believe that it's possible to build a system so complex, and with so little documentation, that it is incomprehensible on its surface. But the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one toward thinking that maybe we could bootstrap consciousness.

    It seems like magical thinking to me, and a way of saying one or both of "we didn't write shit down and therefore have no idea how the functionality works" and "we do not have a practical way to determine how a specific output was arrived at from any given prompt." The first is, in part or on the whole, unlikely: the system has to be comprehensible enough that new features can be added, so engineers would have to grok it at least that well. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g., training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop). The toy sketch at the end of this comment makes that point concrete.

    Anybody else have thoughts on countering the magical "the engineers don't know how it works!" line?
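
    Here's the minimal sketch of the implicit-inputs point. Nothing in it is a real model API: toy_generate and its arguments are made up for illustration. It treats generation as an ordinary deterministic function, so the same inputs always give the same output, and the "mystery" appears only when an observer can't see all of the inputs.

    ```python
    # Toy sketch only: toy_generate is a made-up stand-in, not any real
    # model API. The point: the output is a deterministic function of ALL
    # inputs, including the "implicit" ones an observer may never see.
    import hashlib
    import random


    def toy_generate(weights: bytes, prompt: str, user_context: str,
                     system_context: str, seed: int) -> str:
        """Deterministically map every input to an output string."""
        # Fold every input into one digest; this stands in for the
        # forward pass of a real model, which is just as deterministic.
        digest = hashlib.sha256(
            weights + prompt.encode() + user_context.encode()
            + system_context.encode() + seed.to_bytes(8, "big")
        ).digest()
        rng = random.Random(digest)  # seeded, hence reproducible sampling
        vocab = ["the", "a", "soda", "ad", "slop", "frame", "of", "pure"]
        return " ".join(rng.choice(vocab) for _ in range(8))


    weights = b"\x00" * 16  # stands in for gigabytes of trained parameters

    a = toy_generate(weights, "make an ad", "user ctx", "system ctx", 42)
    b = toy_generate(weights, "make an ad", "user ctx", "system ctx", 42)
    print(a == b)  # True: identical inputs, identical output

    # Change an *implicit* input the observer never saw and the output
    # moves, which looks like mystery only if you couldn't see that input:
    c = toy_generate(weights, "make an ad", "user ctx", "OTHER ctx", 42)
    print(a == c)  # almost certainly False
    ```

    None of this explains why a given output was produced (interpretability is genuinely hard work), but "hard to explain" is a very different claim from "unknowable in principle."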

  • “I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,”

    I say right before inhaling deeply from a bag into which I have dispensed a hefty amount of spray paint.

  • That WSJ review is something special, and I didn't have sound on or CC, so I'm sure there's some weapons-grade stupid going on in the dialog that I am missing. I stopped watching around the point when they put up a picture of Alan Turing (AI Pioneer!) and a picture of the "AI Pilot" whose first name is Turing, and then highlighted that both have the word "Turing" in their names.

    Also, that overgrown Roomba with hip dysplasia took five minutes to put two glasses in a dishwasher, poorly.