Hochman correctly deduces that Representationalism originated in the 18th century, but fails to realize that the roots of this movement actually lie with the Bavarian Illuminati, a secret society which was supposedly suppressed by edict in 1784 but has in fact maintained a shadowy existence influencing politics up to the present day.
I don't really understand what point Zitron is making about each query requiring a "completely fresh static prompt", nor about the relative ordering of the user and static prompts. Why would these things matter?
And sure enough, just within the last day the user "Hand of Lixue" has rewritten large portions of the article to read more favorably to the rationalists.
I thought part of the schtick is that, according to the rationalist theory of mind, a simulated version of you suffering is exactly the same as the real you suffering. This relies on their various other philosophical claims about the nature of consciousness, but if you believe those claims, then empathy doesn't have to be a concern.
If the growth is superexponential, we make it so that each successive doubling takes 10% less time.
(From AI 2027, as quoted by titotal.)
This is an incredibly silly sentence, and it alone is enough to determine the output of the entire model: it necessarily implies that the predicted value becomes infinite in a finite amount of time, regardless of almost every other feature of how the prediction is calculated.
To elaborate, suppose we take as our "base model" any function f with the property that lim_{t → ∞} f(t) = ∞. Now define a "super-f" function by requiring that each successive unit block of "virtual time", as seen by f, takes 10% less "real time" than the last. The real-time durations then form a geometric series, so the total real time is finite; inverting the accumulated real time back into virtual time gives a function like g(t) = f(-log(1 - t)). Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of "virtual time" into a finite amount of "real time".
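To spell the inversion out (a sketch, under the assumption that the 10% shrinkage applies per unit block of virtual time, so the ratio is r = 0.9):

```latex
t(v) = \sum_{k=0}^{v-1} 0.9^{k} = 10\left(1 - 0.9^{v}\right)
\quad\Longrightarrow\quad
v(t) = \frac{-\log\left(1 - t/10\right)}{\log(10/9)}
```

So g(t) = f(v(t)) diverges at the finite real time t = 10 no matter how slowly f grows; the simpler form g(t) = f(-log(1 - t)) above is the same function after rescaling t and absorbing the log base.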
You're totally misunderstanding the context of that statement. The problem of classifying an image as a certain animal is related to the problem of generating a synthetic picture of a certain animal. But classifying an image as a certain animal is totally unrelated to generating a natural-language description of "information about how to distinguish different species". In any case, we know empirically that these LLM-generated descriptions are highly unreliable.
I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that's the only part of the whole thing that actually works.
The multiple authors thing is certainly a joke, it's a reference to the (widely accepted among scholars) theory that the Torah was compiled from multiple sources with different authors.
I'm not sure what you mean by your last sentence. All of the actual improvements to omega were invented by humans; computers have still not made a contribution to this.