SC
Posts: 5 · Comments: 214 · Joined: 2 yr. ago

  • The hidden prompt is only cheating if the reviewers fail to do their job right and outsource it to a chatbot; it does nothing to a human reviewer actually reading the paper properly. So I won't say it's right or ethical, but I'm much more sympathetic to these authors than to reviewers and editors outsourcing their job to an unreliable LLM.

  • The only question is who will get the blame.

    Isn't it obvious? Us sneerers and the big name skeptics (like Gary Marcuses and Yann LeCuns) continuously cast doubt on LLM capabilities, even as the labs are supposedly just a few more training runs and one more scaling away from AGI Godhood. We'll clearly be the ones to blame when the VC funding dries up, not years of hype without delivery.

  • I think we mocked this one back when it came out on /r/sneerclub, but I can't find the thread. In general, I recall Yudkowsky went on a mini-podcast tour a few years back, and the general trend was that he didn't interview well, even by lesswrong's own standards. He tended to simultaneously assume so much background familiarity with his writing that anyone not already steeped in it would be lost, while failing to add anything actually new for anyone who was. And there were lots of circular arguments and repetitious discussions with the hosts. I guess that's the downside of hanging around your own echo chamber blog for decades instead of engaging with wider academia.

  • For the purposes of something easily definable and legally valid, that makes sense, but it is still so worthy of mockery and sneering. Also, even if they needed a benchmark like that for their bizarre legal arrangements, there was no reason besides marketing hype to call that threshold "AGI".

    In general the definitional games around AGI are so transparent and stupid, yet people still fall for them. AGI means performing at least human level across all cognitive tasks. Not across all benchmarks of cognitive tasks, the tasks themselves. Not superhuman in some narrow domains and blatantly stupid in most others. To be fair, the definition might not be that useful, but it's not really in question.

  • Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you're right. Gary Marcus has been taking victory laps over a bar set so so low by promptfarmers and promptfondlers. Also, side note, his negativity towards LLM hype shouldn't be misinterpreted as general skepticism towards all AI... in particular Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches, it's just his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of LLM hypsters.

    Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

    Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start using lesswrong lingo and terminology and assigning P(some event) values based on numbers pulled out of his ass. Maybe he'll even start being "charitable" to meet their norms and avoid downvotes (I hope not, his snark and contempt are both enjoyable and deserved, but I'm not optimistic, based on how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism on the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won't actually learn much from him.

  • Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not.

    So, for most actual practical software development, writing tests is an entire job in and of itself, and it's a tricky one, because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting LLMs brute-force their code through a bunch of tests by trial and error won't actually get you good working code.

    AlphaEvolve kind of did this, but it was targeting very specific, well-defined, well-constrained algorithms that could have very specific evaluation functions written for them, and it was using an evolutionary algorithm to guide the trial-and-error process (a rough sketch of that kind of loop is below). They don't say exactly in their paper, but that probably meant generating candidate code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.

    I've noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.
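
    To make that concrete, here's a minimal sketch of an evaluation-guided loop in the AlphaEvolve spirit, with the LLM call stubbed out and a toy test suite standing in for a real evaluation function (every name here is hypothetical, not anything from the actual paper):

    ```python
    import random

    # Toy "evaluation function": a handful of unit tests scored 0..1.
    # Writing one that actually captures correctness is the hard part;
    # passing these says nothing about the cases the tests never exercise.
    TEST_CASES = [((2, 3), 5), ((0, 0), 0), ((-1, 4), 3)]

    def evaluate(program_src: str) -> float:
        scope = {}
        try:
            exec(program_src, scope)  # run the candidate code
            fn = scope["add"]
            passed = sum(fn(*args) == expected for args, expected in TEST_CASES)
            return passed / len(TEST_CASES)
        except Exception:
            return 0.0  # broken candidates score zero

    def llm_propose_variant(parent_src: str) -> str:
        # Stand-in for the actual LLM call; this is where the real systems
        # burn through thousands of generations.
        return parent_src

    def evolve(seed_src: str, generations: int = 200, population: int = 20) -> str:
        pool = [seed_src]
        for _ in range(generations):
            parent = max(random.sample(pool, min(len(pool), 5)), key=evaluate)
            pool.append(llm_propose_variant(parent))
            pool = sorted(pool, key=evaluate, reverse=True)[:population]
        return pool[0]

    best = evolve("def add(a, b):\n    return a + b\n")
    print(evaluate(best))
    ```

    The evaluation function is doing all the real work in that loop, and writing one that actually captures correctness doesn't scale beyond narrow, well-specified problems.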

  • Exactly. I would almost give the AI 2027 authors credit for committing to a hard date... except they already have a subtly hidden asterisk in the original AI 2027 noting some of the authors have longer timelines. And I've noticed lots of hand-wringing and but achkshuallies in their lesswrong comments about the difference between mode and median and mean dates and other excuses.

    Like see this comment chain https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts?commentId=2r8va889CXJkCsrqY :

    My timelines moved up to median 2028 before we published AI 2027 actually, based on a variety of factors including iteratively updating our models. But it was too late to rewrite the whole thing to happen a year later, so we just published it anyway. I tweeted about this a while ago iirc.

    ...You got your AI 2027 reposted like a dozen times to /r/singularity, maybe many dozens of times total across Reddit. The fucking vice president has allegedly read your fiction project. And you couldn't be bothered to publish your best timeline?

    So yeah, come 2028/2029, they already have a ready-made set of excuses to backpedal and push back the doomsday prophecy.

  • So two weeks ago I linked titotal's detailed breakdown of what is wrong with AI 2027's "model" (tldr; even accepting the "line goes up" premise of the whole thing, AI 2027's math was so bad that the line always asymptotes to infinity in the near future regardless of inputs). Titotal went to pretty extreme lengths to meet the "charitability" norms of lesswrong: corresponding with one of the AI 2027 authors, carefully considering what they might have intended, responding to comments in detail and depth, and in general not simply mocking the entire exercise in intellectual masturbation and hype generation like it rightfully deserves.

    But even with all that effort, someone still decided to make an entire (long, obviously) post with a section dedicated to tone-policing titotal: https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027?open=false#%C2%A7the-headline-message-is-not-ideal (here is the lw link: https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts)

    Oh, and looking back at the comments on titotal's post... his detailed elaboration of some pretty egregious errors in AI 2027 didn't really change anyone's mind, at most moving them back a year to 2028.

    So, moral of the story: lesswrongers and rationalists are in fact not worth the effort to talk to, and we are right to mock them. The numbers they claim to use are pulled out of their asses to fit vibes they already feel.

    And my choice for most sneerable line out of all the comments:

    https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models?commentId=XbPCQkgPmKYGJ4WTb

    And I therefore am left wondering what less shoddy toy models I should be basing my life decisions on.

  • Following up, because the talk page keeps providing good material...

    Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven't seen people try to weaponize the rules to push their views many times before.

    Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can't win with some people...

    Looking back at the original lesswrong discussion organizing the brigade to improve the wikipedia article, someone tried explaining the rules to Habryka back then, and they were dismissive.

    I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

    Yes Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms...

    Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for "access to ground truth". I guess even lesswrong knows that is bullshit.

  • The wikipedia talk page is some solid sneering material. It's like Habryka and HandofLixue can't imagine any legitimate reason why Wikipedia has the norms it does, and they can't imagine how a neutral Wikipedian could come to write that article about lesswrong.

    Eigenbra accurately calling them out...

    "I also didn't call for any particular edits". You literally pointed to two sentences that you wanted edited.

    Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can't speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

    As to your question:

    Was it intentional to try to pick a fight with Wikipedians?

    It seems to be ignorance on Habryka's part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia's reasonable policies, they seem to be doubling down.

  • Also lol at the 2027 guys believing anything about how grok was created.

    Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027). But lol at the notion that they could ever talk any of the techbro billionaires into accepting meaningful regulation. They still don't understand that their doomerism is free marketing hype for the techbros, not anything any of them actually treats as meaningfully real.

  • Yeah, AI 2027's model fails back-of-the-envelope sketches as soon as you try working out any features of it, which really calls into question the competence of its authors and everyone who has signal-boosted it. They could have easily generated the same crit-hype bullshit with "just" an exponential model, but for whatever reason they went with this one. (They had a target date they wanted to hit? They correctly realized adding extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)

  • So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people who want more detail, titotal did a really detailed breakdown of why their model is bad, even given their assumptions and even granting the attempt to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models

    tldr; the AI 2027 model, regardless of inputs and current state, has task time horizons basically going to infinity at some near-future date because of how they set it up. The authors also make a lot of other questionable choices, and there are a lot of other red flags in their modeling. And the task-time-horizon curve fit shown on their fancy graphical interactive webpage is unrelated to the model they actually used and omits some earlier data points that would make it look worse.
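
    To see why the blow-up doesn't depend on the inputs, here's a minimal sketch of the failure mode titotal describes, simplifying the "superexponential" option down to the assumption that each successive doubling of the time horizon takes a constant fraction α < 1 of the time the previous doubling took:

    ```latex
    % Time for infinitely many doublings, if the first takes T_1 and each
    % subsequent doubling takes alpha < 1 times as long as the one before:
    \[
      t_\infty \;=\; T_1 \sum_{n=0}^{\infty} \alpha^{n} \;=\; \frac{T_1}{1-\alpha},
      \qquad 0 < \alpha < 1,
    \]
    % a convergent geometric series, so the modeled horizon hits infinity
    % at the fixed date t_infty no matter what finite horizon you start from.
    ```

    Which is why fiddling with the inputs only shifts the blow-up date by months: the singularity is baked into the functional form, not the data.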

  • If you wire the LLM directly into a proof-checker (like with AlphaGeometry) or evaluation function (like with AlphaEvolve) and the raw LLM outputs aren't allowed to do anything on their own, you can get reliability. So you can hope for better, it just requires a narrow domain and a much more thorough approach than slapping some extra firm instructions in an unholy blend of markup languages in the prompt.

    In this case, solving math problems is actually something Google search could previously do (before dumping AI into it) and Wolfram Alpha can do, so it really seems like Google should be able to offer a product that gets math problems right. Of course, that solution would probably involve bypassing the LLM altogether through preprocessing and post-processing.

    Also, btw, LLMs can be (technically speaking) deterministic if the temperature is set all the way down to zero; it's just that this doesn't actually improve their performance at math or anything else. And they would still be "random" in the sense that minor variations in the prompt or previous context can induce seemingly arbitrary changes in output.
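
    As a sketch of what "bypassing the LLM" could look like (hypothetical routing code, not anything Google actually ships): detect that the query parses as math and hand it to a deterministic solver, only falling back to a model for everything else.

    ```python
    import sympy

    def call_llm(prompt: str, temperature: float = 0.0) -> str:
        # Stand-in for a real model call, not a real API. temperature=0 makes
        # the output reproducible, but it's still the same model making the
        # same mistakes, and tiny prompt changes still shuffle the answer.
        return "(imagine an LLM answer here)"

    def answer(query: str) -> str:
        # Crude router: if the query parses as a math expression, use a
        # deterministic symbolic engine instead of sampling from a model.
        try:
            expr = sympy.sympify(query, evaluate=True)
            return str(sympy.simplify(expr))
        except (sympy.SympifyError, TypeError):
            return call_llm(query, temperature=0)

    print(answer("(3/7) + (2/7)"))  # -> 5/7, exactly, every time
    ```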

  • We barely understand how LLMs actually work

    I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crit-hype.

    It's true that reverse engineering any specific output or task takes a lot of effort, requires access to the model's internal weights, and hasn't been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.

    which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

    This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond to mistakes, or, at best, from fine-tuning on other LLM output, and they don't have any way of inspecting their own internals, so the words they produce in response to a mistake are just more bullshit unrelated to what actually happened inside the model.