
Posts: 16 · Comments: 488 · Joined: 2 yr. ago

  • I wonder if this is just a really clumsy attempt to invent stretching the Overton window from first principles, or if he really is so terminally rationalist that he thinks a political ideology is a sliding scale of fungible points and being 23.17% ancap can be a meaningful statement.

    That the exchange of ideas between friends is supposed to work a bit like the principle of communicating vessels is a pretty weird assumption, too. Also, if he thinks it's ok to admit that he straight up tries to manipulate friends in this way, imagine how he approaches non-friends.

    Between this and him casually admitting that he keeps "culture war" topics alive on the substack because they get a ton of clicks, it's a safe bet that he can't be thinking too highly of his readership, although I suspect there is an esoteric/exoteric teachings divide that is mostly non-obvious from the online perspective.

  • In his early blog posts, Scott Alexander talked about how he was not leaping through higher education in a single bound.

    He starts his recent article on AI psychosis by mixing up psychosis with schizophrenia (he calls psychosis a biological disease), so that tracks.

    Other than that, I think it's ok in principle to be ideologically opposed to something even if you and yours happened to benefit from it. Of course, it immediately becomes iffy if it's a mechanism for social mobility that you don't plan on replacing, since in that case you are basically advocating for pulling up the ladder behind you.

  • Shamelessly reproduced from the other place:

    A quick summary of his last three posts:

    "Here's a thought experiment I came up with to try to justify the murder of tens of thousands of children."

    "Lots of people got mad at me for my last post; have you considered that being mad at me makes me the victim and you a Nazi?"

    "I'm actually winning so much right now: it's very normal that people keep worriedly speculating that I've suffered some sort of mental breakdown."

  • I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence. […]

    Honestly, what he should actually be grateful for is that all his notoriety ever amounted to[1] was a couple of obscure forums going 'look at this dumb asshole' and moving on.

    He is an insecure and toxic serial overreactor with shit opinions and a huge unpopular-young-nerd chip on his shoulder, and who comes off as being one mildly concerted troll effort away from a psych ward at all times. And probably not even that, judging from Graham Linehan's life trajectory.

    [1] besides Siskind using him to broaden his influence on incels and gamer gaters.

  • This was an excellent read if you're aware of the emails but never bothered to read his citations or to dig into what the blather about object-level and meta-level problems was specifically about, which is presumably most people.

    So, a deeper examination of the emails paints 2014 Siskind as a pretty run-of-the-mill race realist who's really into "black genes are dumber, you guys" studies, and who thinks that higher education institutions not taking them seriously means those institutions are deeply broken and untrustworthy, especially on anything to do with pushing back against racism and sexism. Oh, and he is also very worried that immigration may destroy the West, or at least he gently urges you to get up to speed with articles coincidentally pushing that angle and draw your own conclusions based on pure reason.

    Also it seems that in private he takes seriously stuff he has already debunked in public, which makes it basically impossible to ever take anything he writes in good faith.

  • Plus he's gay so if he dies hell awaits, or so the evangelical worldview tends to go.

  • It's like a one-and-a-half-page article that also comes in audio and video form, don't be lazy.

  • Oh no, you must have missed the surprise incelism, let me fix that:

    And as the world learned a decade ago, I was able to date, get married, and have a family, only because I finally rejected what I took to be the socially obligatory attitude for male STEM nerds like me—namely, that my heterosexuality was inherently gross, creepy, and problematic, and that I had a moral obligation never to express romantic interest to women.

  • Modern "move money between pockets for profit" economics seems to give The Hitchhiker's Guide's bistromathics a run for its money.

  • I wonder what this means for US GDP

    Don't worry, unchecked inflation and increasing housing costs will keep the GDP propped up at least for a while longer.

  • Zitron taking every opportunity to shit on Scott's AI2027 is kind of cathartic, ngl

  • He has capital-L Lawfulness concerns. About the parent and the child being asymmetrically skilled in context engineering. Which apparently is the main reason kids shouldn't trust LLM output.

    Him showing his ass with the memory comment is just a bonus.

  • I feel dumber for having read that, and not in the intellectually humbled way.

  • This hits differently over the recent news that ChatGPT encouraged and aided a teen suicide.

  • Not who you asked, but both Python and JavaScript have code smell as a core language feature, and we are stuck with them by accident of history, not because anyone in particular thought it would be such a great idea for them to overshoot their original purpose to such a comical degree.

    Also, there's a long history of languages meant as an introduction to coding being spun off into ridiculously verbose enterprise equivalents that everyone then had to deal with (see Delphi and Visual Basic), so there's certainly a case for refusing to cede any more ground to dollar-store editions of useful stuff under the guise of education.

  • AI innovation in this space usually means automatically adding stuff to the model's context.

    It probably started meaning the (failed) build output got added in every iteration, but it's entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens.
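    A minimal sketch of that kind of loop, with a hypothetical `call_llm` standing in for a real model API (the function name and the toy "fix" are assumptions for illustration, not any particular tool's implementation):

```python
# Sketch of an agentic fix loop: each failed run's error output is
# appended to the model's context before the next attempt.

def run_tests(code: str):
    """Stand-in for a build/test step: return error text, or None on success."""
    env = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5
        return None
    except Exception as e:
        return repr(e)

def call_llm(context: str) -> str:
    # Hypothetical stand-in for a model API call; a real agent would send
    # `context` to an LLM and parse revised code out of the response.
    return "def add(a, b):\n    return a + b"

def fix_loop(code: str, max_iters: int = 3) -> str:
    context = code
    for _ in range(max_iters):
        error = run_tests(code)
        if error is None:
            return code
        # The "innovation": failure output goes straight into the context.
        context += "\n# error output:\n" + error
        code = call_llm(context)
    return code
```

    Feeding runtime crash data in works the same way: whatever the debugger spits out just gets concatenated onto the prompt, and you hope something usable comes back.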

  • When I was at computer toucher school at about the start of the century, the things taught under the moniker AI were (I think) fuzzy logic, incremental optimization and graph algorithms, and neural networks.

    AI is a sci-fi trope far more than it ever was a well-defined research topic.

  • SneerClub @awful.systems

    Where Scoot makes the case about how an AGI could build an army of terminators in a year if it wanted.

    TechTakes @awful.systems

    OpenAI scuttles for-profit transformation

    TechTakes @awful.systems

    "If a man really wants to make a million dollars, the best way would be to start his own social network." -- L. Ron Altman

    TechTakes @awful.systems

    UK creating ‘murder prediction’ tool to identify people most likely to kill

    NotAwfulTech @awful.systems

    Advent of Code 2024 - Historian goes looking for history in all the wrong places

    SneerClub @awful.systems

    New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade

    TechTakes @awful.systems

    It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)

    TechTakes @awful.systems

    Generating (often non-con) porn is the new crypto mining

    SneerClub @awful.systems

    SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing

    SneerClub @awful.systems

    Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next

    SneerClub @awful.systems

    Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.

    SneerClub @awful.systems

    Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction...

    SneerClub @awful.systems

    Existential Comics on rationalism and parmesan

    TechTakes @awful.systems

    Turns out Altman is a lab-leak covid truther, calls virus 'synthetic' according to Spectator piece on AI risk.

    SneerClub @awful.systems

    Rationalist literary criticism by SBF, found on the birdsite