Posts 44 · Comments 1,296 · Joined 2 yr. ago

  • Explains his gushing over Scott in the intro.

    I still think he makes a lot of good points, namely that promptfondlers are losing their shit because people aren't buying the swill they're selling.

    In a similar vein, check out this comment on LW.

    [on "starting an independent org to research/verify the claims of embryo selection companies"] I see how it "feels" worth doing, but I don't think that intuition survives analysis.

    Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation's capabilities are irrelevant, and if we get it right, they're still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.

    https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=25HfwcGxC3Gxy9sHi

    So believing in the inevitable coming of the robot god is dogma on LW now. This is a cult.

  • Lev Grossman's The Magicians takes a stab at this. In essence it's basically Harry Potter meets The Rules of Attraction, but Grossman does discuss what magicians do after graduation. Public service is big, as are NGOs.

  • There are a few different axes here. The tax money doesn't directly go toward alleviating the suffering of alcoholics' family members, nor does it directly reduce the effects of drunk driving. The income is nice to have, for sure, but the stated aim is a "sin tax" that makes the bad thing less affordable.

  • OK now there's another comment

    I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.

    Excepting cases like the legal restriction of alcohol sales in many many areas (Nordics, NSW in Aus, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.

    JFC, these people imagine they can regulate how "AGI" is constructed, but faced with a problem that's been staring humanity in the face since the first monk brewed the first beer, they just say "whelp, nothing can be done, except become a teetotaller yourself".

  • To be scrupulously fair, it is a repost of another substack[1]. Amusingly, both places have a comment with the gist of "well, alcohol gets people laid, so what's the problem?". This of course is a reflection that most LWers cannot get a girl into bed without slipping her a roofie.


    [1] Is that even OK? I know the LW software has a "mirroring" functionality b/c a lot of content originally lives on members' substacks; maybe you can point it at any SS entry and get it onto LW.

  • Nothing expresses the inherent atomism and libertarian nature of the rat community like this

    https://www.lesswrong.com/posts/HAzoPABejzKucwiow/alcohol-is-so-bad-for-society-that-you-should-probably-stop

    A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.

    No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless "action".

    Oh well, AGI will solve it (or the robot god will be a raging alcoholic)

  • Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

    My Empathy Is Rarely Kind

    I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

    "why do people keep saying we sound like fascists? I don't get it!"

  • The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.

  • Oh, but LW has the comeback for you in the very first paragraph

    Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.

    Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!

  • LessWronger discovers that the great unwashed masses, who inconveniently still affect policy indirectly through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

    https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

    Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a Gen Z hit), and "Slaughterbots", which I've never heard of.

    Listen to the plot summary:

    • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
    • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
    • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
    • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

    OK so what should be shown in the film?

    compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

    Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

    All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'Nuff said.

  • At this point in time, having a substack is in itself a red flag.

  • The targets are informed, via a grammatically invalid sentence.

    Sam Kriss (author of the ‘Laurentius Clung’ piece) has posted a critique. I don’t think it’s good, but I do think it’s representative of a view that I ever encounter in the wild but haven’t really seen written up.

    FWIW the search term 'Laurentius Clung' gets no hits on LW, so I'm left to assume everyone there is also Extremely Online on Xitter and instantly gets the reference.

    https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=MzkAjd8EWqosiePMf

  • Remember FizzBuzz? It was originally a simple filter exercise that a person recruiting programmers came up with to weed out candidates with multi-year CS degrees but zero actual programming experience.
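    For anyone who hasn't run into it, the whole exercise fits in a few lines. This is a generic sketch of the standard problem statement, not the original recruiter's wording:

```python
# Classic FizzBuzz: count 1..n, but say "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
def fizzbuzz(n: int) -> list[str]:
    out = []
    for i in range(1, n + 1):
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(word or str(i))  # empty string is falsy, so fall back to the number
    return out

print(fizzbuzz(15))
```

    The point of the filter was that anyone who has actually written code can knock this out in a minute or two, degree or no degree.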

  • The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better.
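    To make the endianness point concrete (my own toy illustration, not from the post): a little-endian numeral system would write the least significant digit first, i.e. reverse standard decimal notation.

```python
def little_endian(n: int) -> str:
    # Render a non-negative integer least-significant-digit first:
    # standard (big-endian) 1024 becomes "4201".
    if n < 0:
        raise ValueError("non-negative integers only")
    return str(n)[::-1]

print(little_endian(1024))
```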

  • So here's a poster on LessWrong, ostensibly the place to discuss how to prevent people from dying of things like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having failed to replicate its estimate of millions of resulting deaths, basically concluding it's not so bad?

    https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

    No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

    Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because of heredity; I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

  • TechTakes @awful.systems

    Blinded by the light: ignoring useless regulation, NFT conf organizers use sterilizing UV lighting instead of blacklights. All my apes in ER.

    TechTakes @awful.systems

    Once again, "AI" is revealed to be an army of mechanical turks in a call center. - JWZ

    TechTakes @awful.systems

    "The best way to profit from AI"

    TechTakes @awful.systems

    Steven Pinker: The World's Most Annoying Man

    TechTakes @awful.systems

    That didn't take long: cryptobro realizes that blockchains is what LLMs crave.

    TechTakes @awful.systems

    "Be your own bank" reaches logical end stage as hackers are mining LastPass breach for crypto accounts to loot

    SneerClub @awful.systems

    Raining Man - 70,000 trapped by mud at Burning Man 2023. Pray for the techbros

    SneerClub @awful.systems

    It's a tragedy that I cannot explore my fascination with white supremacy using Chat-GPT4

    TechTakes @awful.systems

    Researchers find the nitrogen in the air makes you dumber, HN brainstorms pressure suits so they can eliminate it

    TechTakes @awful.systems

    The rise of the AI middleman - software to ensure your LLM-generated crap isn't an abomination before god

    TechTakes @awful.systems

    HN refuses to believe reputable research institute disproves superconductivity of LK99, mostly based on tone and the fact that prediction markets haven't crashed to zero