Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 23 June 2024
Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
NYT opinion piece title: Effective Altruism Is Flawed. But What’s the Alternative? (archive.org)
lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh...
(also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? The alternative doesn't exist, but there are actually many alternatives that have fewer flaws.)
You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.
Lucky souls haven't found sneerclub yet.
But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us...
rational_economist.webp
There are actually some decent quotes critical of EA (though the author doesn't engage with them at all):
The problem is that “E.A. grew up in an environment that doesn’t have much feedback from reality,” Wenar told me.
Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as “neo-colonial” in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don’t reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.
The same pundits have been saying "deep learning is hitting a wall" for a DECADE.
Why do they have ANY credibility left?
Wrong, wrong, wrong. Year after year after year.
Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief.
Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty. AGI is our final invention.
You have to acknowledge the world as we know it will end, for better or worse.
Your 20 year plans up in smoke.
Learning a language for no reason.
Preparing for a career that won't exist.
Raising kids who might just... suddenly die.
Because we invited aliens with superior technology we couldn't control.
Remember, many hopium addicts are just hoping that we become PETS. They point to Iain M. Banks's Culture series as a good outcome... where, again, HUMANS ARE PETS.
THIS IS THEIR GOOD OUTCOME.
What's funny, too, is that noted skeptics like Gary Marcus still think there's a 35% chance of AGI in the next 12 years - that is still HIGH!
(Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.)
Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we're all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away.
It is insane that this isn't the only thing on the news right now.
So... we stay in our hopium dens, nitpicking The Latest Thing AI Still Can't Do, missing the forest for the trees, underreacting to the clear-as-day exponential.
Most insiders agree: the alien ships are now visible in the sky, and we don't know if they're going to cure cancer or exterminate us.
Be brave. Stare AGI in the face.
This post almost made me crash my self-driving car.
Not a big sneer, but I was checking my spam box for badly filtered spam and saw a guy basically emailing me 'hey, you made some contributions to open source, these are now worth money (in cryptocoins, so no real money), you should claim them, and if you are nice you could give me a finder's fee'. And eurgh, I'm so tired of these people. (Thankfully he provided enough personal info that I could block him on various social media.)
there's nothing there yet, but we're thinking just short posts about funny dumb AI bullshit. Web 3 Is Going Great, but it's AI.
i assure you that we will absolutely pillage techtakes, but will have to write it in non-jargonised form for ordinary civilian sneers
BIG QUESTION: what's a good WordPress theme? For a W3iGG-style site with short posts and maybe occasional longer ones. Fuckin' hate the current theme (Twenty Twenty-Three) because it requires the horrible Block Editor.
Tom Murphy VII's new LLM typesetting system, as submitted to SIGBOVIK. I watched this 20-minute video all the way through, and given that my usual time to kill for video is 30 seconds, you should take that as the recommendation it is.
This is quite minor, but it's very funny seeing the would-be sneerers still on /r/buttcoin fall for the AI grift, to the point that it's part of their modscript copypasta.
Or in the pinned mod comment:
AI does have some utility and does certain things better than any other technology, such as:
The ability to summarize, in human-readable form, large amounts of information.
The ability to generate unique images in a very short period of time, given a verbose description.
tfw you're anti-crypto, but only because it's a bad investment opportunity.
People are so, so, so bad at telling what's a bot and what's real. I know social media is swarming with bots, but if you're interacting with somebody who's saying anything more complicated than "P o o s i e I n B i o", it's probably not a bot. A similar thing happens in online games, too, and it's usually the excuse people use before harassing someone else.
This account kind of kicked up some drama too, basically for the same reason (answering an LLM prompt), but it's about mushroom ID instead: https://www.reddit.com/user/SeriousPerson9 I've seen people like this who use voice-to-text and run their train of thought through ChatGPT or something, like one person notorious on /r/gamedev. But people always assume it's some advanced autonomous bot with stochastic post delays that mimic a human's active hours when, like, it's usually just somebody copy/pasting prompts and responses.
Sorry if you contract any diseases from those links or comment chains
I'm not sure if they're a decent person or not, but I liked this sneer on the orange site, in the context of this recent rant (previously, on Awful), describing a hypothetical executive-humbling mechanism. I agree with them; I'd read that fanfiction too.
Not sure where I got the link to this video, could very well be from this place, but look, 'Gamers' made an NFT game without blockchain or cryptocurrencies! And it will potentially lead to other Gamers being scammed! Innovation!