Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, happy April Fool's in advance.)
I've been doing some microtasking to train LLMs for the last few months to earn some extra cash, but this past month it's all dried up: no tasks available. I can't help thinking that's a sign of a bubble deflating.
I knew the HN reaction to Marine Le Pen being banned from being elected to public office would be unhinged, but weirdly I did not have "good, she's a woman and weak so now a strong man can lead RN to glorious victory" on my bingo card. More fool me, misogyny is always on the card.
Today on the orange site, an AI bro is trying to reason through why people think he's weird for not disclosing his politics to people he's trying to be friendly with. Previously, he published a short guide on how to talk about politics, which — again, very weird, no possible explanation for this — nobody has adopted. Don't worry, he's well-read:
So far I've only read Harry Potter and The Methods of Rationality, but can say it is an excellent place to start.
The thread is mostly centered around one or two pearl-clutching conservatives who don't want their beliefs examined:
I find it astonishing that anyone would ask, ["who did you vote for?"] … In my social circle, anyway, the taboo on this question is very strong.
To which the top reply is my choice sneer:
In my friend group it's clear as day: either you voted to kill and deport other people in the friend group or you didn't. Pretty obvious the group would like to know if you're secretly interested in their demise.
This is not a dataset. This is an insult to the very idea of data. This is the most anti-scientific post I have ever seen voted to the top of HN. Truth about the world is not derived from three LLMs stacked on top of each other in a trenchcoat.
My University Keeps Sending Me Stupid Emails About AI, a continuing series:
From the email:
The debate will be chaired by Michael Pike. Speaking for the motion are Prof. Gregory O'Hare (TCD), Maeve Hynes, Prof. Gary Boyd and Chatgpt assisted by Student Curator Ruan McCabe, all UCD. Speaking against the motion are David Capener (UU), Lucy O'Connell, Peter Cody and Meabh O'Leary, all UCD.
Check out the unhinged classism from one of the lesser figures who's popped up here from time to time (with an added bonus shout-out to Ayn Rand further down the thread).
If you, like me, were wondering what the point of that 25-hour non-filibuster filibuster by Booker was, here's one potential answer.
Booker held a filibuster that wasn't a filibuster - and he scheduled it to let him avoid attending his own committee's probe of his Big Tech pals. […] Oh and Booker and Dems provided unanimous consent to advance a Trump nominee right after Booker's speech.
An AI faceswapper/nudifier's database got leaked thanks to its nonexistent security - unsurprisingly, it's loaded with explicit images, including massive amounts of child porn and almost certainly some revenge porn.
"We love disruption and whatever is good for America will be good for Americans and very good for Palantir,” Palantir CEO Alex Karp said in a February earnings call. “Disruption at the end of the day exposes things that aren't working. There will be ups and downs. This is a revolution, some people are going to get their heads cut off."
This is a revolution, some people are going to get their heads cut off
A recent chapter of one of the official Touhou Project manga had a jab at crypto schemes. This isn't one of the works of art I expected to so explicitly dunk on these unscrupulous scams, but I welcome it all the more for that.
NaNoWriMo? Na, No Mo'. Does this have anything to do with their bungled AI policy? Maybe, maybe not, but hey, the news article that I saw this announcement in thought that it was pertinent to mention.
Came across this fuckin disaster on Ye Olde LinkedIn, posted by 'Caroline Jeanmaire at AI Governance at The Future Society':
"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027.
Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: "It would be a grave mistake to dismiss this as mere hype."
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."
Bruh, what is the fuckin y-axis on this bad boi?? Christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...
SMBC using the ratsphere as comics fodder, part the manyeth:
transcription
Retrofuturistic Looking Ghost: SCROOOOOGE! I am the ghost of christmas extreme future! Why! Why did you not find a way to indicate to humans 400 generations from now where toxic waste was storrrrrrrred! Look how Tiny Tim's cyborg descendant has to make costly RNA repaaaaaaairs!
Byline: The Longtermist version of A Christmas Carol is way better.
bonus
transcription
Scrooge: I tried, but no, no, I just don't give a shit.
Turns out rebranding even genuinely useful machine learning as "AI" doesn't help researchers get funding. The only beneficiaries of the bubble seem to be volatile media-synthesis engines.
You want my opinion, future machine learning research is probably gonna struggle to get funding once the bubble bursts, both due to the "AI" stench rubbing off on the field, and due to gen-AI sucking up all the funding that would've gone towards actually useful shit. (Arguably, it's already struggling even before the bubble's burst.)
Just had a video labeled "auto-dubbed" pop up in my YouTube feed for the first time. Not sure if it was chosen by the author or not. Too bad, it looks like a fascinating problem to see explained, but I don't think I'm going to trust an AI feature that I just saw for the first time to explain it. (And perhaps more crucially, I'm a bit afraid of what anime fans will have to say about this.)
AI-Powered Wi-Fi 7 Versatile Outdoor/Indoor Mesh AP
we're at the "the washing machine without a dateclock is marked millennium-bug safe in its marketing brochures" level of stupid
(that might seem like a stupid comparison, but it's one of the things I most viscerally remember seeing from that time (when I was still a youngin, still years away from computertouching))
I would also like to complain that I have finally started getting AI summaries in Google, and I may have to switch to a different search engine. Neither wanted nor needed!
Australian chemist and videographer Explosions & Fire argues convincingly that the ongoing radioactive-boy-scout scandal should not result in prosecution. For context: a 24-year-old man ordered small samples of radioactive isotopes from the USA, Australia failed to intercept them at the border, and the authorities are now prosecuting him to avoid embarrassment over their own incompetence. I don't have a choice sneer; E&F is unwaveringly energized over the topic of radioactive isotopes and injustice, and the whole thing is worth watching.
There's probably a fair bit to sneer at in the actual study that I'm missing, and the article I first found it in is peak AI hype (Big Forensics is trying to keep you from knowing the Truth as found by an undergrad with a GPU). But the part I found most concerning is that even the full paper doesn't appear to break down its 77% accuracy index into the specific result ratios that go into it. In a field where each false positive represents a step on the road to an innocent person being convicted of a major crime, I would really like to know that number specifically.
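To make concrete why a lone accuracy figure hides exactly the number that matters here, a minimal sketch (Python, with entirely hypothetical counts; the paper publishes no confusion matrix, so every number below is invented for illustration): two classifiers can both report 77% accuracy while one flags innocent samples seven times as often as the other.

```python
# Hypothetical confusion-matrix counts for two classifiers that both
# report "77% accuracy" on 1000 test samples. None of these numbers
# come from the paper; they only show how accuracy hides the split.

def report(name, tp, fp, tn, fn):
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    false_positive_rate = fp / (fp + tn)  # share of negatives wrongly flagged
    print(f"{name}: accuracy={accuracy:.0%}, "
          f"false positive rate={false_positive_rate:.0%}")

# Classifier A: errors are mostly misses (false negatives).
report("A", tp=300, fp=30, tn=470, fn=200)   # accuracy=77%, FPR=6%

# Classifier B: same accuracy, but errors are mostly false alarms --
# the kind that put innocent people on the road to conviction.
report("B", tp=480, fp=210, tn=290, fn=20)   # accuracy=77%, FPR=42%
```

Without the underlying true/false positive/negative counts, "77% accurate" is compatible with either of these, which is precisely the problem.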