Stubsack: weekly thread for sneers not worth an entire post, week ending 5th October 2025 - awful.systems
rook (@rook@awful.systems)

There were some issues with an internet-connected chastity device not that long ago, but they were conventional “we can’t be bothered to secure our web service, even when everyone is telling us it is terrible” issues, rather than “knob gets crushed and catches fire”. Still, there’s plenty of scope for someone to make one of those in future… OTA BMS firmware updates are very much a thing, and the market for alarming sex toys is practically unlimited.
(now I think about it, I suspect there’s a market for very poorly secured internet-connected sex toys. someone got off to that headline, I’m pretty certain)
In today’s torment nexus development news… you know how various cyberpunky type games let you hack into an enemy’s augmentations and blow them up? Perhaps you thought this was stupid and unrealistic, and you’d be right.
Maybe that’s the wrong example. How about a cursed evil ring that, once you put it on, you can’t take off and that wracks you with pain? Who hasn’t wanted one of those?
Happily, hard-working torment nexus engineers have brought that dream one step closer by making “smart rings” powered by lithium polymer batteries. Y’know, the things that can go bad, and swell up and catch fire? And that you shouldn’t puncture, because that’s a fire risk too, meaning cutting the ring off is somewhat dangerous? Fun times abound!
https://bsky.app/profile/emily.gorcen.ski/post/3m25263bs3c2g
In a move that is not in any way ominous, and everyone involved has carefully thought through all the consequences, there’s a sora-generated video of sam altman shoplifting gpus that’s apparently quite popular right now.
https://bsky.app/profile/drewharwell.com/post/3m23ob342h22a
(no embed because safari on ipad is weird about downloading or linking video)
I suspect it is also hiding some rendering artefacts.
Oh hey, bay area techfash enthusing about AI and genocidal authoritarians? Must be a day ending in a Y. Today it is Vercel CEO and next.js dev Guillermo Rauch
https://nitter.net/rauchg/status/1972669025525158031
I also have strong opinions about not using next.js or vercel (and server-side javascript in general is a bit of a car crash) but even if you thought it was great you should probably have a look around for alternatives. Just not ruby on rails, perhaps.
I will say that the flipping between characters in order to disguise the fact that longer clips are impractical to render is a neat trick and fits well into the advert-like design, but rewatching it just really reinforces how much those kids look like something pretending real hard to be a human.
Also, fake old-celluloid-film filter for something that was supposed to be from 20 years ago? Really?
AI video generation use case: hallucinatory RETVRN clips about the good old days, such as, uh, walmart 20 years ago?
It hits the uncanny valley triggers quite hard. It’s faintly unsettling to watch at all, but every individual detail is just wrong and dreamlike in a bad way.
Also, weird scenery clipping, just like real kids did back in the day!
https://bsky.app/profile/mugrimm.bsky.social/post/3lzy77zydrc2q
Today’s related news: the tailwind css guy is a big fan of dhh and the rubygems takeover.
https://bsky.app/profile/jaredwhite.indieweb.social.ap.brid.gy/post/3lzofv4wi4yz2
I miss the days when being publicly fashy was considered poor pr, but on the other hand it does make it a lot easier to avoid their companies and products.
Tailwind is pointless, incidentally.
Does ruby just die now?
Part of the background to this issue is the development of rv, which apparently offers a future where rubygems is much less important, and some folk seem to be taking that as a threat.
Whether or not the new tooling delivers, the rubygems debacle has probably helped the new project considerably.
Haven’t read the source paper yet (apparently it came out two weeks ago, maybe it already got sneered?) but this might be fun: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.
Full of little gems like
Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.
I had assumed that the problem was solely technical, that the fundamental design of LLMs meant that they’d always generate bullshit, but it hadn’t occurred to me that the developers actively selected for bullshit generation.
It seems kinda obvious in retrospect… slick bullshit extrusion is very much what is selling “AI” to upper management.
Woke up to some hashtag spam this morning
AI’s Biggest Security Threat May Be Quantum Decryption
which appears to be one of those evolutionary “transitional forms” between grifts.
The sad thing is the underlying point is almost sound (hoarding data puts you at risk of data breaches, and leaking sensitive data might be Very Bad Indeed) but it is wrapped up in so much overhyped nonsense it is barely visible. Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.
(it also appears to be a month-old story, but I guess there’s no reason for mastodon hashtag spammers to be current 🫤)
One to watch from a safe distance: dafdef, an “ai browser” aimed at founders and “UCG creators”, named using the traditional amazon-keysmash naming technique and following the ai-companies-must-have-a-logo-suggestive-of-an-anus style guide.
Dafdef learns your browsing patterns and suggests what you'd do next. After watching you fill out similar forms a few times, Dafdef starts autocompleting them. Apply with your startup to YC, HF0 and A16z without wasting your time.
So… spicy autocomplete.
But that’s not all! Tired of your chatbot being unable to control everything on your iphone, due to irksome security features implemented by those control freaks at apple? There’s a way around that!
Introducing the “ai key”!
A tiny USB-C key that turns your phone into a trusted AI assistant. It sees your screen, acts on your behalf, and remembers — all while staying under your control.
I’m sure you can absolutely trust an ai browser connected to a tool that has nearly full control over your phone to not do anything bad, because prompt injection isn’t a thing, right?
(I say nearly full, because I think Apple Pay requires physical interaction with a phone button or face id, but if dafdef can automate the boring and repetitive parts of using your banking app then having full control of the phone might not matter)
h/t to ian coldwater
Good point. I should probably start including some real world stuff in future versions of this argument… the Wikipedia page on the Pegasus spyware has a depressingly long list of publicly-known deployments.
https://en.wikipedia.org/wiki/Pegasus_(spyware)#By_country
Cellebrite is another big one, because whilst its tools generally require physical access, they’re regularly used by law enforcement and border staff and it is tricky to say “no” when the latter demands access to your phone. They specifically seek to crack grapheneos (see this old capabilities list) and signal, the latter leading to this wonderful bit of trolling by moxie.
Avoiding phone exploits is considerably more hassle than changing cipher suites (grapheneos and iOS in lockdown mode require a bunch of compromises, for example).
the possibility of such power falling into government hands is one that all-but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy if it comes to pass
Dealing with an implementation of Grover’s algorithm just means that you need to double the key length of your symmetric ciphers (because it only provides a square-root speedup over brute force search, which halves the effective key strength in bits). Given that the current recommended key length for e.g. AES is 128 bits and we have off-the-shelf implementations that can already handle 256-bit keys, this isn’t really a serious problem.
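To put numbers on the “just double the key length” bit, here’s a minimal back-of-the-envelope sketch (my own illustration, not anything from the post above):

```python
# Grover searches a k-bit keyspace in roughly sqrt(2^k) = 2^(k/2) evaluations,
# so the effective strength of a symmetric key is halved in bits.
def effective_bits_against_grover(key_bits: int) -> int:
    """Classical brute force costs ~2^key_bits; Grover costs ~2^(key_bits/2)."""
    return key_bits // 2

for k in (128, 256):
    print(f"AES-{k}: ~2^{effective_bits_against_grover(k)} quantum work factor")

# AES-128: ~2^64  -> uncomfortably low
# AES-256: ~2^128 -> still comfortable, hence "just double the key length"
```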
A working implementation of Shor’s algorithm would be significantly more problematic, but we’ve already had plenty of work done on post-quantum cryptography, e.g. the NIST PQC competition, which has given us some standards, and there are even ML-KEM implementations in the wild.
Even for the paranoid sort who might think that NIST approving a load of new cryptographic algorithms is not because quantum computers are a risk, but because the NSA has already backdoored them, there are things like X-Wing and PQXDH (used in signal) that combine conventional cryptography like x25519 with ML-KEM. Even if ML-KEM turns out to be backdoored or vulnerable to a new attack, the tried-and-tested elliptic curve algorithm will still have done its job and your communications should remain secure; and if ML-KEM remains effective, your communications will remain secure even if a working quantum computer can implement shor’s algorithm for large enough numbers.
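For illustration only, a minimal sketch of the hybrid idea (not the actual X-Wing or PQXDH combiner, which are more careful about domain separation and exactly what goes into the hash; the inputs here are placeholder values): derive the final shared secret from both the classical and post-quantum secrets, so an attacker has to break both.

```python
import hashlib

def hybrid_shared_secret(ss_x25519: bytes, ss_mlkem: bytes, transcript: bytes) -> bytes:
    # Feed both secrets (plus some handshake context) into a single KDF, so the
    # result is only recoverable by an attacker who breaks *both* x25519 and
    # ML-KEM. Real combiners differ in the details.
    return hashlib.sha3_256(ss_x25519 + ss_mlkem + transcript).digest()

# In practice ss_x25519 comes from an X25519 key agreement and ss_mlkem from an
# ML-KEM encapsulation; these are stand-in values for the demo.
print(hybrid_shared_secret(b"\x01" * 32, b"\x02" * 32, b"handshake-context").hex())
```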
Honestly though, if a state-level actor wants access to your encrypted secrets, they’ve got plenty of mechanisms to let them do that and don’t need a quantum computer to do it. The classic example might be xkcd (2009) or Mickens (2014):
If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.
Quantum decryption is a little bit like the y2k problem, in that we have all the tools needed to deal with the issue well in advance of it actually happening. Unlike y2k it may never happen, but either way it’s nice not to have to worry about it.
New Ludicity post: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/
The author is entertaining, and if you’ve not read them before their past stuff is worth a look.
It isn’t clear to me at this point that such research will ever be funded in english-speaking places without a significant set of regime changes… no politician or administrator can resist outsourcing their own thinking to llm vendors in exchange for funding. I expect the US educational system will eventually provide a terrible warning to everyone (except the UK, whose government looks at the US and says “oh my god, that’s horrifying. How can we be more like that?”).
I’m probably just feeling unreasonably pessimistic right now, though.
Some people casting their eyes over this monster of a paper have less than positive thoughts about it. I’m not going to try and summarise the summaries here, but the threads aren’t long (and are vastly shorter than the paper) so reading them wouldn’t take long.
Dr. Cat Hicks on mastodon: https://mastodon.social/@grimalkina/114690973548997443
Ashley Juavinett on bluesky: https://bsky.app/profile/analog-ashley.bsky.social/post/3lru5sua3fk25
It is related, inasmuch as it’s all generated from the same prompt and the “answer” will be statistically likely to follow from the “reasoning” text. But it is only likely to follow, which is why you can sometimes see a lot of unrelated or incorrect guff in “reasoning” steps that’s misinterpreted as deliberate lying by ai doomers.
I will confess that I don’t know what shapes the multiple “let me just check” or correction steps you sometimes see. It might just be a response stream that is shaped like self-checking. It is also possible that the response stream is fed through a separate llm session which then pushes its own responses into the context window before the response is finished and sent back to the questioner, but that would boil down to “neural networks pattern matching on each other’s outputs and generating plausible response token streams” rather than any sort of meaningful introspection.
I would expect the actual systems used by the likes of openai to be far more full of hacks and bodges and work-arounds and let’s-pretend prompts than either you or I could imagine.
It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have come about your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.
It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” because of course it didn’t lie, it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.
I know it’s terrible being a drama gossip, but there are some Fun Times on bluesky at the moment. I’m sure most of you know the origins of the project, and the political leanings of the founders, but they’re currently getting publicly riled up about trans folk and palestinians and tying themselves up in knots defending their decision to change the rules to keep jesse singal on site, and penniless victims of the idf off it.
They really cannot cope with the fact that their user base aren’t politically aligned with them, and are desperate to appease the fash (witness the crackdowns on people’s reaction to charlie kirk’s overdue departure from this vale of tears) and have currently reached the Posting Through It stage. I’m assuming at some point their self-image as Reasonable Centrists will crack and one or more of them will start throwing around transphobic slurs and/or sieg-heiling and bewailing how the awful leftists made them do it. Anyone want to hazard a guess at a timeline?