Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Some dark urge found me skim-reading a recent AI doomer blog post. I was startled awake by this most unsettling passage:
My wife wrote a letter to our infant daughter recently. It concluded:
I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. [...]
Though the theoretical possibility had always been percolating somewhere in the back of my mind, it wasn't until now that I viscerally realized that P(doomers reproducing) was greater than zero. And with other doomers no less.
Left brooding on this development, I drudged along until-
BAhahaha what the fuck
I can't. This is beyond parody.
Completely lost it here. Nothing could have prepared me for the poorly handwritten wrist tattoo.
Creating space for miracles
Doom feels really likely to me. [...] But who knows, perhaps one of my assumptions is wrong. Perhaps there's some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.
Oh how rational! Willing to entertain the idea that maybe, theoretically, the doomsday prediction could be off by a few days?
I'm not sure that I ever strongly felt that I would die at eighty or so. I had a religious youth and believed in an immortal soul. Even when I came out of that, I quickly believed in the potential of radical transhuman life extension.
This guy thought he was getting clean but he was actually replacing weed with heroin
I really convinced myself that "doomsday cult" was hyperbole but uhh, nope, it's 107% real.
Gumroad’s asshole CEO, Sahil Lavingia, NFT fanboy who occasionally used his customer database to track down and get into fights with people on Twitter, has now gone professional fash and joined DOGE in order to hollow out the Department of Veterans Affairs and replace the staff with chatbots.
In short: There has been a conspiracy to insert citations to a book by a certain P. Gagniuc into Wikipedia. This resulted in said book gaining about 900 citations on Google Scholar from people who threw in a footnote for the definition of a Markov chain. The book, Markov Chains: From Theory to Implementation and Experimentation (2017), is actually really bad. Some of the comments advocating for its inclusion read like chatbot output (bland, generic, lots of bullet points). Another said that it should be included because it's "the most reliable book on the subject, and the one that is part of ChatGPT training set".
This has been argued out over at least five different discussion pages.
An airplane does not flap its wings. And an autopilot is not the same as a pilot. Still, everybody is ok with saying that a plane "flies" and an autopilot "pilots" a plane.
This is the difference between the same system and a system that performs the same function.
When it comes to flight, we focus on function, not mechanism. A plane achieves the same outcome as birds (staying airborne) through entirely different means, yet we comfortably use the word "fly" for both.
With Generative AI, something strange happens. We insist that only biological brains can "think" or "understand" language. In contrast to planes, we focus on the system, not the function. When AI strings together words (which it does, among other things), we try to create new terms to avoid admitting similarity of function.
When we use a verb to describe an AI function that resembles human cognition, we are immediately accused of "anthropomorphizing." In some way, popular opinion dictates that no system other than the human brain can think.
In the late 2000s, rationalists were squarely in the middle of transhumanism. They were into the Singularity, but also cryonics and a whole pile of stuff they got from the Extropians. It was very much the thing.
These days they're most interested in Effective Altruism (loudly, the label at least) and race science (used to be quiet, now a bit louder). I hardly ever hear them even mention transhumanism as it was back then.
Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance. Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding.
The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.
If you’re using the products powered by these attacks, you’re part of the problem. Don’t pretend it’s cute to ask ChatGPT for something. Don’t pretend it’s somehow being technologically open-minded to continuously search for nails to hit with the latest “AI” hammers.
Wow, that latest chat with Adam Patrick Murray about the Nintendo Switch 2 was quite the ride! The bit on the console's dock secrets and the MicroSD Express storage had me glued. It's amazing to see how these tech advancements are sculpting new landscapes.
Speaking of tech wizardry, have you thought about having Christian Perry on the show? As the CEO of Undetectable AI, he's taken the whole generative AI world by storm, much like the Switch 2 is taking over gaming news! With over 15 million users and standing as a top AI writing tool, Christian's insights into AI's hidden workings promise to intrigue your audience, especially when it comes to how his tools seamlessly pass for human writing without tripping any detectors like GPTzero
Some more low effort image posting. This zine was in Connolly Books for free. I'm not sure who the author is, but I thought the text was spot on and the illustrations were great. Sorry for no captions/transcriptions
Apparently including a camera-esque filename in prompts for the latest Midjourney release can make it more photorealistic. Unfortunately it also looks like the distinctive AI art style was pretty key to preventing the usual set of AI-generated image "tells": mirrors, hands, teeth, etc. are all very visibly wrong.
It would appear CNN was also at the eugenics conference? Why are all these mainstream news orgs at a 200-person event where all the speakers are eugenicists and racists?
And in response to an Atlantic subhead saying "Perpetuating humanity should be a cross-politics consensus, but the left was mostly absent at a recent pro-natalism conference":
yeah, weird that the left wasn’t present at the Fourteen Words conference
(My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)
It starts with some rationalist jargon to say the author agrees with AI 2027, just shifted one year later...
AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control at the end of 2027, but the median narrative is probably around 2030 or 2031.
Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't have any mention of Trump blowing up the timeline doing insane shit is because Scott (and maybe some of the other authors, idk) likes glazing Trump.
I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...
No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs are getting better at raw code writing! Maybe this author is better in touch with reality than most lesswrongers...
...but not by much.
Nope, they still have insane expectations.
Most of my disagreements are quibbles
Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions are true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.
I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason why Scott didn't include any call out of that in AI 2027.
:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.
Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
On slightly more relevant news the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher that will be ghost-written by him (meaning siskind) on the subject of how they (the ex researcher) opened a forecast market that predicts ASI by the end of Trump’s term, so be on the lookout for that when it materializes I guess.
edit: also @gerikson is apparently a superforecaster
Fun fact: the rise of autoplag is now threatening the supply chain as well, as bad actors take advantage of LLM hallucinations to plant malware into people's programs.
tesla: "your car is not your car and we have deep, varied firmware and systems access to it on a permanent basis. we can see you and control you at all times. piss us off and we'll turn off the car that we own."
so like a fool I decided to search the web. specifically for which network protocol Lisp REPLs use these days (is it nREPL? or is that just a clojure thing with ambitions?)
Numerous applications and tools are being developed to support mental health and wellness. Among the varied programming languages at the forefront, Lisp stands out due to its unique capabilities in cognitive modeling and behavior analysis.
so I know exactly what this is, but why is this? what even is the game here?
So in the past week or so a lot of pedestrian crossings in Silicon Valley were "hacked" (probably never changed the default password lol) to make them talk like tech figures.
Here are a few. Note that these voices are most likely AI generated.
I didn't get to hear any of them in person, however the crosswalk near my place has recently stopped saying "change password" constantly, which I'm happy about.