Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
It's so great that this isn't falsifiable, in the sense that doomers can keep saying "well, once the model is epsilon smarter, then you'll be sorry!" But back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria have not killed us all yet. So yes, we have found out the Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: "we are going to find out if Yud was right"
Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.
In the process of looking for ways to link up with homeschool parents that aren't doing it for culty reasons, I accidentally discovered the existence of a small but active subreddit for "progressive monarchists". It's titled r/progressivemonarchists, because their imagination in naming conventions only slightly outstrips their imagination for forms of government. Given how our usual sneer fodder overlaps with nrx, I figured there are others here who I can inflict this headache on.
My investigation tracked to you [Outlier.ai] as the source of problems - where your instructional videos are tricking people into creating those issues to - apparently train your AI.
I couldn’t locate these particular instructional videos, but from what I can gather outlier.ai farms out various “tasks” to internet gig workers as part of some sort of AI training scheme.
Bonus terribleness: one of the tasks a few months back was apparently to wear a head-mounted camera "device" to record one's every waking moment
P.S. sorry for the linkedin link behind the mastodon link, but shared suffering and all that. I had to read "Uber for AI code data" so now you do too.
I am a journalist who specializes in features and profiles. I write about the American right, ideologues, intellectuals, extremist movements, the culture wars, true crime, and strange events and strange places.
by "about", he means "for"
I'm a journalist at the Guardian working on a piece about the Zizians. If you have encountered members of the group or had interactions with them, or know people who have, please contact me: oliver.conroy@theguardian.com.
I'm also interested in chatting with people who can talk about the Zizians' beliefs and where they fit (or did not fit) in the rationalist/EA/risk community.
I prefer to talk to people on the record but if you prefer to be anonymous/speak on background/etc. that can possibly be arranged.
So as part of the ongoing administrative coup, federal employees have been receiving stupid emails from what everyone assumes is Elon Musk (since it's the exact same playbook as the twitter firings). But they apparently royally flubbed up NOAA's email security in the process, so the employees are getting constant spam through an unsecured broadcast address.
Hey, did you know if you own an old forum full of interesting posts from back in the day when humans wrote stuff, you can just attach ai bots to dead accounts and have them post backdated slop for, uh, reasons?
I’m not going to link Andy Ngo but random rationalist transwomen are being accused of terror sympathy…and Aella is doing this ‘leopards ate my face’ dance.
edit: it was @jessi_cata who tipped Ngo off of all people.
Me: Oh boy, I can't wait to see what my favorite thinkers of the EA movement will come up with this week :)
Text from Geoff: "Morally stigmatize AI developers so they are considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation."
Another rationalist W: don't gather empirical evidence that AI will soon usurp / exterminate humanity. Instead, as the chief authorities of morality, engage in societal blackmail against anyone who's ever heard the words TensorFlow.
In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone at a major news publication, so he can pitch an op-ed by a notable ex-OpenAI researcher that will be ghost-written by him (meaning siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump's term. So be on the lookout for that when it materializes, I guess.
Project Gutenberg has AI generated summaries?? How the mighty have fallen.
I was researching a bizarre old sci-fi book I once read (don't judge; bad old sci-fi is a trip), and Gutenberg's summary claims it was written in the 21st century. There's actually no accurate information about this book online, as far as I can tell the earliest reference is Project Gutenberg typing it up into a text file in 2003.
Given that it's in the public domain, no one has any idea where it came from, and it has old sci-fi vibes, I strongly suspect it was written in the 20th century*, making that misinformation. It's also just a bad summary that, while not wrong, doesn't really reflect the (amusingly weird) themes of the book.
Anyway someone needs to tell them that no information is leagues better than misinformation.
* maybe the '70s give or take but I'm not a professional date guesser
(Two comment threads about the CDC purging "woke" research, the comments are bad even by HN standards)
Gee given a forum full of hackers you'd expect them to be against arbitrary removal of scientific studies. What happened to "information wants to be free"?
Also I know, I know, more US politics. It turns out silicon valley fascists have gained power, so expect this to keep happening for the foreseeable future 🙃.
These past two weeks have made me very uncomfortable working in Silicon Valley. I know last time I said I was planning to get out, but now it feels urgent, both for my own well-being and to stop contributing to this industry. In trans communities we immediately saw the coupy stuff** for the attempted transgender genocide that it is; the wider public and media are waking up to this very slowly.
* An account-only platform that sometimes bans US citizens for being cool.
** If there's interest I could try turning all of this into a top level post on morewrite or techtakes. I've been trying to avoid inundating people with US politics, but it's extremely bad. Like constitutional crisis, rise of techno-fascism, dismantling of the administrative state, transgender extermination, put career roadblocks in front of minorities bad.
Eliezer Yudkowsky says he would like to be a post-human some day, but that the way to get there is by experimenting with augmenting biological intelligence through adult gene therapy targeting the human brain, with suicide volunteers who may end up schizophrenic, rather than taking a "leap of death" into unconstrained AI development
(found via flipping through LW for sneerable posts/comments)
Part of me suspects DeepSeek is gonna quickly carve out a good chunk of the market for itself - for SaaS services looking for spicy autocomplete or a slop generator to bolt on to their products, DeepSeek's high efficiency gives them a way to do it that doesn't immediately blow a massive hole in their finances.
No EA stuff! $1M each going to eight great charities and non-profits as far as I can tell: Children’s Hunger Fund, First Generation Investors, Global Refuge, NAACP Legal Defense and Educational Fund, PEN America, The Trevor Project, Planned Parenthood, and Team Rubicon. (from The Trevor Project's blog post)
Found while giving my feed a moment of scroll while making coffee after too many 3am worknights: this response to the substack guy giving himself a pat on the back again for helping the nazis
Spent the last week playing with some security shit (thinking about a career change, since it looks like I will be mastering out of my PhD program) and fuck me everything about hardening your personal devices is exhausting. We are nowhere close to accessible privacy and security in our computers. The best solution right now may be "buy a Macbook and learn MacOS", which is so depressing.
Still deciding on a web browser. Used to be I could recommend Firefox because Righteous-Opposition-to-Google, but that doesn't really track anymore with Mozilla's behavior. Now I guess I would recommend Chrome, but it feels so gross (and I am unsure about things like Ungoogled-Chromium, for security reasons).
I personally couldn't figure out how to set the GRUB password. I will probably get around to it eventually.
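For anyone else stuck on that same step, here's a rough sketch of the usual GRUB password dance on Debian-style distros (paths, script names, and the `admin` username here are assumptions — they vary by distro, so check your distro's docs before copying):

```shell
# 1. Generate a PBKDF2 hash of your chosen password (interactive prompt);
#    it prints a long "grub.pbkdf2.sha512.10000.<hash>" string.
grub-mkpasswd-pbkdf2

# 2. Add the superuser and hash to a GRUB config snippet, e.g. /etc/grub.d/40_custom:
#      set superusers="admin"
#      password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash>

# 3. Regenerate the config (update-grub on Debian/Ubuntu;
#    grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora-likes).
sudo update-grub
```

One pitfall worth knowing: once `superusers` is set, GRUB demands the password for *booting* too, not just for editing entries, unless the menu entries are marked `--unrestricted`.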
As far as passwords go, the only one I have to memorize is the one to my Bitwarden vault. Everything else is stored in Bitwarden. Passwords I might ever need to type in manually (e.g. the LUKS password) are 16 characters, whereas passwords that will always be copy-pasted are 128 characters (my phone PIN excepted). I am looking into integrating a yubikey, but am leaning towards "fuck that shit, why would anyone actually want to use this?" If anyone here has comments on this (am I missing an obvious pitfall? do yubikeys suck as much as it looks like they suck?) I would be happy to hear them.
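For the curious, passwords in those two size classes are easy to mint from the kernel CSPRNG without a manager handy (a sketch only — the character sets are my assumption, and Bitwarden's own generator does the same job):

```shell
# 16-char typeable password (alphanumeric, easy to key in at a LUKS prompt)
typeable=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)

# 128-char copy-paste-only password with some symbols mixed in
vault=$(LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c 128)

echo "$typeable"
echo "$vault"
```

`/dev/urandom` piped through `tr -dc` is the portable low-tech option on Linux and macOS; `LC_ALL=C` keeps `tr` from choking on bytes it interprets as invalid multibyte characters.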
Anyway tl;dr is I spent the last week hardening all my devices and it sucks. In some cases it was a complete waste of time (my Steam Deck does not appear to have a way to set a password in the BIOS). In other cases (e.g. my Framework), it was probably worth it but a deeply terrible experience.