Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance...I guess.)
Get your popcorn, folks. Who would win: one unethical developer juggling "employment trial periods", or the combined interview process of all Y Combinator startups?
Apparently one Indian dude managed to crack the YC startup interview game and has been juggling full-time employment at multiple of them simultaneously for at least a year, getting fired from each as they slowly realize he isn't producing any code.
The cope from the hiring interviewers is so thick you could eat it as dessert. "He was a top 1% in the interview." "He was a 10x." We didn't do anything wrong; he was just too good at interviewing, and unethical. We got hit by a mastermind; we couldn't possibly have found what the public is now finding so quickly.
I don't have the time to dig into the threads on X, but even this ask HN thread about it is gold. I've got my entertainment for the evening.
Apparently he was open about being employed at multiple places on his LinkedIn. Someone in that HN thread says his resume openly lists him hopping between 12 companies in as many months, and his GitHub activity consists entirely of obviously automated commits.
Someone needs to run with this one. Please. Great look for the Y Combinator ghouls.
Have any of the big companies released a real definition of what they mean by AGI? Because I think the meme potential of these leaked documents is being slept on.
The definition of AGI agreed on between Microsoft and OpenAI in 2023 is just: AGI is achieved when OpenAI's products generate $100B in profits.
Seems like a fun way to shut down all the low-quality philosophical wankery. Oh, AGI? You just mean $100B in profit, right? That's what your lord and savior Altman means.
Maybe even something like a cloud-to-butt-style browser extension (see the sketch below)? AGI -> $100B in OpenAI profits
"What $100B in OpenAI Profits Means for the Future of Humanity"
I'm sure someone can come up with something better, but I think there's some potential here.
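A minimal sketch of what the content script could look like, cloud-to-butt style (everything below is hypothetical, made up purely for illustration):

```typescript
// Hypothetical content script for the joke extension; no such extension
// exists, this is just what the find-and-replace could look like.

const REPLACEMENT = "$100B in OpenAI profits";

function deAGIfy(root: Node): void {
  // Walk every text node on the page and swap in the Microsoft/OpenAI definition.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    const text = node.nodeValue;
    if (text && text.includes("AGI")) {
      node.nodeValue = text.replace(/\bAGI\b/g, REPLACEMENT);
    }
  }
}

deAGIfy(document.body);
// Headlines now render as promised:
// "What $100B in OpenAI Profits Means for the Future of Humanity"
```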
Damn cat just stood on my phone and launched Gemini for the first time, so we can drop Google's monthly active user count by one relative to whatever they claim.
So, you know Ross Scott, the Stop Killing Games guy?
About 2 years ago he actually interviewed Yudkowsky.
The context being that Ross discussed his article on one of his monthly streams, and expressed skepticism that there was any threat at all from AI.
Yudkowsky got wind of his skepticism, and reached out to Ross to do a discussion with him about the topic. He also requested that Ross not do any research on him.
And here it is... https://www.youtube.com/watch?v=hxsAuxswOvM
I can't say I actually recommend watching it, because Yudkowsky spends the first 40 minutes of the discussion refusing to answer the question "So what is GPT-4, anyway?" (It's not exactly that question, but it's pretty close).
I don't know what they discussed afterwards because I stopped watching it after that, but, well, it's a thing that exists.
Rainbow, an Italian animation studio known for making Winx Club, is looking to hire a prompt engineer :-) If I were Italian I'd consider applying, if only to stop them from trying to sell NFTs and whitewashing their characters.
Anybody who has been around programmers for more than five minutes should not be surprised that many of them are enthusiastically adopting a tool that is harmful, destroying industries, sabotaging education, and hindering the energy transition, because they feel it gives them a moderate advantage.
That they respond to those pointing some of this out with mockery ("nuts", "shove your concern up your ass"), and that their peers see this mockery as reasonable discourse, is also not surprising. Tech is entirely built on the backs of workers with no regard for externalities or second-order effects.
Tech is also extremely bad at software. We habitually make fragile, insecure, complex, and hard-to-maintain code that backs poor UIs. The best-case scenario is that LLMs accelerate already-broken software dev processes in an industry built around monopolies and billionaire extremists.
You want my opinion, Zitron's on the money - once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind the bubble, and at anyone who worked on or heavily used AI during it.
For AI supporters specifically, I expect a triple whammy of mockery:
On one front, they're gonna be publicly mocked for believing tech billionaires' bullshit claims about AI, and publicly lambasted for actively assisting tech billionaires' attempts to destroy labour once and for all.
On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.
On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, "literally cannot tell good from bad".
The background is that the center-RIGHT government of Sweden is gonna launch an inquiry ("utredning") into why people aren't having (the RIGHT kind of) kids. Nothing new there, simply the same culture-war fretting already percolating in the Anglosphere.
Finland already has an investigation ongoing, and the spokesperson there raises the point that one societal change that's happened in the last 25 years is... social media.
Wouldn't it be delicious if it could be proved that Facebook and Twitter and TikTok are the reasons people don't get into relationships and have kids? Eat that, Elon!
Actually burst a blood vessel last weekend raging: Gary Marcus was bragging about his prediction record in 2024 being flawless.
Gary continues to have the largest ego in the world. Stay tuned for his upcoming book, "I Am God", when 2027 comes around and we are all still alive. Imo some of these predictions are kind of vague, and I wouldn't argue with someone who said reasoning models are a substantial advance, but my God, the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary is a fucking moron, a threat to humanity for underplaying the awesome power of super-duper intelligence, and a worse forecaster than the big-brain rationalists. To be clear, Habryka's objections are, overall, extremely fucking nitpicky, totally-missing-the-point dogshit in my pov (feel free to judge for yourself).
But what really made me want to drive a drill into my brain was the LW brigade rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit, full stop. In fact they are setting billions of VC money on fire! (Strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data; just how unaware are these people?) Oliver tries and fails at Olympic-level mental gymnastics by saying TSMC and NVIDIA are making money, so therefore AI is extremely profitable. In the same way, I presume, gambling is extremely profitable for degenerates like me, because the casino letting me play is making money. I rank the people of LW as minimally truth-seeking and big dumb out of 10. Also, weird fun little fact: in Daniel K's predictions from 2022, he said that by 2023 AI companies would be so incredibly profitable that they would be easily recouping their training costs. So I guess monopoly money that you can't see in any earnings report is the official party line now?
1:08:02: "There's a lot of discussion among the rationalist community about the uneven distribution of IQ and its correlation with race. Why is this a topic that people fixate on if they're also convinced that this ultra-intelligence, an AGI that's like smarter than every human on the planet... why are these marginal differences so important to people?"
Alright that's it: anime streaming needs to return to fansubbing (note: this link contains a skintight anime bosom so don't open it in front of your boss unless your boss is chill)
Haven't seen a newsletter of mine hit the top 20 on Hacker News and then get flag-banned faster; feels like it barely made it 20 minutes before being descended upon by guys who would drink Sam Altman's bathwater.
Also funny: the HN thread doesn't appear in their search.
An interesting takedown of "superforecasting" from Ben Recht: a three-part series on his Substack where he accuses so-called superforecasters of gaming scoring rules rather than actually being precogs. First (and least technical) part linked below...
“The term Defensive Forecasting was coined by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer in a brilliant 2005 paper, crystallizing a general view of decision making that dates back to Abraham Wald. Wald envisions decision making as a game. The two players are the decision maker and Nature, who are in a heated duel. The decision maker wants to choose actions that yield good outcomes no matter what the adversarial Nature chooses to do. Forecasting is a simplified version of this game, where the decisions made have no particular impact and the goal is simply to guess which move Nature will play. Importantly, the forecaster’s goal is not to never be wrong, but instead to be less wrong than everyone else.”
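To make the "less wrong than everyone else" framing concrete, here's a toy version of the forecasting game under a standard proper scoring rule, the Brier score (my own illustration, not code from the paper):

```typescript
// Toy forecasting game: forecaster vs. Nature, scored by the Brier score
// (squared error between forecast probability and the 0/1 outcome; lower is
// better). My own illustration, not code from the Vovk/Takemura/Shafer paper.

const brier = (p: number, outcome: number): number => (p - outcome) ** 2;

// Nature's moves: it rains on 30% of days.
const days: number[] = Array.from({ length: 100_000 }, () =>
  Math.random() < 0.3 ? 1 : 0,
);

// Average Brier score for a forecasting strategy over all days.
const avgScore = (forecast: () => number): number =>
  days.reduce((sum, y) => sum + brier(forecast(), y), 0) / days.length;

// "Bold" forecaster: always commits to 0 or 1. Expected score: 2*0.3*0.7 = 0.42.
const bold = avgScore(() => (Math.random() < 0.3 ? 1 : 0));

// Hedger: always forecasts the base rate. Expected score: 0.3*0.7 = 0.21.
const hedger = avgScore(() => 0.3);

console.log({ bold, hedger });
// The hedger "wins" the tournament without ever actually calling a single
// day, which is roughly the scoring-rule judo Recht is poking at.
```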
I had applied for a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it's following what I'm saying, like a real human does with "uh huh" or whatnot. It asked me if I ever did Docker and I answered that I transitioned a system to Docker. But I had done an awkward pause after the word "transitioned", so the AI bot congratulated me on my gender transition and it was on to the next question.
So two weeks ago I linked titotal's detailed breakdown of what is wrong with AI 2027's "model" (tl;dr: even accepting the line-goes-up premise of the whole thing, AI 2027's math was so bad that the line always asymptotes to infinity in the near future regardless of inputs). Titotal went to pretty extreme lengths to meet the "charitability" norms of lesswrong: corresponding with one of the AI 2027 authors, carefully considering what they might have intended, responding to comments in detail and depth, and in general not simply mocking the entire exercise in intellectual masturbation and hype generation like it rightfully deserves.
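For the flavor of the error without wading through the whole post, here's my own back-of-the-envelope reconstruction of the shape titotal describes (not the AI 2027 authors' actual code): if each successive doubling of the capability metric takes a fixed fraction less calendar time than the last, the total time to reach any height is a convergent geometric series, i.e. a built-in finite-time singularity.

```typescript
// Sketch of the "superexponential" failure mode: the doubling time shrinks
// by 10% per doubling, so total time to ANY capability level is bounded by
// firstDoubling * (1 + 0.9 + 0.9^2 + ...) = firstDoubling / (1 - 0.9).
// A reconstruction of the shape titotal describes, not the authors' model.

function yearsToSingularity(firstDoublingYears: number, shrink = 0.9): number {
  return firstDoublingYears / (1 - shrink); // geometric series limit
}

// Wildly different starting assumptions, same qualitative blow-up:
console.log(yearsToSingularity(0.5)); // 5 years
console.log(yearsToSingularity(2));   // 20 years
console.log(yearsToSingularity(10));  // 100 years, still finite
// Tweaking the inputs only moves the date; the vertical asymptote is baked in.
```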
Oh, and looking back at the comments on titotal's post... his detailed elaboration of some pretty egregious errors in AI 2027 didn't really change anyone's mind, at most moving them back a year to 2028.
So, moral of the story: lesswrongers and rationalists are in fact not worth the effort to talk to, and we are right to mock them. The numbers they claim to use are pulled out of their asses to fit vibes they already feel.
And my choice for most sneerable line out of all the comments:
A bit of old news, but one that's still upsetting to me.
My favorite artist, Kazuma Kaneko, known for doing the demon designs in the Megami Tensei franchise, sold his soul to make an AI gacha game. While I was massively disappointed that he was going the AI route, the model was supposed to be trained solely on his own art and thus I didn't have any ethical issues with it.
Fast-forward to shortly after release and the game's AI model has been pumping out Elsa and Superman.
There's at least one good thread (if not two, if you treat the HN response separately) that could be made from this. Don't have the time personally at the moment.
I will say that I'm shocked to see some reasonable shit in the HN comments: people saying the post is too long or the tone unacceptable are getting told off rather respectably, with some good explanations (effectively: this was written this way intentionally, you dolt). Broken clock and all that, I guess.
You want my personal opinion, the basic idea of "decomputing" that author Dan McQuillan is putting forward is likely gonna gain plenty of traction. The Trump administration more generally and DOGE more specifically have thoroughly undermined any notion of tech being an apolitical force, so arguing against the politics inherent to AI is gonna be an easier sell.