Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, happy Pride :3)
David Friedberg, a co-host of the All-In podcast that often features Musk and that has become a sounding board for the Trump-aligned tech world, suggested there was a broader cost to America from the spat between the US president and the Tesla boss. “China just won,” he posted.
Behind the scenes, prominent Silicon Valley figures were desperately trying to prevent Musk from appearing on an emergency episode of the podcast, according to two people familiar with the matter, out of concern that the billionaire would make the dispute even worse and poison the relationship with tech’s most powerful ally in Washington, vice-president JD Vance.
This piece, although in a way defeatist, also gives me hope because there's at least one other person who has the same general feeling about LLMs that I do, and is a better writer.
I'm gonna think that the latest drumbeat of pro-LLM posts (tpacek's screed, this excrescence) is a last gasp of a system running in midair like the Coyote, before the VC money dries up.
Please let me commiserate my miserable misery, Awful dot Systems. So the other day I was flirting with this person—leftie, queer, sexy terrorist vibes, just my type—and asked if they had any plans for the weekend, and they said like, "will be stuck in the lab trying to finish a report lol". They are an academic in an area related to biomedicine, I don't want to get more specific than that. Wanting to be there for emotional support I invited them to talk about their research if they wanted to. The person said,
"Oh I am paying for MULTIPLE CHATGPT ACCOUNTS that I'm using to handle the", I swear to Gods I'm not making this up, "MATHLAB CODE, but I keep getting basic errors, like wrong variable names stuff like that, so I have to do a lot of editing and…". Desperate emphases mine.
And at this point I was literally speechless. I was having flashbacks to back in 2016, when it was this huge scandal that 1 in 5 papers in genetics had data errors because they used Microsoft Excel and it would ‘smartly’ mangle tokens like SEPT2 into a date-time cell. The field has since evolved, of course (=they threw in the towel and renamed the gene to SEPTIN2, and similarly for other tokens that Excel gets too smart about). I was having ominous visions of what the entire body of published scientific data is about to become.
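For anyone who hasn't seen the Excel failure mode in the flesh: it silently reinterprets gene symbols like SEPT2 or MARCH1 as dates on import. A minimal sketch (my own illustration, not from the thread — the month list and helper name are made up for the example) of screening a symbol list for the vulnerable pattern:

```python
# Flag gene symbols that Excel's auto-formatting is likely to silently
# convert to dates -- the failure mode behind the 2016 genetics-papers
# scandal mentioned above. Purely illustrative; not an official checker.

MONTHS = {"JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "SEPT", "OCT", "NOV", "DEC",
          "MARCH", "APRIL", "JUNE", "JULY"}

def excel_mangles(symbol: str) -> bool:
    """True if the symbol looks like <month><number>, e.g. SEPT2 -> Sep-2."""
    parts = symbol.upper().split("-")          # handles "SEPT2" and "MARCH-1"
    head = parts[0].rstrip("0123456789")       # leading alphabetic chunk
    tail = parts[0][len(head):] or (parts[1] if len(parts) > 1 else "")
    return head in MONTHS and tail.isdigit()

symbols = ["SEPT2", "MARCH1", "TP53", "OCT4", "SEPTIN2"]
flagged = [s for s in symbols if excel_mangles(s)]
# SEPT2, MARCH1 and OCT4 get flagged; the renamed SEPTIN2 sails through,
# which is exactly why the field threw in the towel and renamed them.
```

The renaming workaround (SEPT2 → SEPTIN2) works precisely because the new symbols no longer start with a month abbreviation.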
I considered how otherwise cool this person was and whether I should start a gentle argument, but all I could say was "haha yeah, mathlab is hard".
I feel like a complete and utter blowhard saying this, but now that I told you the story I have no other choice but to blurt it out: I am no longer flirting with this person.
I wrote a memoir thing on my brief, dystopic time at Google. I'm not sure if me reminiscing about the time when I sold out fits the topic of the forum, but I think a lot of it qualifies as sneering and might generally interest this audience.
I have some observations on how I see the nonsense cresting. Just the other day a colleague remarked that they had been at a conference and there was less AI than last year.
Today, however, I was at an audio/video trade show. I don't usually go to such things, but it seemed like a good opportunity to catch up on what is available, it was close by, it was free, and you got a free lunch. There was some interesting stuff in the monitors, and Yealink had some new stuff for conference rooms. Then just before lunch everyone headed to the keynote address. And it was horrible. It was a CEO who bragged about how he had got ahead in life thanks to his "entrepreneurial mindset", though I would sooner say he bragged about bullshitting his way through life. And then it got worse when he got into AI. He quoted the AI's answer on why AI acts in certain ways ("Just ask it!"), and he claimed AI would cause at least 5 "penicillin-events" in the next 10 years, raising life spans to 180 and wiping out disease. At that point I just stood up and left, and skipped the free lunch.
It had only been 15 minutes out of an hour, and while he hadn't touched the topics of audio or video, he had established that nothing he said could be trusted, which means it wouldn't matter what he said about their actual products. No great surprise that a bullshit artist likes the bullshit machine. I am a little surprised more people didn't leave, but then again, social norms and free lunch.
There’s some optimism from the redditors that the LLM folk will patch the problem out (“you must be prompting it wrong”), though they assume the companies somehow just don’t know about the issue yet.
As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.
There’s some dubious self-published analysis which coined the term “neural howlround” to mean some sort of undesirable recursive behaviour in LLMs that I haven’t read yet (and might not, because it sounds like cultspeak) and may not actually be relevant to the issue.
It wraps up with a surprisingly sensible response from the subreddit staff.
Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well.
AI boosters not claiming expertise in something, or offloading the task to an LLM? Good news, though surprising.
hackernews enthusiast tpacek is filled with incredulity when some friends won't join his new religious movement. This has, of course, triggered a 1,200-reply-long thread:
I've decided that this year I'm going to be more open about this and wear a pride bracelet whenever I go in public this month. Including for (remote) work meetings where nobody knows... wonder if anyone will notice.
As the second season of “Poker Face” trickles out, Lyonne is shifting her focus to another project: her feature directorial debut, which she wrote with Brit Marling. Titled “Uncanny Valley,” the movie follows a teenage girl whose grip on the real world unravels when she is consumed by a popular augmented reality video game. The project will blend traditional filmmaking with AI, courtesy of what she describes as an “ethical” model trained only on copyright-cleared data.
“It’s all about protecting artists and confronting this oncoming wave,” says Lyonne, emphasizing that it is not a “generative AI movie” but uses tools for things like set extensions.
When the film was announced in April, many on the internet did not see it that way.
“It’s comedic that people misunderstand headlines so readily because of our bizarro culture of not having reading comprehension,” says Lyonne. “Suddenly I became some weird Darth Vader character or something. That’s crazy talk, but God bless!”
“I’ve never been inside of one of those before,” Lyonne says of the vortex of backlash. “It’s scary in there, if anyone’s wondering. It’s not fun when people say not nice things to you. It grows you up a bit.”
She looks at Johnson, who, in 2017, felt the wrath of “Star Wars” fanboys when he subverted expectations on the critically acclaimed, yet divisive “Last Jedi.” His advice: shut off the noise and just make things. In a social media era where film and TV projects are judged before they’re even made, “any great art, during the process of making it, is going to seem like a terrible idea that will never work,” he says. “Anything great is created in a bubble. If it weren’t, it would never make it past the gestation period.”
Lobsters went down a VC financing rabbit hole the other day (thanks to me and @dgerard) and a user horked up this absolutely bonkers defense of OpenAI losing a galactic sum of money:
OpenAI is very different. They mainly lose money on ChatGPT, but it’s not really lost money, because they in turn accumulate fresh data to further train their models. Data that none of their competitors have access to.
OpenAI is also different because AI is a major geopolitical factor at the moment and unless you’ve been living in a cave lately, you must have noticed that geopolitics is much more important than money these days. ChatGPT is an incredible intelligence gathering channel and cutting access to AI APIs would make US sanctions hurt that much more. The only other country that can compete with US companies when it comes to bulk training data access is China, via their social media alternatives like TikTok and RedNote. You can imagine the geopolitical implications of that too.
Has anyone heard of Boom Supersonic? Supposedly the company is building a new SST that can go supersonic without the sonic boom hitting the ground, by flying at or above 50,000 feet. They did a demo flight using a plane that doesn't use the engine tech the prospective finished plane will have, nor does it resemble the prospective airframe design, so it seems like they went fast to prove fast plane is fast, I guess?