Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 14 July 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Might be slightly off topic, but this is an interesting result using adversarial strategies against RL-trained Go engines.
Quote: If humans are able to use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.
Me thinks the AI bros jumped the gun a little too early declaring victory on this one.
Muse is a new creative platform that can create your own AI-generated series so you can dive into a new world of storytelling without the need for personal content creation.
Who the fuck are these people and why do I not have a button that spreads Lego bricks across their floor?
First off, if you've read The Singularity Is Near, which was published 19 years ago in 2005, you should be aware that the sequel book is a lot less technical.
From the just released GOP 2024 party platform (PDF), this is a single bullet point in CHAPTER THREE: BUILD THE GREATEST ECONOMY IN HISTORY:
Republicans will pave the way for future Economic Greatness by leading the World in Emerging Industries.
Crypto
Republicans will end Democrats’ unlawful and unAmerican Crypto crackdown and oppose the creation of a Central Bank Digital Currency. We will defend the right to mine Bitcoin, and ensure every American has the right to self-custody of their Digital Assets, and transact free from Government Surveillance and Control.
Artificial Intelligence (AI)
We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.
Expanding Freedom, Prosperity and Safety in Space
Under Republican Leadership, the United States will create a robust Manufacturing Industry in Near Earth Orbit, send American Astronauts back to the Moon, and onward to Mars, and enhance partnerships with the rapidly expanding Commercial Space sector to revolutionize our ability to access, live in, and develop assets in Space.
When your party platform is just a long-form weird tweet that you wrote after bong rips with Elon Musk.
Over at "work on climate" there's been an influx of companies that will greenwash using ChatGPT. One company I interviewed for boiled down (in my estimation) to using ChatGPT to make generic greening recommendations for a business and attach hallucinated numbers that the client can then pass off as their own.
Edit: is there a list of companies that use "prompt engineering" so that I can just avoid them?
With a moment’s contemplation after reading it, I just realized how spectacularly bad this could go if, for example, you went to search for a chemical’s Material Safety Data Sheet (MSDS) and a Large Language Model (LLM) gave you back some bullshit advice to take in the event of hazmat exposure or fire.
joke's on you, MSDSs are already dogshit. these things exist only to cover manufacturers' asses and are filled with generic, useless advice (see https://www.science.org/content/blog-post/uselessness-msds and https://www.science.org/content/blog-post/un-safety-data-sheets). there's an MSDS for sand, the MSDSs for tear gas and ethanol list the same dangers, and toxicity is overemphasized (because it's common) while other dangers like explosiveness are underappreciated (because they're not). we don't even need LLMs for this; humans (lawyers mostly, i guess) did the same by accident
also, bonus points for first-principling what could have been instead of asking somebody who actually knows, like any proper rationalist would. also, vinyl chloride is not reactive with water, and spraying pressurized containers with water can be a sensible thing to do: it cools them down, which lowers the pressure, which lowers the risk of rupture (the actually bad outcome), and it's manageable for firefighters to do safely. see: some fires involving propane tanks
An MSDS may not tell you what respirator to use;
Slander! The MSDS will tell you to use the right one ("appropriate respirator"); it's your job to figure out what that is.
This happened a while ago and I still have mixed feelings about it: a band I like started a music label and named it p(doom): https://pdoomrecords.com/
The cryptocurrency aspect is mostly just funny, but Google and Squarespace should know better than to effectively disable MFA out from under people. Tech companies put profit over people all the time. And then everyone blames the people for not being hyper-vigilant about computer security.
Some "defi" company realized this could be a problem 22 hours before they were hacked. Even had time to write a tool to mitigate the impact of getting hacked. Got hacked anyway. Did they uhh... IDK change their password? Make sure MFA was set up? They don't say.
The doom prediction in question? Dec 31st 2024. It's been an honour serving with you lads. 🫡
Edit: as a superforecaster, my P(Connor will shut the fuck up due to being catastrophically wrong | I wake up on Jan 1st with a pounding hangover) = (1/10)^100
While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using generative AI and "too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides."
The recent report from a group of scientists at Google who employ a combination of existing data sets, high-throughput density functional theory calculations of structural stability, and the tools of artificial intelligence and machine learning (AI/ML) to propose new compounds is an exciting advance. We examine the claims of this work here, unfortunately finding scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.
petition to use the term "pyramid sucking" to refer to the activity of defending the Incredible Potential of AI, crypto, the metaverse, whatever the next thing is, etc
(happened to notice this while digging into something else)
upwork's landing page has a whole big AI anchorblob. clicking from frontpage takes you to /nx/signup (and I'm not going to bother), but digging around a bit elsewhere finds "The Future Of Work With AI"
so we're now at the stage where upwork reckons it's a good bet to specifically hype AI delivery from their myriad exploitatively arbitraged service providers
(they're probably not wrong, I can see a significant chunk of companies falling over each other to "get into AI" at pay-a-remote-coder-peanut-shells prices)
Debug Shell: AI-powered package install suggestions for commands
in the app upgrade popup it's just bare text. in the documentation for debug shell there's no reference. in the release notes feed it's the same bare text
I've already sent feedback asking for more information about it, but just ..... what? I mean there's that annoying(-to-me) ubuntu shell hook that goes "oh hey $binary not found, try installing $pkg!" already, and that's been out for years, but what?
if/when I hear more I'll post a comment, I guess. in the meantime, consider me fucking bewildered.
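For anyone who hasn't run into that hook: all it does is look the missing command name up in a package index and print an install hint. Here's a minimal sketch of that idea in Python (emphatically not Ubuntu's actual implementation; the hard-coded lookup table is a made-up stand-in for the real apt package metadata it queries):

```python
#!/usr/bin/env python3
"""Minimal sketch of a "command not found -> suggest a package" helper.

This is only an illustration of the general idea; the mapping below is a
hypothetical stand-in for real package metadata.
"""
import sys

# Hypothetical mapping from a missing binary name to the package that ships it.
COMMAND_TO_PACKAGE = {
    "rg": "ripgrep",
    "convert": "imagemagick",
    "htop": "htop",
}


def suggest(binary: str) -> None:
    """Print an install hint if we know which package provides `binary`."""
    pkg = COMMAND_TO_PACKAGE.get(binary)
    if pkg:
        print(f"Command '{binary}' not found, but can be installed with:")
        print(f"  sudo apt install {pkg}")
    else:
        print(f"{binary}: command not found")


if __name__ == "__main__":
    suggest(sys.argv[1] if len(sys.argv) > 1 else "")
```

Whatever the "AI-powered" version adds on top of that is anyone's guess, which is rather the point.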