Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 30 June 2024
Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
thinking about how I was inoculated against part of ai hype bc a big part of my social circle in undergrad consisted of natural language processing people. they wanted to work at places with names like "OpenAI" and "google deepmind," their program was more or less a cognitive science program, but I never once heard any of them express even the slightest suspicion that LLMs of all things were progressing toward intelligence. it would have been a non sequitur.
also from their pov the statistical approach to machine learning was defined by abandoning the attempt to externalize the meaning of text. the cliche they used to refer to this was "the meaning of a word is the context in which it occurs."
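(sidebar for anyone who never sat through that class: the cliché is the distributional hypothesis, and the bare-bones version of it fits in a toy script. everything below — the tiny corpus, the window size, the cosine helper — is made up purely for illustration, not anything an actual lab ships:)

```python
# toy sketch of "the meaning of a word is the context in which it occurs":
# a word's vector is just counts of the words that show up near it, and
# "similar meaning" falls out as similar contexts. illustrative only.
from collections import Counter, defaultdict
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2  # how many neighbours on each side count as "context"

cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

def cosine(a, b):
    # similarity of two context-count vectors
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(cooc["cat"], cooc["dog"]))  # high: "cat" and "dog" keep the same company
print(cosine(cooc["cat"], cooc["on"]))   # lower: different company
```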
finding out that some prestigious ai researchers are all about being pilled on immanentizing agi was such a swerve for me. it's like if you were to find out that michio kaku had just won his fourth consecutive nobel prize in physics
Everyone explaining to me that lawyers actually read all the documents in discovery is really trying to explain to me, a computer scientist with 20 years of experience[1], how GPT works!
[1] Does OP have actual tech expertise? The answer may (not) surprise you!
You lawyers admit that sometimes you use google translate and database search engines, and those use machine learning components, and all ML is basically LLMs, so I'm right, Q.E.D.!
Lawyers couldn't possibly read everything in discovery, right?
Lawyers couldn't possibly pay for professional translation for everything, right?
With respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding, there's a separate category where a website or a publisher or a news organization had explicitly said, 'do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts.
Watch the entire interview if you're bored because he is in deep. Microsoft probably just hired the most AI-enthused person they could find.
Steve Yegge goes hard into critihype: there's no need for any junior people anymore, all you need is a senior prompt engineer. No word on what happens when the seniors retire or die off; guess we'll have AGI by then and it'll all work out. Also no word on how the legal profession will survive when all the senior prompt engineer's time is spent rewriting increasingly meaningless LLM responses as the training corpus inevitably degenerates from slurm contamination.
You know what would be awesome is if there was a way to easily see new posts to a thread, like if the "New" button actually put New posts on top. Maybe lemmy truly is too janky for that but it's a shame because I just start to ignore threads after a while.
okay at this point I should probably make a whole-ass perplexity post because this is the third time I'm featuring them in stubsack but 404media found yet more dirt
... which included creating a series of fake accounts and AI-generated research proposals to scrape Twitter, as CEO Aravind Srinivas recently explained on the Lex Fridman podcast
According to Srinivas, all he and his cofounders Denis Yarats and Johnny Ho wanted to do was build cool products with large language models, back when it was unclear how that technology would create value
tell me again how lies and misrepresentation aren't foundational parts of the business model, I think I missed it
What do normal people - people who don't pay for twitter, or sneer at rationalists - think of Twitter atp?
Went on to Twitter (my mistake) after seeing Inside Out 2 because it's the latest kid's movie to feature [trope that I found passe that I can't figure out how to spoil inline] and I see a post on my feed from "HBD Chick".
And I'm like, okay, that has to be "happy birthday," right? Nah, her third retweet is creamy porno redux.
Just like all the other right wingers and embarrassingly enthusiastic neoliberals and occasional Musk fans, I don't follow her or anybody that follows her; there's literally no connection or personal interest.
I feel like the post-Elon shift is really understated for how bad the site's gotten. Like I see more people talk about how Instagram reels is racist than I do about the average twitter replies section. I know a lot of left-leaning people fled for bluer pastures, but I'm surprised you don’t see more buzz about it from regular, non-power users.
no surprises here, Mozilla’s earlier stated goal of focusing on local, accessibility-oriented AI was just entryism to try to mask their real, fucking obvious goal of shoving integrations with every AI vendor into Firefox:
Whether it’s a local or a cloud-based model, if you want to use AI, we think you should have the freedom to use (or not use) the tools that best suit your needs. With that in mind, this week, we will launch an opt-in experiment offering access to preferred AI services in Nightly for improved productivity as you browse. Instead of juggling between tabs or apps for assistance, those who have opted-in will have the option to access their preferred AI service from the Firefox sidebar to summarize information, simplify language, or test their knowledge, all without leaving their current web page.
Our initial offering will include ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral, but we will continue adding AI services that meet our standards for quality and user experience.
I’m now taking bets on which of these vendors will pay the most to be the default in the enabled-by-default production version of this feature
this is making me seriously consider donating to Servo, the last shred of the spirit and goals of a good, modernized Firefox-style browser remaining, which apparently operates on a tiny budget (and with a whole army of reply guys waiting to point out they might receive grants which, cool? they still need fucking donations to do this shit and I’d rather give it to them than Mozilla or any other assholes making things actively worse)
thinking back to when I first switched to Mozilla during the MSIE 7-8 days and actually started having a good time on the web, daily driving Servo might not be an awful move once Firefox gets to its next level of enshittification. back then, Firefox (once it changed its name) was incredibly stable and quick compared with everything else, and generally sites that wouldn’t render right were either ad-laden horseshit I didn’t need, or were intentionally broken on non-IE and usually fixable with a plugin. now doesn’t that sound familiar?
seemingly yet more chatgpt jailbreaks, just by providing input that breaks through its guardrails some n times, after which it readily provides details
y'know, if I were the one making safety nets for these systems, I'd make them return exactly that kind of result (or other typical honeypot-type behaviour). and it's possible that that's what oai did. but it seems extremely unlikely that that's what they did, because it goes against the bayfucker gottagofast philosophy (and, frankly, against the level of competence I've seen displayed in the genml space overall)
(you can turn it off but in typical fashion: sneaky new setting, default-on, and the way to turn it off is hidden in settings instead of a direct “no fuck this” button. same shit twitter and fb pulled for years)
I tried using Claude 3.5 Sonnet and... it's actually not bad. Can someone please come up with a simple logic puzzle that it abysmally fails on so I can feel better? It passed the "nonsense river challenge" and the "how many sisters does the brother have" tests, both of which fooled gpt4.