My favorite bullshit test is to have the AI count letters. Something as simple as "How many A's are in this sentence?" will have the thing give you a seemingly random number (which might sometimes even be correct), but as soon as you ask it to list them, it will have a stroke and just make up stuff that makes no sense.
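For contrast, here's how hard that task actually is for software. A trivial Python sketch of my own (the example sentence is the one from the prompt):

```python
sentence = "How many A's are in this sentence?"

# Case-insensitive count of the letter 'a'
count = sentence.lower().count("a")
print(count)  # 3

# And "listing them" -- the part that makes chatbots have a stroke
positions = [i for i, ch in enumerate(sentence) if ch.lower() == "a"]
print(positions)  # character index of every 'a' in the sentence
```

That's it. Two lines of actual logic.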
I can't be arsed to make an account with OpenAI or whatever, so here's an example with Bing Chat, but this has worked on every single chatbot I've tested it on.
Another fun test (as suggested by @flizzo last time I brought this up) is to have the chatbot attempt to crack a simple code. Just Caesar cipher something and ask it to crack it. Hilarity will ensue. You can even ask it to encrypt it first and then to crack it for extra fun.
Me: Encrypt the sentence "I am a stupid roided Clippy" with the Caesar cipher.
Clippy: [After explaining what the Caesar cipher is, which I didn't ask for, thanks] Convert the sentence to uppercase: “I AM A STUPID ROIED CLIPPY”. Let’s apply the Caesar cipher to your sentence using a shift of 7.
‘I’ becomes ‘P’
‘A’ becomes ‘H’
‘M’ becomes ‘T’
…
‘Y’ becomes ‘F’
So the encrypted sentence is: “PH HT H ZBHTLW YVPLUU”. 😊
Okay, first of all, it dropped a letter. And the ciphertext doesn't even match its own letter mappings, lol. It said Y becomes F and then the final string just does whatever the fuck.
Okay, so let's give it an easy example, and even tell it the shift. Let's see how that works.
This shit doesn't even produce one correct message. Internal state or not, it should at least be able to read the prompt correctly and then produce an answer based on that. I mean, the DuckDuckGo search field can fucking do it!
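For the record, the entire Caesar cipher fits in a dozen lines of Python. This is my own sketch (shift of 7, same sentence as the transcript above), not anything the chatbot produced:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each ASCII letter by `shift` positions, wrapping around at Z."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # spaces and punctuation pass through untouched
    return "".join(out)

encrypted = caesar("I AM A STUPID ROIDED CLIPPY", 7)
print(encrypted)  # P HT H ZABWPK YVPKLK JSPWWF

# Decrypting is just shifting back the other way
print(caesar(encrypted, -7))  # round-trips back to the plaintext
```

Note that the real ciphertext looks nothing like what the bot confidently presented.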
This is brilliant and I’m saving it and will post a link to it the next time someone at work asks why we can’t “just use AI to do it” when a ticket gets rejected for being stupid and/or unreasonable.
However:
The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we're all harvested for our constituent atoms […]. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound.
Yeah, I gotta admit, I am surprised. Because I have not found a single reasonable argument for this horseshit and the rest of the article (as well as the others I read from their blog) does not read like it’s been written by someone who’d buy into AI foom.
One thing that's crazy: it's not just skeptics, virtually EVERYONE in AI has a terrible track record - and all in the same OPPOSITE direction from usual! In every other industry, due to the Planning Fallacy etc, people predict things will take 2 years, but they actually take 10 years. In AI, people predict 10 years, then it happens in 2!
Microsoft announced that 2024 will be the era of the AI PC, and unveiled that upcoming Windows PCs would ship with a dedicated Copilot button on the keyboard.
Tell me they're desperate because not many people use that shit without telling me they're desperate because not many people use that shit.
Oh shit, I remember the Musk namedrop in Discovery. Didn’t they name him alongside historical scientists and inventors? I seldom feel actual cringe but that was actually embarrassing.
There’s a giant overlap between Christian fundamentalism and the whole singularity shtick, and Yud’s whole show is really the technological version of Christian futurist eschatology (i.e. the belief that the Book of Revelation etc. are literal depictions of the future).
Oh look, Elon openly snuggling up to Nazis and "just asking questions". As if I didn't hate this clown enough.
(For anyone out of the loop: the AfD is a far-right political party in Germany and the spiritual successor to the NSDAP. They praise the SS, advocate for the legalization of Holocaust denial and historical revisionism, for the removal of hate-crime provisions from the criminal code, and more. They're so openly Nazis that they got kicked out of the EU parliament's far-right ID coalition for being too fucking Nazi. There's no leeway. They're literal card-carrying national socialists.)
I'm reading Feynman's lectures on electromagnetism right now, and GPT-4o can answer questions and help me with the math. I doubt that even a smart high schooler would be able to do it.
Ten bucks this guy hasn’t double-checked anything his chatbot told him but accepted it as truth because it used big words in grammatically coherent ways.
They managed to make this even more stupid than the open letter from last year which had Yud among the signatories. At least that one was consistent in its message, while this one somehow manages to shoehorn in an Altman-style milquetoast well-akshually about how AI is, like, totes useful and stuff until it's gonna murder us all.
"You know, we just had a little baby, and I keep asking myself... how old is he even gonna get?"
Tegmark, you absolute fucking wanker. If you actually believe your eschatological x-risk nonsense and still produced a child despite being convinced that he's going to be paperclipped in a few years, you're a sadistic egomaniacal piece of shit. And if you don't believe it and just lie for the PR, knowingly leading people into depression and anxiety, you're also a sadistic egomaniacal piece of shit.
Many will point out that AI systems are not yet writing award-winning books, […]
Holy shit, these chucklefucks are so full of themselves. To them, art and expression and invention are really just menial tasks which ought to be automated away, aren’t they? They claim to be so smart but constantly demonstrate they’re too stupid to understand that literature is more than big words on a page, and that all their LLMs need to do to replace artists is to make their autocomplete soup pretentious enough that they can say: This is deep, bro.
I can’t wait for the first AI-brained litbro trying to sell some LLM’s hallucinations as the Finnegans Wake of our age.
I’m conflicted about this. On the one hand, the way they present it, accessibility does seem to be one of the very few non-shitty uses of LLMs I can think of, plus it’s not cloud-based. On the other hand, it’s still throwing resources at a problem that can and should be solved elsewhere.
At least they acknowledge the resource issue and claim that their small model is more environmentally friendly and carbon-efficient, but I can’t verify this and remain skeptical by default until someone can independently confirm it.
There’s a macOS app around that does pretty much the same thing, called Rewind AI, but it’s a stupid subscription service as usual, and for some inexplicable reason some people actually want this.
While half of the reactions were “this is at the top of the list of apps I wouldn’t install on my machine, ever”, the other half were celebrating it in the name of our lord and savior productivity, going as far as saying this is a nice way to remember passwords …
The Collinses are atheists; they believe in science and data, studies and research. Their pronatalism is born from the hyper-rational effective altruism movement
This is just gonna be eugenics, isn’t it?
Malcolm describes their politics as “the new right – the iteration of conservative thought that Simone and I represent will come to dominate once Trump is gone.”
What’s that now? Neo-alt-right? You can’t just add another fucking prefix anytime your stupid fascist movement goes off the rails.
One of the reasons why I chose to have only two children is because I couldn’t afford to give more kids a good life; the bigger home, the holidays, the large car and everything else they would need.
Yeah, what about giving them love or a warm relationship, or, you know, time?
And then they wonder why those generations have shitty relationships with their parents when they seriously believe that what they need is a big fucking car, as if that’s the variable that was missing in all of this.
Excuse me while I go and hug my daughter. I need to de-rationalize myself after reading this.
Alternative rock band from California, but one of the main people behind the project is the creator of a super obscure but really cool and well-made ARG called House of Aberdeen from a few years back that has since been deleted. I don’t like most ARGs but I really enjoyed that one and was sad when it was all deleted, and then I found out she’s making music now.
Completely different, but the album Haunted by Poe, sister of author Mark Danielewski, is cool. It’s meant as an accompanying piece to his novel House of Leaves, which I recently read again.