I have tried using hallucinating digital parrots (note the plural) for months (note the plural).
They are dangerous, not useful. If you find them useful you're missing something and THAT'S where the danger lies.
I didn't. Read the commentary section. ;) It was an illustration of how subtle the problem actually is. :D
I'm going to have to see an example of play that highlights the various subsystems here. I can't make heads or tails of how to use this as-is.
Lithophones are specifically stone in origin. (Not all lithophones are made of stone anymore, of course, just as not all woodwinds are made of wood: flutes, I'm looking at you here.) The more general category, which would also include wooden sounding bars and the like, is "idiophones", IIRC.
I found out and continue to fuck around with this. It's lovely!
Obligatory (at the right time code!): https://www.youtube.com/watch?v=notKtAgfwDA&t=63s
First, I'll give you the answer to your question. Then I'll add some commentary.
To identify AI-generated misinformation on social media, users should employ a combination of critical thinking, technical tools, and awareness of common tactics:
1. Analyze visual anomalies in images
Look for AI-generated image flaws like distorted hands (extra fingers, fused digits), unnatural lighting, or illogical object arrangements. Pages posting large volumes of stylistically similar content (e.g., "wood carving phases" followed by "sushi art phases") may indicate AI content farms.
2. Verify sources and cross-check claims
Search key phrases or use reverse image search tools (Google Lens) to check image origins. Reputable news organizations rarely break stories exclusively through random social media posts – absence of coverage elsewhere is a red flag.
3. Assess emotional manipulation tactics
Content provoking strong outrage, fear, or excitement deserves extra scrutiny. AI-generated political deepfakes often use inflammatory language alongside synthetic media.
4. Investigate account behavior
Suspicious accounts may exhibit:
- Excessive emoji/hashtag use
- Repetitive phrasing patterns
- Rapid follower growth without clear cause
- Coordinated posting across multiple pages
5. Leverage media literacy resources
Use tools like the News Literacy Project for verification techniques. Platforms like NewsGuard track AI-enabled fake news sites, which increased 10x in 2023.
6. Assume synthetic content is possible
Even professional-looking videos could be AI-generated (e.g., OpenAI's Sora). For text, AI detectors are unreliable – focus on content plausibility and sourcing instead.
7. Report suspicious content
Flag posts to platform moderators and share debunking resources in comments when safe. Collaborative efforts between tech companies, fact-checkers, and governments are improving detection systems.
Critical first steps:
- Pause before sharing emotionally charged content
- Verify through independent searches
- Consider motives – does this content benefit scammers/spammers?
Now for the commentary.
The above was generated by a degenerative AI. (Perplexity, to be specific.) Perhaps an expert in the field might tell that it's LLM output, but I sure as Hell couldn't. And that's where the problem lies. It was once trivial to spot content generated by LLMs because the hallucinations were egregious. I didn't fact-check everything in this case (like, do NewsGuard, Sora, and the News Literacy Project even exist?), but if someone posted that to me I would not bother fact-checking, because it's very plausible on the face of it.
For images there's one tell that the bot didn't provide. Because of the way image generation works, there's an unnatural evenness of light and dark spaces in AI imagery. Once you've trained yourself a bit on known AI images compared with equivalent non-AI images you can't unsee it. But it's subtle and you have to explicitly look for it. It's not something that's going to jump out at you. The metadata checks others mentioned here, and the motivational checks, are probably a better bet.
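For what it's worth, here's a minimal Python sketch (Pillow + NumPy, with hypothetical file names) of one way to put a rough number on that luminance spread. It's emphatically not a detector; it's just a crude aid for training your eye by comparing a suspect image against a known photograph.

```python
# Crude sketch, not a detector: put a rough number on how spread out an
# image's light and dark values are, so a suspect image can be compared
# against a known photograph. File names below are hypothetical.
from PIL import Image
import numpy as np


def luminance_spread(path: str) -> float:
    """Standard deviation of pixel luminance, with pixels scaled to 0..1."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0
    return float(grey.std())


def compare(known_photo: str, suspect: str) -> None:
    """Print the luminance spread of both images side by side."""
    for label, path in (("known photo", known_photo), ("suspect", suspect)):
        print(f"{label}: luminance std = {luminance_spread(path):.3f}")


# compare("holiday_snapshot.jpg", "viral_post.jpg")
```

A flatter (smaller) spread proves nothing on its own, which is exactly why the metadata and motivational checks still matter more.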
Mixing household cleaners.
I tried to clean a stubborn stain in a toilet once. I used a toilet bowl cleaner and it just wasn't doing the job. In a fit of pique and stupidity (pique stupidity) I took a bottle of bleach and dumped it on the stain.
I knew something went wrong the moment I saw the bubbling. And the weird green stuff coming from the toilet in what looked for all the world like stranded smoke. But green.
So I hastily looked at the toilet bowl cleaner's ingredients and the huge warning label that specifically said not to mix it with bleach. A quick formula translation in my head later:
NaClO + 2HCl → Cl₂ + H₂O + NaCl
Fuck.
I fled the bathroom and ran to the balcony as chlorine gas filled the apartment's lower half, spilling out of windows as it filled to that height, following me out the balcony door and bathing my legs up to my knees in chlorine as it spilled over the edge, with me leaning as far as I could into fresh air so I wouldn't breathe it in.
After what seemed like forever, I was able to hazard going back inside. I then opened the front door and set fans in each room to blow toward the hall where fans blew the air out the front door.
Two good things happened from this.
- The toilet had never looked cleaner. It was like it had been freshly installed straight from the factory.
- There were no living vermin in that apartment. From the smallest dust mite on up.
Ever since then, I don't mix household chemicals. Ever. Even if I "know" it's safe.
Technically this doesn't really count as an obscure instrument where I live, but I suspect there are very few people outside of here who know it. These are stone chimes that date back to "scary-antiquity" times (at least 2,500 years and likely more). The set being played is a reproduction of the one found in the tomb of Marquis Yi of Zeng; the original is currently on display in the Hubei Provincial Museum.
As is usual when describing some of the odder musical instruments here, I use the "it's like … but" formulation.
It's like a xylophone, but arranged sideways, and also suspended on wires or thin ropes (depending on which era), oh, yeah, and the sounding plates are made of stone.
In Toronto I ran into Styx at the airport.
I mean it literally. I bounced off Tommy Shaw and kind of reeled into James Young. (Totally my fault.)
They were actually really nice, checking if I was OK and, once it clicked who I was talking to, putting up with my squeeing fangurl behaviour. We were all waiting for our respective flights, so I got to talk with them for a good half-hour before my flight started boarding.
The USA. Hate to put it so starkly, but right now I wish the USA would just vanish without a trace.
As a child and teenager I was a USA-stan. Then I was "so bored with the USA". Now I wish it would just shut the fuck up and go away.
Because they don't want (mental) toddlers high on Halloween candy voting? 🤷‍♀️
TIL that rectally administered diazepam is a thing.
I think I could have lived my whole life not knowing that and been the happier for it.
Having to explain that a certain infamous "Chinese alphabet" font¹ (favoured by tattoo joints everywhere) is not how you write in Chinese. There is a shocking number of people who have somehow managed to grow up, not just to adulthood but to senior-citizen age, thinking that foreign languages are just English with funky spelling: that the grammar rules are otherwise the same, and that words translate one for one (and sometimes, in extreme cases like the gibberish font, letter for letter).
¹ https://hanzismatter.removed/2006/08/gibberish-asian-font-mystery-solved.html
Unfortunately Kagi is out of reach of most of the world. I'm sure it's great for those privileged enough to have access to it, though.
Similar enough tastes that we have something in common to talk about, but different enough tastes that there's a reason to talk.
it is useful for juniors
[citation needed]
I would argue that for juniors in particular AI is dangerous because they lack the mental tools necessary to spot the hallucinations and thus bad information and bad work will be amplified, not ameliorated.
But of course people who are actually competent at their jobs don't need the "help" that AI offers.
It's one of those conundrums: dangerous for half, useless for the other half. LET'S PUMP IN BILLIONS!
What? And remove the joy of people making that pun themselves? Nah. It's more fun letting other people have fun!
When he struggles to reach across the board to move his chariot, I lose the plot.
This is what happens if you get an American djent drummer working together with a Chinese jazz bassist and a Chinese jazz guitarist, creating polyrhythmic nigh-cacophony that gets tied together into a coherent whole by the singing of an Immortal come down from the moon after a Friday night bender.
So when they return to port they can just Scandinavian.
explanation if needed
"scan the navy in"
Apparently he doesn't understand cyberpunk either, which explains so much about him.
More than 1,300 scientists have signed a letter calling on the world’s oldest science society to reassess the billionaire’s membership after cuts to US science.

If only it were instead his membership in Society in general being revoked.
Elon Musk has admitted he wishes he could get pregnant — and fortunately, his in-house AI can make that fantasy a reality.

The noted anti-trans Apartheid Manchild wants to have babies?
From the time a full subway car leaves a Beijing metro station to the time the next one takes its place is 51 seconds.
Here in Wuhan it ranges from 2 minutes to 5 minutes depending on the line and time of day. In Beijing it's 51 seconds.
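Do the maths on that: 3600 ÷ 51 ≈ 70 trains an hour, versus somewhere between 12 and 30 an hour here.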
Wow.
NGL, I'm kinda jealous.
My Dearest Sinophobes:
Your knee-jerk downvoting of anything that features any hint of Chinese content doesn't hurt my feelings. It just makes me point and laugh, Nelson Muntz style, as you demonstrate time and again just how weak American snowflake culture really is.
Hugs & Kisses, 张殿李