Hollow-eyed insincere robot posters are already flooding Meta's sites and they're everything we dreaded.
Soon people discovered that Meta’s ghoulish posters had been among us for months, even years. There’s Liv, a “Proud black queer momma of 2 & truth-teller,” according to its Instagram profile. Add to that Brian, “everybody’s grandpa;” Jade, “your girl for all things hip-hop;” and Carter, a “relationship coach.” I’m sure there are more yet to be discovered.
All four of these posters have pages on both Facebook and Instagram with mirrored content and all four have post histories that go back to September 26, 2023. The accounts have the blue verified check marks and a label indicating that they’re an AI “managed by Meta.” Users can block them on Facebook, but not on Instagram. Users can also message them across all of Meta’s platforms, including WhatsApp.
This all really feels like a desperate attempt to save their dying social networks, just filling their sites with garbage to make it look like they're not declining.
Something about George Romero’s 1978 film about doomed survivors riding out the zombie apocalypse in a shopping mall feels resonant today as I look across Meta’s suite of AI-created profiles. The movie’s blue-skinned corpses don’t know they’re dead. They just wander through the shopping center on autopilot, looking for something new to consume.
That’s how many of our social media spaces feel now. Digital town squares populated by undead posters, zombies spouting lines they learned from an LLM, the digested material from decades of the internet spewed back at the audience. That’s what Meta is selling now.
This is some legit dystopian shit right here. I almost wish I hadn’t abandoned these platforms years ago, so that I could take this opportunity to do it. I hope that more people wake up to the insidious nature of this company and these services.
I still don't understand why they're doing it. They can't possibly think this will be good long-term, can they?
And what's with the article ending on a note that tries to make reviving nuclear power sound like a negative thing? If that's the result of AI slop, then I'd call it a net positive.
The best part is no one is going to leave the platform over this except a small minority of nerds like us. The average person is addicted to social media. Ain't nothing going to drive them away.
It feels like Zuckerberg is out of ideas and just chases the trends to stay relevant. He was all in on the “metaverse” until he wasn’t, once the hype around the concept evaporated. Now he’s all in on AI because everyone else is. Even the Quest, before it became the face of the metaverse, was him chasing someone else’s big idea.
He and the company haven’t had original ideas in a very long time. Even the smaller ones, like Stories and Reels, are just ideas copied from someone else.
I avoid any multiplayer game that has bots you can't filter out, where you're forced to play with them without knowing they're bots. It sucks all the fun out of the game.
This is getting really fucking creepy. I suppose next they're going to start following people and sending them private messages, because anything is worth it for our precious "engagement".
This is brilliantly dystopian. They can create hyper-specific profiles to target specific eyeballs and feed ads from the profile itself. They no longer have to rely on users posting things others would react to. They can just write what they know will generate reactions.
You cannot really pollute these platforms any more than they already are imo. The AI slop perfectly resonates with the rest of the garbage there, so it shouldn't make a difference.
Obviously this is all stupid and you'll find problems anywhere you choose to look.
The problem I'm finding is this: if Facebook truly is betting on AI getting better as a way to drive growth, then why are they further poisoning their own datasets? Even if you exclude everything your own bots say from your training data, which you could probably do since you know which accounts they are, this still encourages more AI slop on the platform. You don't know how much of the "engagement" you're driving (which they are likely just turning around and feeding back into the AI training set) is actually human, AI grifter, or someone poisoning the well by making your AIs talk to themselves. If you actually cared about making your AI better, you couldn't use any of the responses to your bots, since most of them will be of dubious provenance at best.
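To make the point concrete, here's a rough sketch of the easy half of that filtering, assuming made-up field names (author_id, parent_author_id) and a hypothetical set of known Meta-managed bot accounts. Nothing here reflects Meta's actual pipeline; it just shows why the bots' own posts are trivial to drop while the provenance of everything around them stays murky.

```python
# Hypothetical sketch: strip known-bot content from a training corpus.
# Field names and bot IDs are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_id: str
    parent_author_id: str | None  # None for top-level posts
    text: str

# Meta labels its own AI accounts, so in principle their IDs are known.
KNOWN_BOT_IDS = {"liv_ai", "grandpa_brian_ai"}  # hypothetical IDs

def filter_training_posts(posts: list[Post]) -> list[Post]:
    """Keep posts that are neither written by a known bot nor replying to one."""
    return [
        p for p in posts
        if p.author_id not in KNOWN_BOT_IDS
        and p.parent_author_id not in KNOWN_BOT_IDS
    ]

# The hard cases the comment raises -- third-party AI-grifter accounts and
# humans deliberately baiting the bots -- look exactly like organic posts to
# this filter. That gap is the provenance problem.
```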
Personally I'm rooting for the coming Hapsburg-AI problem, so I don't really have that much of a problem with Facebook deciding more poison is a brilliant business move. But uh... seems real dumb if you're actually interested in having a functional LLM.