Have you seen a Facebook post and wondered if it was AI-generated or human-written? In this study, we analyze the impact of ChatGPT and generative AI tools on Facebook posts. These are our findings.
Originality.AI looked at 8,885 long Facebook posts made over the past six years.
Key Findings
41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
They don't talk much about their secret sauce. That 40% figure is based on "trust me bro, our tool is really good". Would have been nice to be able to verify this figure / use the technique elsewhere.
It's pretty tiring to keep seeing ads masquerading as research.
I wouldn’t be surprised, but I’d be interested to see what they used to make that determination. All of the AI detectors I know of are prone to a lot of false positives.
FB has been junk for more than a decade now, AI or no.
I check mine every few weeks because I'm a sports announcer and it's one way people get in contact with me, but it's clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I'm not a fan of having my day derailed.
I deleted Facebook in like 2010 or so, because I hardly ever used it anyway. It wasn't really bad back then, just not for me. Six or so years later a friend of mine wanted to show me something on FB but couldn't find it, so he was just scrolling, and I was blown away by how bad it was: just ads, auto-played videos, and absolute garbage. And from what I understand, it just got worse and worse. Everyone I know who still uses Facebook is there for Marketplace.
My brother gave me his Facebook credentials so I could use marketplace without bothering him all the time. He's been a liberal left-winger all his life but for the past few years he's taken to ranting about how awful Democrats are ("Genocide Joe" etc.) while mocking people who believe that there's a connection between Trump and Putin. Sure enough, his Facebook is filled with posts about how awful Democrats are and how there's no connection between Trump and Putin - like, that's literally all that's on there. I've tried to get him to see that his worldview is entirely created by Facebook but he just won't accept it. He thinks that FB is some sort of objective collator of news.
In my mind, this is really what sets social media apart from past mechanisms of social control. In the days of mass media, the propaganda was necessarily a one-size-fits-all sort of thing. Now, the pipeline of bullshit can be custom-tailored for each individual. So my brother, who would never support Trump and the Republicans, can nevertheless be fed a line of bullshit that he will accept and help Trump by not voting (he actually voted Green).
Deleted my account a little while ago, but for my feed I think it was higher. You couldn't block them fast enough, and they were mostly obviously-AI pictures that, if the comments are to be believed as coming from actual humans, people believed were real. It was a total nightmare land. I'm sad that I have now lost contact with the few distant friends I had on there, but otherwise NOTHING lost.
8,855 long-form Facebook posts from various users, collected via a third party. The dataset spans from 2018 to November 2024, with a minimum of 100 posts per month, each containing at least 100 words.
Seems like that's a good baseline rule, and that was about the total number that matched it.
Only turning up 9k posts with over 100 words across a six-year stretch feels like a reach problem. You could just as easily conclude that bots have better reach.
This whole concept relies on the idea that we can reliably detect AI, which is just not true. None of these "AI detector" apps or services actually work reliably. They have terribly low success rates. The whole point of LLMs is to be indistinguishable from human text, so if they're working as intended then you can't really "detect" them.
So all of these claims, especially the precision to which they write the claims (24.05% etc), are almost meaningless unless the "detector" can be proven to work reliably.
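To see why detector reliability matters so much, here's some back-of-the-envelope base-rate arithmetic. The rates below are made-up assumptions for illustration, not figures from the article:

```python
# Rough base-rate arithmetic: what an imperfect "AI detector" would report.
# All rates below are hypothetical assumptions for illustration.

def reported_ai_share(true_ai_share, true_positive_rate, false_positive_rate):
    """Fraction of posts a detector flags as AI, given its error rates."""
    flagged_ai = true_ai_share * true_positive_rate          # real AI, caught
    flagged_human = (1 - true_ai_share) * false_positive_rate  # humans, misflagged
    return flagged_ai + flagged_human

# Suppose only 10% of posts are really AI, the detector catches 90% of them,
# but also misfires on 5% of human posts:
print(reported_ai_share(0.10, 0.90, 0.05))  # 0.135 -> reported as "13.5% Likely AI"

# Even with ZERO real AI posts, that same detector still reports 5%:
print(reported_ai_share(0.00, 0.90, 0.05))  # 0.05
```

So a headline like "24.05%" is really "24.05% of posts were flagged," and without published error rates there's no way to back out the true share.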
Thank you. I’ve wondered the same thing. I mean, the whole goal of LLMs is to be indistinguishable from normal human-created text. I have a hard time telling most of the time. Now, the images I can spot in a heartbeat. But I imagine that will change too.
Not enough attention is given to the literal arms race we find ourselves in. Most big tech buzz is all "yay, innovation!" or "oh no, jobs!"
Don't get me wrong, the impact AI will have on pretty much every industry shouldn't be underestimated, and people are and will lose their jobs.
But information is power. Sun Tzu knew this a long time ago. The AI arms race won't just change job markets - it will change global markets, public opinion, warfare, everything.
The ability to mass produce seemingly reliable information in moments - and the consequent inability to trust or source information in a world flooded by it...
I can't find the words to express how dangerous it is. The long-term consequences are going to be on par with - and terribly codependent with - the consequences of the industrial revolution.
Yeah. This is a way bigger problem with this article than anything else. The entire thing hinges on their AI-detecting AI working. I have looked into how effective these kinds of tools are because it has come up at my work, and independent review of them suggests they're, like, 3-5 times worse than the (already pretty bad) accuracy rates they claim, and disproportionately flag non-native English speakers as AI-generated. So, I'm highly skeptical of this claim as well.
AI does give itself away over "longer" posts, and if the tool makes about an equal number of false positives to false negatives then it should even itself out in the long run. (I'd have liked more than 9K "tests" for it to average out, but even so.) If they had the edit history for the post, which they didn't, then it's more obvious. AI will either copy-paste the whole thing in one go, or will generate a word at a time at a fairly constant rate. Humans will stop and think, go back and edit things, all of that.
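The timing heuristic above can be sketched as a toy function. Everything here is invented for illustration (the function name, the 0.3 threshold, the assumption that you even have per-edit timestamps, which the study didn't):

```python
import statistics

def looks_pasted_or_machine_typed(event_times, min_events=5):
    """Toy heuristic over edit-event timestamps (in seconds).

    Very few events -> likely a single big paste; near-constant gaps
    between events -> machine-like generation; bursty, irregular gaps
    -> human-like typing. Thresholds are invented for illustration.
    """
    if len(event_times) < min_events:
        return True  # whole post arrived in one or two events: paste-like
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = statistics.mean(gaps)
    spread = statistics.pstdev(gaps)
    if mean <= 0:
        return True
    # Coefficient of variation: humans pause and backtrack, so their
    # gaps vary a lot; a steady token stream barely varies at all.
    return spread / mean < 0.3

# Steady 0.2s ticks -> flagged as machine-like:
print(looks_pasted_or_machine_typed([0.2 * i for i in range(20)]))         # True
# Irregular, human-ish pauses -> not flagged:
print(looks_pasted_or_machine_typed([0, 1.1, 1.4, 5.0, 5.2, 9.9, 10.4]))  # False
```

Of course, a real classifier would need far more signal than this, and anyone gaming it could just add jitter to the event stream.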
I was asked to do some job interviews recently; the tech test had such an "animated playback", and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions were nothing to do with the job role at hand and were causing us to select for the wrong candidates completely, but that's more a problem with our HR being blindly in love with AI and "technical solutions to human problems".
"Absolute certainty" is impossible, but balance of probabilities will do if you're just wanting an estimate like they have here.
I have no idea whether the probabilities are balanced. They claim 5% was AI even before ChatGPT was released, which seems pretty off. No one was using LLMs before ChatGPT went viral except for researchers.
Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.
I see no reason why "post right-wing propaganda" and "write so you don't sound like AI" should be conflicting goals.
The actual argument why I don't find such results credible is that the "creator" is trained to sound like humans, so the "detector" has to be trained to find stuff that does not sound like humans. This means, both basically have to solve the same task: Decide if something sounds like a human.
To be able to find the "AI" content, the "detector" would have to be better at deciding what sounds like a human than the "creator". So for the results to have any kind of accuracy, you're already banking on the "detector" company having more processing power / better training data / more money than, say, OpenAI or google.
But also, if the "detector" was better at the job, it could be used as a better "creator" itself. Then, how would we distinguish the content it created?
The most annoying part of that is the shitty render. I actually have an account on one of those AI image generating sites, and I enjoy using it. If you're not satisfied with the image, just roll a few more times, maybe tweak the prompt or the starter image, and try again. You can get some very cool-looking renders if you give a damn. Case in point:
It's incredible, for months now I see some suggested groups, with an AI generated picture of a pet/animal, and the text is always "Great photography". I block them, but still see new groups every day with things like this, incredible...
I have a hard time understanding facebook’s end game plan here - if they just have a bunch of AI readers reading AI posts, how do they monetize that? Why on earth is the stock market so bullish on them?
They want dumb users consuming ai content, they need LLM content because the remaining users are too stupid to generate the free content that people actually want to click.
Then they pump ads to you based on increasingly targeted AI slop selling more slop.
As long as they can convince advertisers that enough of the activity is real, or that enough of the manipulation of public opinion via bots is in Facebook's interest, bots aren't a problem at all in the short term.
Title says 40% of posts, but the article says 40% of long-form posts, yet doesn't in any way specify what counts as a long-form post. My understanding is that the vast majority of Facebook posts are about the length of a tweet, so I doubt that the title is even remotely accurate.
Yeah, the company that made the article is plugging their own AI-detection service, which I'm sure needs a couple of paragraphs to be at all accurate. For something in the range of just a sentence or two it's usually not going to be possible to detect an LLM.
I’ve posted a notice to leave next week. I need to scrape my photos off, get any remaining contacts, and turn off any integrations. I was only there to connect with family. I can email or text.
FB is a dead husk fake-feeding some rich assholes. If it's a coin flip whether a post is AI, what's the point?
Back when I got off in 2019, there was a tool (Facebook-sponsored, somewhere in the settings) that allowed you to save everything in an offline HTML file that you could host locally to get access to things like picture albums, complete with descriptions and comments. Not sure if it still exists, but it made getting off incredibly painless while still retaining things like pictures.
It still existed when I did the same thing a year ago or so. They implemented it a while back to try and avoid antitrust lawsuits around the world. Though, now that Zuckerberg has formally started sucking this regime's dick, I wouldn't be surprised if it goes away.
The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post the user naively sourced from ChatGPT or Gemini and took as fact.
No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.
…Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.
Also… the tremendous irony here is Meta is screwing themselves over.
They've bet their future on AI, and are smart enough to release the weights and fund open research, yet their advantage (a big captive dataset, aka Facebook/Instagram/WhatsApp users) is completely overrun with slop that poisons it. It’s as laughable as Grok (X’s AI) being trained on Twitter.
Anyone on Facebook deserves to be shit on by slop. They also deserve to be scammed out of all of their money and anything else.
If you’re on Facebook, you deserve this. Get the hell off Facebook.
Edit: ITT: brain-dead, fascist-apologist Facebook users who just refuse to accept that their platform is one of the biggest enablers of Nazi fascism in this country, and that they are all 100% complicit.
This is some Facebook-quality content you're bringing to us here. It's so great seeing this kind of post on my feed first thing in the morning. Shows that it's not just AI poisoning our social media platforms.