Instagram finds that AI MrBeast scams do not go against community guidelines.
These scammers use MrBeast's popularity, his reputation for generosity, and (mostly) deepfake AI to trick people into downloading malware, yet the ads somehow do not go against Instagram's community guidelines.
After trying to submit a request to review these denied reports, it appears I have been shadow banned in some way or another, as only an error message pops up.
Instagram is allowing these to run on its platform. Intentional or not, this is ridiculous, and Instagram should be held accountable for allowing malicious websites to advertise their scams there.
For a platform of this scale, this is completely unacceptable. The scams are blatant, and I have no idea how Instagram's report bots and staff are missing them.
So what they are saying is that they are willing to accept liability, and thus be open to lawsuits over this, since they know about the scams but claim the ads do not break community guidelines.
Companies serving ads should bear at least partial liability for them. If they can't afford to review them all, then maybe they are too big, or their business model just isn't as viable as they pretend it is.
Like many, I've reported lots of stuff to basically every social media outlet, and nothing has been done. Most surprising, a woman I know was being harassed by people setting up fake accounts of her. Meta did nothing, so she went to the police...who also did nothing. Her MP eventually got involved, and after three months the accounts were removed, but by that point the damage had gone on for about two years.
As someone who works in tech, it's obvious why this is such a hard problem: it requires actual people to review the content, understand its context, and resolve reports in a timely and efficient manner. That doesn't scale on a platform with millions of posts a day, because it takes thousands (if not more) of people to triage and action everything, which costs a ton of money, and tech companies have been trying (and failing) to scale it for decades. I maintain that if someone can reliably solve this problem (in a way users are happy with), they'll make billions.
Not that this helps anyone, but I gave up Instagram the day Facebook bought it. I don't regret it and my mental health is better for it. Using Instagram made me depressed as hell.
I doubt they're missing them. They simply don't care, and will continue not to care until something happens that makes the money generated by the ads not worth it.
Enshittification has become the new way of life for tech firms like Meta.
They lay off workers and decrease user safety, because that leads to more ad buys. This year’s record profits need to exceed last year’s record profits, even though a fourth of you are fired. More profit, or else…
I report lots of scam ads and leave comments calling them out. I’ve had Meta or YouTube take down maybe one or two of the hundreds I’ve reported. But I’ve had a ton of my comments removed as “hate speech” (stuff like pointing out an NFT collection was using stolen artwork). We are not their customers - advertisers are. The people who made this ad are the people who paid Meta - why would they take it down?
It is exactly because Instagram is at the scale that it is that moderation is so difficult. Facebook has relied on bots to moderate for so long because of that scale, and bots specifically designed to detect AI-generated content are really not feasible without introducing a ton of false positives, since the Instagram of the 2020s at its core IS celebrity/influencer advertisement, and there is honestly very little that differentiates "content" from "spam" there.
Since influencers will be among the first to be automated away by machines, I just don't see a point in having an Instagram account any longer. The inevitable conclusion of creating a fake version of your life on Instagram is being replaced by a machine that can fake it more efficiently.
Facebook had no problem letting pedophiles distribute child abuse material on its platform, letting terrorists and Nazis organize events there, or allowing deceptive political ads that swayed the votes of democratic nations.
Why would they give any fuck about fake MrBeast ads?
There are different standards for the users and for the people who give Meta money. It’s sad but true, and it’s why I think moderation is a SIGNIFICANT concern when considering federating with Threads.
@Zaderade The internet is flooded with AI-generated ads; it's crazy. I was using my sister's phone for a moment and an ad popped up inside an app: an obviously AI-generated image of a singer with the lyrics of the song. Nothing compared to a scam, but still. Another example is my mother: she was using YouTube Shorts and her feed was flooded with AI-generated videos where the "person", voice, background, everything was fake.
And then they ask why people use ad blockers and alternative clients to consume content.
P.S.: I have installed alternative clients, ad blockers, and all of that on their phones; I have told them about it and taught them how to avoid all this crap, but they don't seem to care. They love ads and all this crap. (It's more of a habit thing, I would say, but yeah.)
Somehow, someone reactivated an old Facebook account of mine, which was dormant for like a decade. I reached out to Facebook support and said that someone was using my old account to post diet ads. Their response? "We see nothing wrong here, so we're not going to do anything about it." 🤦♂️
At this point I'm convinced Meta either gets paid under the table to keep that shit up, or (probably more likely) they make so much money off the sheer volume of AI scam ads that they just don't care.
Every time I see "Meta's product didn't remove a reported malicious post", I just think it's a fitting punishment for users, and their egos, for wasting their time on these shitty platforms. 😅
Man. I blocked them with my Pi-hole because I got tired of max-volume jump scares whenever I clicked on a link to their site. Guess they're staying blocked indefinitely.
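For anyone who wants to do the same, a DNS-level block with Pi-hole looks roughly like this. This is a sketch assuming Pi-hole v5's CLI (the subcommands changed in v6), and the domain list here is just an example, not exhaustive:

```
# Block exact domains (v5 syntax)
pihole -b facebook.com instagram.com

# Or block the whole domain tree with wildcard entries,
# which also catches subdomains like www. and cdn. hosts
pihole --wild facebook.com
pihole --wild instagram.com
```

Note this only blocks DNS lookups on your own network; apps that pin IPs or use a different DNS resolver can slip past it.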
Meta's "guidelines" are basically: does this content somehow stop us from making money?
The answer is generally no. If people stopped using the platform because of its poor handling of these kinds of situations, I guess that would affect them. Maybe?
I don't think any report I've ever made on a social media platform has been accepted. I once reported an account on TikTok named "[swastika symbol] FATHERLAND [swastika symbol]" that posted holocaust denial and genocidal content, and got back "no violation detected". The same was true when I reported similar accounts on Twitter (even pre-Musk) and on Facebook. I don't even know what the point of the report feature on those sites is; I've never heard of it working.