In about a year, a quarter of the reports everywhere will be like this
In about two years, there will be entire programs devoted to deciding whether videos, speeches or images are real... and it will just get harder and harder to tell what is real and what isn't
In about five years, no one in the public will be sure what is real and what isn't on the internet, in the news, on social media or anywhere.
Swedish public service broadcasting actually already has a program where you can send in... anything, I guess... and if they get enough requests they'll investigate it
I'm gonna hazard a guess that you are right about the trend, give or take on the timing. However, there will be someone who claims some authority on what is and isn't accurate.
It's been the job of journalists up until now, but this is how they'll finally supplant all of news media with their own (a golden Trump head of truth next to the post means Grok has looked it over and says it's real).
That, or an open-source tool will be developed to detect AI media, and we'll be able to run it locally to confirm. This is my preferred outcome. Maybe I need to start doing some reading.
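If that ever happens, I'd picture something like this: a toy sketch using the Hugging Face transformers pipeline, where the model id is completely made up (I don't actually know of a good open-weights detector), and classifiers like this are easy to fool anyway.

```python
# Toy sketch of the "run an AI-media detector locally" idea.
# The model id below is a placeholder, not a real or recommended detector.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="some-org/ai-image-detector",  # hypothetical open-weights model
)

# Run it on a frame grabbed from the suspicious video.
results = detector("suspicious_frame.jpg")
for r in results:
    print(f"{r['label']}: {r['score']:.2f}")  # e.g. "artificial: 0.93"
```

Even then you'd only get a probability, not proof, which is kind of the whole problem.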
I'm sort of surprised companies haven't started to implement some hidden visual verification in their broadcasts, so they can prove that deepfakes of them aren't real.
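Even something crude would help: sign a hash of each frame with the broadcaster's private key, publish the public key, and let anyone check a clip against it. Rough sketch of the idea (not any real broadcast standard, just to illustrate):

```python
# Toy sketch: the broadcaster signs each frame's hash with a private key and
# publishes the public key, so anyone can check whether a clip is untouched.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Broadcaster side: generate a keypair; the public key gets published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_frame(frame_bytes: bytes) -> bytes:
    """Sign the SHA-256 hash of a raw frame; ship the signature with the frame."""
    return private_key.sign(hashlib.sha256(frame_bytes).digest())

def verify_frame(frame_bytes: bytes, signature: bytes) -> bool:
    """Viewer side: check a frame against the broadcaster's published key."""
    try:
        public_key.verify(signature, hashlib.sha256(frame_bytes).digest())
        return True
    except InvalidSignature:
        return False

frame = b"raw pixel data of one broadcast frame"  # placeholder frame data
sig = sign_frame(frame)
print(verify_frame(frame, sig))                   # True: frame is untouched
print(verify_frame(frame + b"tampered", sig))     # False: any edit breaks it
```

The obvious catch is that any re-encode, crop, or screen capture changes the bytes and breaks the signature, so it only proves which copies are the original, not which ones are fake.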
They won't do that, because lots of customers won't pay for an AI service if they can't use the material to trick and defraud. Propaganda and misinformation are among the biggest selling points for AI. Also, they don't want their company's watermark on the demented, often illegal, sexual material that people make with these things.
This seems pretty inevitable, since it should really be journalism in particular that exposes and names such wrongdoings.
In a democracy, it theoretically has an important corrective function, but independent investigative journalism is already virtually non-existent (too expensive), and so in practice it is becoming increasingly unlikely that abuses will be remedied and those responsible prosecuted.
This leads to a situation that is already reality in the US, where serious journalism has largely been replaced by mindless entertainment or even propaganda, and I also think it's only going to get worse.
What can we do? Society is beyond cooked. Your friends, neighbors, colleagues, teachers, lawyers, many, many people cannot identify a random still AI image right now, never mind a fictional cohesive motion picture. They are unaware of how far the tech has progressed and the extent to which it's been weaponized. What actually can be done?
Oklahoma state agencies are using bot accounts on Facebook to support deeply unpopular and illegal actions, like a turnpike that they are going to build without doing any form of environmental impact study (and lots of eminent domain bullshit).
Why do you think this video was fake/AI generated?
edit: To be clear, I'm not suggesting the story is accurate. I'm questioning why OP thinks this specific video is AI generated, as opposed to a real video being taken out of context.
I zoomed in on one frame - it's kind of a mess of abstract lines and smears, but it could also be AI enhancement from a modern phone, imo. I'd like additional supporting evidence as well.
I don't know anything about how cameras are "enhancing" pictures/videos, but in your still I would wonder: 1) how is this wire supposedly connected to these randomly placed fence posts, and 2) what is happening with the arms and faces all through the wiring?
Any wire I've ever used would bend and be easily pulled down by a person trying to climb over it. It can't be electric, because people are all touching it.
I completely suck at spotting AI stuff, but Hamas soldiers can't take three steps outside without being bombed, and these morons want us to think they had the time or resources to set up something like that? The claim is fake either way.
Tbh, it doesn't matter; the video doesn't show anything. It's just some random guy and people cheering. It could have been shot in Croatia for all we know.