The AI Deepfakes Problem Is Going to Get Unstoppably Worse
Deepfakes are blurring the lines of reality more than ever before, and they're likely going to get a lot worse this year.
It is open source technology. There is no way to stop it, only to push it underground. The solution is simply healthy skepticism. In reality, it may be a good thing: now a political image must be presented with much more transparency and more conservative rhetoric, lest one's actions and message become indistinguishable from satirical meme culture.
All of these authoritarian-promoting articles about AI regulation originate from the oligarch Altman's quest to monopolize the next decade of technology using populism.
AI has very real risks, but no one is reporting on them effectively: for example, the Israeli AI weapons systems and the developments surrounding drones in Ukraine right now.
Images of the orange insurrectionist do not matter. No AI is crazy enough to say worse things than that jackass or anyone else in the Red Jihadi Erotic Donkey show.
A lot of harmful AI regulations have been proposed. Some seem designed to benefit well-heeled interest groups at the expense of society, while others seem to be pure populism. Watermarks are an especially worrying example of the latter.
Whether "regulation" in the abstract is harmful is not a sensible question to me, probably because my ideological outlook is different from that of most people here, which in turn is probably because my cultural background is different. I'll resist the urge to give a long, rambling explanation.
People need to be more media literate and more skeptical of news stories instead of taking them at face value, regardless of deepfakery. So many articles that pass as "news" are filled with opinion and adjectives designed to elicit an emotional response.
People need to learn to look at a piece of information and ask questions like:
Who wants me to be reading this?
What emotions (if any) is this trying to elicit?
What objective information can be taken from this story?
What are the sources for that objective information? Are they reliable?
Etc. Etc. Etc.
Even a Fox News article can offer some insight into the goings-on if you can parse the information from the spin. Deepfakes are just going to be another level of spin, but if people are informed enough, they'll be able to logically differentiate between a real news story and a damning fake video.
However, that doesn't solve the age-old problem of willfully ignorant people and confirmation bias...
You're not wrong, but your suggestion is completely disconnected from reality. You cannot fix any problem in the world by appealing to the potential in people to behave intelligently and rationally. As a group, they never will. As a group, they are monkeys without tails, incapable of rational thought and behavior.
One big problem is going to be that political supporters have been more than willing to dismiss anything they don't like about their candidate as a "deep fake," even though that has only recently become possible. You could have an authentic video of their favorite candidate telling everyone how stupid their supporters are, and those supporters will never believe it (or, vice versa, easily detectable fakes made to smear a candidate will be gobbled up by the opposition).
Yeah, we're going to see a lot of disgusting stuff like fake porn, but that was already being made with still photos, so of course we're going to start seeing videos now. I think it will be interesting to see what happens in Hollywood, where actors' voices are already being used without their consent. If laws get passed to discourage such things (and we've just seen the FCC ban the use of faked politicians' voices), they can also be used to curb other fakes of real people. I think that will help, but in the meantime it's still the Wild West of AI-generated content.
The world is being ripped apart by AI-generated deepfakes, and the latest half-assed attempts to stop them aren’t doing a thing.
Federal regulators outlawed deepfake robocalls on Thursday, like the ones impersonating President Biden in New Hampshire’s primary election.
“They’re here to stay,” said Vijay Balasubramaniyan, CEO of Pindrop, which identified ElevenLabs as the service used to create the fake Biden robocall.
The Federal Communications Commission (FCC) outlawing deepfake robocalls is a step in the right direction, according to Balasubramaniyan, but there is little clarity on how it will be enforced.
OpenAI introduced watermarks for DALL-E's images this week, both as a visible mark and as data embedded in each photo's metadata.
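For anyone curious what the metadata half of such a watermark involves, here is a minimal sketch of inspecting an image's embedded metadata in Python. It assumes the provenance scheme works roughly like C2PA "content credentials" stored alongside ordinary image metadata; the file name and the way the fields are printed are illustrative assumptions, not a documented OpenAI API.

```python
# Minimal sketch: dump an image's embedded metadata, where provenance
# information (e.g. C2PA "content credentials") may be stored.
# The file name below is hypothetical; properly verifying C2PA data needs a
# dedicated tool (such as c2patool) -- this only prints the raw fields.
from PIL import Image  # pip install pillow

def dump_metadata(path: str) -> None:
    img = Image.open(path)
    # Format-specific metadata (PNG text chunks, etc.) lands in img.info
    for key, value in img.info.items():
        print(f"info[{key}] = {str(value)[:80]}")
    # EXIF data, if present (typical for JPEG/WebP)
    for tag_id, value in img.getexif().items():
        print(f"exif[{tag_id}] = {str(value)[:80]}")

dump_metadata("dalle_output.png")  # hypothetical file name
```

The obvious caveat, which the article's skepticism implies, is that metadata like this survives only until someone screenshots or re-encodes the image, so watermarking alone is unlikely to settle the problem.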
There is some hope that technology and regulators are catching up to address this problem, but experts agree that deepfakes are only going to get worse before they get better.