Should there be a law mandating all AI generated content be tagged?
Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are getting, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. Big tech would comply immediately (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better. I don't see many downsides.
No, mostly because I'm against laws that are literally impossible to enforce. And it'll only get harder to enforce as the years pass.
I think a lot of people will get annoyed at this comparison, but I see a lot of similarity between the attitudes of the "AI slop" people and the "we can always tell" anti-trans people, in the sense that I've seen so many people from the first group accuse legitimate human works of being AI-created (and obviously we've all seen how often people from the second group have accused AFAB women of being trans). And just as those anti-trans people actually can't tell for a huge number of well-passing trans people, there are a lot of AI-created works out there that are passing for human-created works en masse, without giving off any obvious "slop" signs. Real people will get (and are getting) swept up and hurt in this anti-AI reactionary phase.
I think AI has a lot of legitimately decent uses, and I think it has a lot of stupid-as-shit uses. And the stupid-as-shit uses may be in the lead for the moment. But mandating tagging AI-generated content would just be ineffective and reactionary. I do think it should be regulated in other, more useful ways.
I definitely agree with this. If this doesn't happen, I can at the very least see the journalism industry developing its own opt-in standard for image signing.
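To make that concrete, here's a minimal sketch of what opt-in image signing could look like, using Python's `cryptography` package with Ed25519 keys. The scheme, the file names, and the idea of a newsroom publishing its verification key are my assumptions, not any existing standard:

```python
# Sketch of opt-in image signing (assumed scheme, not a real standard).
# A newsroom signs the image bytes with its private key; readers verify
# against the newsroom's published public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key would be loaded from secure storage, not generated inline.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("photo.jpg", "rb") as f:  # hypothetical image file
    image_bytes = f.read()

# The signature would be distributed alongside the image.
signature = private_key.sign(image_bytes)

# Verification raises InvalidSignature if the bytes were altered after signing.
public_key.verify(signature, image_bytes)
print("signature valid")
```

One design note: a signature like this proves who published the image and that it hasn't changed since, not how it was made; for journalism, that's arguably the more useful guarantee anyway.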
I'm not against such a law in theory, but I have many questions about how it would be implemented and enforced. First off, what exactly counts as AI-generated? AI features are being added to more and more areas, and I could certainly envision a future, a few years from now, in which nearly all photos taken with high-end phones are altered by AI in some way.
After that, who exactly is responsible for ensuring that things are tagged properly? The individual who created the image? The software that may have done the AI processing? The social media site that the image was posted on?
If the penalties are harsh for not tagging an image as AI-generated, what's to stop sites from just adding a blanket disclaimer saying that ALL images on the page were generated by AI?
Just like what happens with companies slapping Prop 65 warnings on products that don't actually need them, out of caution and/or ignorance.
Yup. There should also be a law requiring all photography, specifically of people, that has been altered/photoshopped to be tagged, to remind us that the beauty standards being shoved down our throats are unrealistic.
Legally mandating watermarks on all AI-generated content is a bad idea.
It's good practice for these companies to add a watermark, but when you add a "legal" requirement, you're opening up regular artists/authors to getting dragged through the legal system simply because someone (or some corporation) suspects that an AI tool was used at some point in the work's creation.
Force the AI models to embed some kind of metadata in all their output. Training AI models is a massive undertaking; it's not like they can hide what they're doing. We know who is training these models and where their data centers are, so a regulatory agency would certainly be able to force them to comply.
In the US this could be done through the FCC; in other countries the power can be vested in whatever regulatory bodies control communications, broadcasting, etc.
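As an illustration of what machine-readable tagging could look like, here's a minimal sketch that embeds an AI-provenance flag in a PNG's metadata using Pillow. The `ai_generated` key and the generator field are assumed conventions; a real mandate would need a standardized schema:

```python
# Sketch: embedding an AI-provenance flag in PNG text metadata with Pillow.
# The key names below are assumptions, not an existing standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("output.png")  # hypothetical model output

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # assumed field

img.save("output_tagged.png", pnginfo=meta)

# Reading the tag back:
tagged = Image.open("output_tagged.png")
print(tagged.text.get("ai_generated"))  # -> "true"
```

As the thread already concedes, a re-encode or screenshot strips this instantly, so the value is in defaults and deterrence, not tamper-proofing.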
No, just legislate that all AI companies have to publish every single source they used for their training data, plus proof they have permission/licenses to use it. If it's later shown that they used a source and didn't list it, they can be fined and sued for a percentage of the company's revenue.
All the copyright holders of those sources can then sue the AI companies for infringement or retroactive licenses.
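A disclosure mandate like that is mostly a data-format problem. Here's a minimal sketch of what one published training-source entry might contain; every field name here is an assumption, since no such schema exists:

```python
# Sketch of a hypothetical training-source manifest entry.
# All field names are assumptions; no real disclosure schema exists yet.
from dataclasses import dataclass

@dataclass
class TrainingSource:
    url: str            # where the data was obtained
    license: str        # claimed license or permission basis
    license_proof: str  # link to the license text or agreement
    sha256: str         # hash of the exact snapshot used, for later audits

entry = TrainingSource(
    url="https://example.com/corpus",           # placeholder
    license="CC-BY-4.0",
    license_proof="https://example.com/terms",  # placeholder
    sha256="<hash of snapshot>",                # placeholder
)
```

The hash is what would make enforcement workable: if an unlisted work later surfaces in model output, auditors could check it against the declared snapshots.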
Until they can no longer tell and slide into completely baseless, vibes-based identification. Then most people will just get bored and move on, and a small but vocal online group of tinfoil-hat equivalents will base their entire personality on "tracking" the AI.
I'd rather have us normalize sourcing things, preferably with the whole trace.
Honestly, I don't really care if something has been written by an underpaid intern or by an LLM. I don't think hallucinations are worse than lies and propaganda, and those two things are what I want to see fought.
And I think the issue is wrongly framed. Fast-forward 5 years (or 5 months, who knows): everyone will have the equivalent of Claude 4 running locally on their phone, and they'll ask it for news updates on their specific interests, from a model that knows what its interlocutor already knows, in the tone they configured.
"Fucking Putin at it again, this time hit Kiyv with missiles, 50 sent, half went through." and it will know to give you background when you dont have it "Well it turns out that there riots broke up in Nowheretown in France over a new highway project, with a local ecologist group that brought about a thousand militants across Europe to oppose police. Cool clashes videos if you want."
We won't read "news" through generic one-size-fits-all texts; we will be source-hungry and will have agents digest hundreds of pages of raw info into what we need.
Traceability of information will be what matters the most.