I am in complete agreement with this. While you can currently tell what's AI, it won't be long before we're scratching our heads wondering which way is up and which way is down. Hell, I saw an AI-generated video of a cat cooking food. It looked real, sort of.
Maybe all digital content just shouldn't be trusted. It's like some kind of demon-realm or something. Navigable by the wise, but for common fools like you and me, perilous. Full of illusion.
Political posts should have a tag as well, so people can filter them out. People just use Bluesky, Pixelfed, ... instead of Lemmy because of all the politics here.
I do enjoy some types of AI content, I do not enjoy others. Same as any other type of content. So that tag would be useless for my personal preferences.
Anyway, the NSFW tag exists not for moral reasons, but to avoid those images showing when you are in an environment that's not proper for them (basically so it doesn't look like you are watching porn at work). That rationale makes no sense for AI content, so I don't see the point besides some kind of persecution driven by a particular ideology. So I don't support it.
Sure. Only problem is, it's a people issue. Some people making AI-generated content may be honest and willing to abide by such a rule, but most are proud to not even read the rules and just blast shitty slop left and right. For this second category of people, when you point it out to them, a very small percentage go "oh, sorry". The vast majority just keep posting until blocked.
Granted, this experience mostly stems from every media-posting site out there, so it may be a bit biased…
That might work for now, while those of us who know what to look for can still readily identify AI content, but there will be a time when nobody can tell anymore. How will we enforce the tagging then? Bad actors will always lie anyway, and some will accidentally post it without knowing it's AI.
I think they should add a tag for it anyway, so those who are knowingly posting AI stuff can tag it. But I fear that in the next few years AI images and videos will be inescapable and impossible to identify reliably, even for people who are usually good at picking out altered or fake images and videos.
Adobe is trying for the opposite: content authenticity with digital signatures to show something is not AI (I've been having conversations with them on this).
Text, sure. But I don't get the hate towards AI generated images. If it's a good image and it's not meant to mislead, I am completely fine with AI content. It's not slop if it's good.
A lot of people seem to think that all AI art is low-effort garbage, which is just not true. There can be a lot of skill put into crafting the correct prompt to get the image you want from an image generator, not to mention the technical know-how of setting it up locally. The "AI art is not art" argument to me doesn't sound any more substantiated than "electronic musicians aren't musicians, go learn a real instrument" or "photographers aren't really artists, all they do is push a button".

But regardless, I agree that we need good tagging, or as @ThatWeirdGuy1001 said, different communities. Even though the output looks similar, actually drawing things and wrangling prompts are two completely different skillsets, and the way we engage with the artistic product of those skills is completely different. You wouldn't submit a photo you took to a watercolor painting contest. Same with AI art and non-AI art.
Anyway, just thought I'd share my opinion as an AI non-hater.
Definitely, if only to prevent it from being normalized. Just like crypto, flying cars, psychics, and MLM businesses, this shit will fade into the domain of low-skill grifters.