YouTube now requires creators to disclose when AI-generated content is used in videos
That's a win, but it would need to be enforced, which is harder to do.
I'm waiting for the constant big drama when Big Popular Youtuber of the Week gets accused of using/not using AI and it turns out the opposite is true.
That's good, but soon every video will partially be AI because it'll be built into the tools. Just like every photo out there is retouched with Lightroom/Photoshop.
To your point, Samsung's CEO said there is no such thing as a real photo when they were criticized for heavily adjusting pictures on some of their newer cameras a year or two ago. Google's phones have had lighting and other effects that are helpful and make great improvements to shots (fixing lighting, removing photobombers, etc.) that most people wouldn't call AI, but that's exactly what it is.
None of this is AI-specific. Youtube wants you to label your videos if you use "altered or synthetic content" that could mislead people about real people or events. 99% of what Corridor Crew puts out would probably need to be labeled, for example, and they mostly use traditional digital effects.
Creators must disclose content that:
Makes a real person appear to say or do something they didn’t do
Alters footage of a real event or place
Generates a realistic-looking scene that didn’t actually occur
So, they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, the AI label is not required?
Generates a realistic-looking scene that didn’t actually occur
Doesn't this describe, like, every mainstream live action film or television show?
this is going to be devastating for all the prank youtube channels
Wouldn't this enable, for example, Trump claiming he didn't make the "bloodbath" comment, calling it a deepfake, and telling Youtube to remove all the news coverage of it? More generally, what stops someone from abusing this system?
I'm sure that given the 99.99% ethical nature of AI enthusiasts and users that they will absolutely comply with this voluntary identification! /sarcasm
It’s a good first step. If claiming your AI video is real gets more views, I’m curious whether the extra views outweigh the cost of being caught.
Will this apply to advertisers, too? They don't block outright scams, so probably not. Money absolves all sins.
What? Didn't you know the government is giving away $6,400.00 to everybody, if only you claim it by filling out this form on my sketchy website with all your personal info...?
tbf, a lot of ads are already misleading as it is, so pointing out AI isn't going to change their perception much.