A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious reasons.
Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led the uniformed McCorkle out of the theater in handcuffs.
How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate the exploitation of real children all the way down.
So no, you are making false equivalence with your video game metaphors.
A generative AI model doesn't require the exact thing it creates in its datasets. It most likely just combined regular nudity with a picture of a child.
That's not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.
Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.
That's a whole other thing than the AI model being trained on CSAM. I'm currently neutral on this topic, so I'd recommend replying to the main thread.
It's not CSAM in the training dataset, it's just pictures of children/people that are already publicly available. That goes to the copyright side of AI, not to illegal training material.
It’s images of children used to make CSAM. No amount of mental gymnastics can change that, nor the fact that those children’s consent was not obtained.
Why are you trying so hard to rationalize the creation of CSAM? Do you actually believe there is a context in which CSAM is OK? Are you that sick and perverted?
Because it really sounds like that’s what you’re trying to say, using copyright law as an excuse.
It's every time with you people; you can't have a discussion without accusing someone of being a pedo. If that's your go-to, it says a lot about how weak your argument is, or what your motivations are.
I'm not neutral about child porn, I'm very much against it, so stop trying to put words in my mouth. I'm saying this kind of use of AI could be in the very same category as loli imagery, since it's not real child sexual abuse material.
I just hope the models aren't trained on CSAM, which would make generating stuff to fap to "ethically reasonable," as no children would be involved. And I hope that those who have those tendencies can be helped in a way that doesn't involve chemical castration or incarceration.
Fantasising about sexual contact with children indicates that this person might groom children for real, because they have a sexual interest in doing so. As someone who was sexually assaulted as a child, it's really not something that needs to happen.
Seems like, then, fantasizing about shooting people or carjacking indicates that person might do that activity for real too. There are a lot of carjackings nowadays, and you know GTA is real popular. Hmmm. /s But seriously, I'm not sure your first statement has merit, especially when you look at where to draw the line: anime, manga, oil paintings, books, thoughts in one's head.
If you're asking whether anime, manga, oil paintings, and books glorifying the sexualization of children should also be banned, well, yes.
This is not comparable to glorifying violence, because real children are victimized in order to create some of these images, and the fact that it's impossible to tell makes it even more imperative that all such imagery is banned, because the existence of fakes makes it even harder to identify real victims.
It's like you know there's an armed bomb on a street, but somebody else has filled the street with fake bombs, because they get off on it or whatever. Maybe you'd say making fake bombs shouldn't be illegal because they can't harm anyone, but now suddenly they have made the job of law enforcement exponentially more difficult.
If you want to keep people who fantasise about sexually exploiting children around your family, be my guest. My family tried that, and I was raped. I didn't like that, and I have drawn my own conclusions.
Yeah, and what if you want to keep around people who fantasize about murdering folks? You can't say one thing is a thing without saying the other is. I'm sorry you were raped, but I doubt it would have been stopped by banning Lolita.
You can call it a strawman, but whether the evil is killing folks or raping folks, the treatment should be the same when discussing the non-actual versus the actual. You can say this thing is a special case, but when it comes to freedom of speech, which covers anything not based in actual events (writing, speaking, thinking, art), special circumstances become a real slippery slope (which can itself be called a fallacy, though like all "fallacies" that depends a lot on what backs it up and how it's presented).
Well, the image generator had to be trained on something first in order to spit out child porn. While it may be that the training set was solely drawn/rendered images, we don't know that, and even if the output were in that style, it might very well be photorealistic images generated from real child porn and run through a filter.
Yes, exactly. People excusing this with "well, it was trained on all public images" are just admitting you're right: there is a level of harm here, since real materials are used. Even if they weren't being used, or if it was just a cartoon, the morality is still shaky because of the role porn plays in advertising. We already have laws about advertising because it's so effective, including around cigarettes and prescriptions. Most porn, ESPECIALLY FREE PORN, is an ad to get you to buy other services. CP is not excluded from this rule; there's no free lunch, so to speak. These materials are made and hosted for a reason.
The role that CP plays in most countries is difficult. It is used for blackmail. It is also used to generate money for countries (intelligence groups around the world host illegal porn ostensibly "to catch a predator," but then why is it morally OK for them to distribute these images and no one else?). And it's used as advertising by actual human trafficking organizations. Similar organizations exist for snuff and gore, btw, and of course animals, and any combination of those three. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy's Destruction and Peter Scully?
So it's important not to allow these advertisers to combine their most famous monkey torture video with enough AI that they can claim it's AI-generated, when it's really just an ad for their monkey torture productions. And even if NONE of the footage was from illegal or similar events and was 100% thought up by AI, it can still be used as an ad for these groups if they host it. Cartoons can be ads, of course.
Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we'd roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate, even though some of the affluent consider it the equivalent of eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.
I hope you didn't seriously think the prompt for that image was "corn dog" because if your understanding of generative AI is on that level you probably should refrain from commenting on it.
Prompt: Photograph of a hybrid creature that is a cross between corn and a dog
But we do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.
This is because it doesn't need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two, it will gladly do so.
But we do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.
Yeah, except photoshop and artists exist. And a quick google image search will find them. 🙄
And this proves that AI can't generate simulated CSAM without first having seen actual CSAM how, exactly?
To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
I wasn't the one attempting to prove that. Though I think it's definitive.
You were attempting to prove it could generate things not in its dataset, and I have disproved your theory.
To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
To me, the takeaway is that you know less about AI than you claim. Much less. 'Cause we have actual instances, and many, where CSAM is in the training data. Don't believe me?
You were attempting to prove it could generate things not in its dataset, and I have disproved your theory.
I don't understand how you could possibly imagine that pic somehow proves your claim. You've made no effort to explain yourself; you just keep dodging my questions when I ask you to. A shitty Photoshop of a "corn dog" has nothing to do with how the image I posted was created. It's a composite of corn and a dog.
Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn't in its training data.
'Cause we have actual instances, and many, where CSAM is in the training data.
If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. That is, however, not what we're talking about here.
Intent is defined as intention or purpose. So I'll rephrase for you: the purpose of playing an FPS is to play a game. The purpose of playing GTA is to play a game.
The purpose of AI generated CSAM is to watch children being abused.
I don't think that's fair. It could just as well be said that the purpose of violent games is to simulate real life violence.
Even if I grant you that the purpose of viewing CSAM is to see child abuse, it's still less bad than actually abusing children, just like playing violent games is less bad than participating in real violence. Also, despite the massive increase in violent games and movies, actual violence is going down, so implying that viewing such content would increase cases of child abuse is an assumption I'm not willing to make either.
The purpose of a game is to play a game through a series of objectives and challenges.
Even if I grant you that the purpose of viewing CSAM is to see child abuse
Very curious to hear what else you think the purpose of watching CSAM might be.
it’s still less bad than actually abusing them
"less bad" is relative. A bad thing is still bad. If we go by length of sentencing then rape is 'less bad' than murder. that doesn't make it 'not bad'.
so implying that viewing such content would increase the cases of child abuse is an assumption I’m not willing to make either.
OK?
I didn't claim that AI CSAM increased anything at all. Literally all I've said is that the purpose of AI generated CSAM is to watch kids being abused.
Neither did I claim that violent games lead to violence. You invented that strawman all by yourself.
A person said that there is no victim in creating simulated CSAM with AI, just like there isn't one in video games, to which you replied that the difference is intention: the intention in playing violent games is to play games, whereas with viewing CSAM the intention is to view abuse material.
Correct so far?
Of course the intent is that. For what other reason would anyone want to see CSAM than to see CSAM? What kind of argument/conclusion is this supposed to be? How else am I supposed to interpret this than as you advocating for the criminalization of creating such content despite the fact that no one is being harmed? How is that not pre-emptively punishing people for crimes they've yet to even commit? Nobody chooses to be born with such thoughts or desires, so I don't see the point of punishing anyone for that alone.
I've literally got no idea what you're talking about or what your point is. Are you saying this person hasn't committed a crime? Because that's incorrect. Lots of jurisdictions have laws prohibiting things like AI-generated CSAM imagery, deepfake porn, and a whole raft of other things. 'Harm' doesn't begin and end with something done to an individual for a lot of crimes.
Are you saying this person hasn’t committed a crime?
Yes. And if the law is interpreted in a way that makes it illegal, and the person is punished for it, then that's a moral injustice and the kind of senselessness we as humans should grow out of. The fact that this "crime" has no victim is the whole point of why punishing it makes no sense.
CSAM is illegal for a very good reason; producing it without abusing children is by definition impossible. By searching for and viewing such content, the person becomes part of the causal chain that leads to it being produced in the first place. By criminalizing it we attempt to deter people from looking for it and thus bringing down the demand and disincentivizing the production of it.
Using AI that is not trained on such content is outside this loop. There is literally nobody being harmed if someone uses it to create depictions of such content. It's not actual CSAM it's producing; by the very definition it cannot be, any more than shooting a person in a video game is murder. CSAM stands for Child Sexual Abuse Material (I hate even saying that), in other words, proof of the crime having happened. AI-generated images are fiction. Nobody is being harmed. It's just a more photorealistic version of a drawing. Treating it as actual CSAM in court is insanity.
Now, if the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. That is, however, not what we're talking about here.
Not a great comparison, because unlike with violent games or movies, you can't say there is no danger to anyone in allowing these images to be created or distributed. If they are indistinguishable from the real thing, it becomes impossible to identify actual human victims.
There's also a strong argument that the availability of imagery like this only encourages behavioral escalation in people who suffer from the affliction of being a sick fucking pervert pedophile. It's not methadone for them, as some would argue; it's fueling their addiction, not replacing it.