FBI Arrests Man For Generating AI Child Sexual Abuse Imagery
The headline/title needs to be extended to include the rest of the sentence
Yes, this sicko needs to be punished. Any attempt to make him the victim of "the big bad government" is manipulative at best.
Edit: made the quote bigger for better visibility.
That's a very important distinction. While the first part is, to put it lightly, bad, I don't really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.
All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.
Sure is. I report the ones I come across as clickbait or misleading titles, explaining the parts left out... such as this one, where those 7 words change the story completely.
Whoever made that headline should feel ashamed for victimizing a groomer.
It's worth mentioning that in this instance the guy did send porn to a minor. This isn't exactly a cut and dry, "guy used stable diffusion wrong" case. He was distributing it and grooming a kid.
The major concern to me is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.
For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, "artistic" styles, but they can generate semi-realistic images.
Now, let's say a criminal group uses novelai to produce CSAM of real people via the inpainting tools, and let's say the FBI casts a wide net and begins surveillance of novelai's userbase.
Is every person who goes on there and types "Loli" or "Anya from spy x family, realistic, NSFW" (that's an underage character) going to get a letter in the mail from the FBI? I feel like it's within the realm of possibility. What about "teen girls gone wild, NSFW"? Or "young man, no facial or body hair, naked, NSFW"?
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It's a dangerous mix, and it throws the whole enterprise into question.
The major concern to me is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.
https://www.ic3.gov/Media/Y2024/PSA240329 https://www.justice.gov/criminal/criminal-ceos/citizens-guide-us-federal-law-child-pornography
They've actually issued warnings and guidance, and the law itself is pretty concise regarding what's allowed.
(8) "child pornography" means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where-
(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.
...
(11) the term "indistinguishable" used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.
If you're going to be doing grey-area things, you should honestly do more than the five minutes of searching I did to find those.
It was basically born out of a Supreme Court case in the early 2000s regarding an earlier version of the law that went much further and banned anything that "appeared to be" or "was presented as" sexual content involving minors, regardless of context. That version could plausibly have been used against young-looking adult models, artistically significant paintings, or things like Romeo and Juliet, which are neither explicit nor vulgar but could be presented as involving child sexual activity. (Juliet's 14, and it's clearly labeled as a love story.)
After the relevant provisions were struck down, a new law was passed that factored in the justices' rationale and commentary about what would be acceptable, giving us our current system of "it has to have some redeeming value, or not involve actual children and plausibly not look like it involves actual children".
Is every person who goes on there and types "Loli" or "Anya from spy x family, realistic, NSFW" (that's an underage character) going to get a letter in the mail from the FBI?
I'll throw that baby out with the bathwater to be honest.
Simulated crimes aren't crimes. Would you arrest every couple that finds healthy ways to simulate rape fetishes? Would you arrest every person who watches The Fast and the Furious or The Godfather?
If no one is being hurt, if no real CSAM is being fed into the model, and if no pornographic images are being sent to minors, it shouldn't be a crime. Just because it makes you uncomfortable doesn't make it immoral.
The part involving actual children is go-to-jail territory, because it involves actual children.
The images don't.
There's as much "child sexual abuse" in a generated image as in a crayon drawing of Bart Simpson fucking Lisa. Anyone's visceral reaction may differ between the two JPGs, but we should treat them the same because they are the same.
America has one of the most militantly anti-pedophile cultures in the world, yet it has far and away some of the highest rates of child sexual assault.
I think AI is going to reveal just how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing victimization on an industrial scale, yet you won't find a tenth of the righteous indignation against organized religion, where there is ample evidence it is happening, as you will against one person producing images that don't actually hurt anyone.
Given the staggering rate of child abuse that occurs in the States, it's pretty clear that Americans are just using child victims as political ammunition (it's next to impossible to convincingly fight off pedo accusations if you're being mobbed) and aren't actually interested in fighting pedophilia.
Most states will let grown men marry children as young as 14. There is a special carve out for Christian pedophiles.
Fortunately, most instances are in the category of a 17-year-old marrying an 18-year-old, and require parental consent and some manner of judicial approval, but the rates of "not that" are still much higher than one would want.
~300k in a 20-year window total, with the older partner being 20 or younger in 74% of cases, the younger partner being 16 or 17 in 95% of cases, and both partners being under 18 in only 14%.
There's still no reason for it in any case, and I'm glad to live in one of the states that said "nah, never needed it."
These cases are interesting tests of our first amendment rights. "Real" CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for "real" images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as "real," we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
It comes back to distribution for me. If they are generating the stuff for themselves, gross, but I don't see how it can really be illegal. But if you're distributing them, how do we know they're not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators' mental health... I don't know. I really don't have an answer, and I don't know how they make it illegal, but it really feels like distribution should be.
It feels incredibly gross to just say "generated CSAM is a-ok, grab your hog and go nuts", but I can't really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that's a slippery slope that leads to "video games cause school shootings" type of logic.
I don't know, it's a very tough thing to untangle. I guess I'd just want to know if someone was doing that so I could stay far, far away from them.
partly because they are obvious fictions
That's it, actually. All sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content, and they will remove it.
Well thought-out and articulated opinion, thanks for sharing.
If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we'd probably still label it as free speech because we "know" it to be fiction.
When a computer rolls the dice against a model and imagines a novel composition of children's images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don't intend to suggest models trained on CSAM either, I'm thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.
Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.
To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself an illegal label to possess, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.
This is tough, the goal should be to reduce child abuse. It's unknown if AI generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children while for others it may satisfy their urges so they don't abuse children. Like everything else AI, we won't know the real impact for many years.
How do you think they train models to generate CSAM?
Some of y'all need to look up what a LoRA is.
I suggest you actually download Stable Diffusion and try for yourself, because it's clear that you don't have any clue what you're talking about. You can already make tiny people, shaved genitals, flat chests, childlike faces, etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.
Does an AI image of Shrek riding an avocado motorcycle imply there's a bunch of images of that, in the data set?
He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”
I think the fact that he was promoting child sexual abuse, communicating with children, and creating communities to distribute the content is the most damning thing, regardless of people's take on the matter.
Umm ... That AI generated hentai on the page of the same article, though ... Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
The image depicts mature women, not children.
Correct. And OP's not saying it is.
But to place that sort of image on an article about CSAM is very poorly thought out
I had an idea when these first AI image generators started gaining traction: flood the CSAM market with AI-generated images (good enough that you can't tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.
Most people downvote the idea on gut reaction, though.
Looks like they might do it on their own.
It's such an emotional topic that people lose all rationality. I remember the Reddit arguments in the comment sections about pedos, some already equating the term with actual child rapists, while others would argue to differentiate, because the former didn't do anything wrong and shouldn't be stigmatized for what's going on in their heads but rather offered help to cope with it. The replies are typically accusations of those people making excuses for actual sexual abusers.
I always had the standpoint that I do not really care about people's fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy & getting their kicks from fictional stuff then I see that as better than using actual real life material, or even getting some hands on experiences, which all would involve actual real victims.
And I think that should generally be the goal here, no? Be it pedos, sadists, sociopaths, whatever: in the end it should not be about them, but about saving potential victims. But people would rather throw around accusations and become hysterical to paint themselves as sitting on their moral high horse (ironically, typically also calling for things like executions or castrations).
Yeah, exact same feelings here. If there is no victim then who exactly is harmed?
My concern is: why would it put them out of business? If we just look at legal porn, there are already huge amounts of it, and the market is still there for new content to be created constantly. AI porn hasn't noticeably decreased the amount produced.
Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.
The market is slightly different, though. Most CSAM is images; with porn there's a lot of video as well as images.
It's also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to a point that it isn't worth it.
No no no guys.
It's perfectly okay to do this as this is "art", not child porn, as I was repeatedly told and downvoted when I stated the fucking obvious.
So if it's art, we have to allow it under the constitution, right? It's "free speech", right?
Well yeah. Just because something makes you really uncomfortable doesn't make it a crime. A crime has a victim.
Also, the vast majority of children are victimized because of the US' culture of authoritarianism and religious fundamentalism. That's why children are far and away most often victimized either by a relative or in a church. But y'all ain't ready to have that conversation.
First of all, it's absolutely crazy to link to a 6-month-old thread just to complain that you got downvoted in it. You're pretty clearly letting this site get under your skin if you're still hanging onto these downvotes.
No, I just... Remembered the thread? Wasn't difficult to remember it. Took me a minute to find it.
This may surprise you but CP isn't something I discuss very often.
I don't lose sleep over people defending CP as "art", nor did it get under my skin. I just think these people are fucking idiots who, for some baffling reason, are trying to defend the indefensible, and I go about my day. I'm not going to do anything about it, but I'm sure glad I don't have such dumb comments linked to a public account with my IP address logged somewhere...
I just raised it to make my point.
I didn't bother reading the rest of your essay. It's pretty clear from the first paragraph where you're going to land.
Does this mean the AI was trained on CP material? How else would it know how to do this?
It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.
AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.
Local model go brrrrrr
Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of them all likely get an offline model, then add their collection of CSAM to it.
Well, some LLMs have been caught with CP in their training data.
Fuckin good job
And the Stable Diffusion team gets no backlash from this for allowing it in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
You can run the SD model offline, so on what service would that user be flagged?
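For what it's worth, hosted services can only screen what passes through their servers. A minimal sketch of the kind of server-side keyword filter a hosted generator might run; the blocklist and function name are hypothetical and purely illustrative:

```python
# Hypothetical sketch of server-side prompt screening on a hosted
# service. A locally run model never touches code like this, which
# is exactly the point: offline, there is no service to do the flagging.
import re

BLOCKLIST = {"loli", "underage"}  # illustrative placeholder terms

def flag_prompt(prompt: str) -> bool:
    """Return True if any blocklisted token appears in the prompt."""
    tokens = re.findall(r"[a-z]+", prompt.lower())
    return any(token in BLOCKLIST for token in tokens)

print(flag_prompt("a castle at sunset, oil painting"))  # False
print(flag_prompt("loli, realistic, NSFW"))             # True
```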
My main question is: how much CSAM was fed into the model for training so that it could recreate more?
I think it'd be worth investigating the training data used for the model.
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren't actually available in the dataset because they had already been removed from the internet.
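To illustrate how that kind of audit works: each image in the dataset is hashed and compared against blocklists of known material maintained by organizations like NCMEC. A rough sketch under those assumptions; real pipelines use perceptual hashes such as PhotoDNA or PDQ (so re-encoded copies still match) rather than the plain SHA-256 shown here:

```python
# Rough sketch of hash-blocklist screening as used in dataset audits.
# Plain SHA-256 is illustrative only; production systems use perceptual
# hashes (PhotoDNA, PDQ) that survive resizing and re-encoding.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder entry; real lists come from NCMEC and similar orgs
}

def screen_dataset(image_dir: str) -> list[Path]:
    """Return paths of files whose hash matches the blocklist."""
    flagged = []
    for path in Path(image_dir).iterdir():
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                flagged.append(path)
    return flagged
```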
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
Approximately zero images, out of a bajillion.
Y'all know this tech combines concepts, right? Being able to combine "Shrek" and "unicycle" does not require prior art for Shrek riding a unicycle. It judges whether an image satisfies the concepts of Shrek and unicycle, and adjusts it to satisfy both constraints. Eventually you get a fat green ogre on half a bicycle.
The database definitely contains children. The database definitely contains pornography. The network does not have moral opinions about why those two goals cannot be satisfied simultaneously.
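If you want to see the concept-combining behavior for yourself, here's a minimal sketch using the diffusers library; any Stable Diffusion checkpoint works, and the model ID is just one common choice:

```python
# Minimal sketch of concept composition with Stable Diffusion via the
# diffusers library. The training set almost certainly contains no image
# of this exact scene; the sampler just has to satisfy both concepts
# ("Shrek", "unicycle") simultaneously during denoising.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("photo of Shrek riding a unicycle").images[0]
image.save("shrek_unicycle.png")
```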
Because the prompts people enter on their own computers aren't the developers' responsibility? Should pencil makers flag people who write bad words?
Isn't there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I'm at work.
What an oddly written article.
Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”
They make it sound like the prompts are important and/or more important than the 13,000 images…
In many ways they are. The image generated from a prompt isn't unique; it's actually semi-random and not entirely in the user's control. The person could argue, "I described what I like, but I wasn't asking it for children, and I didn't think they were fake images of children," and based purely on the image it could be difficult to argue that the image is not only "child-like" but actually depicts a child.
The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
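For anyone unfamiliar, a negative prompt is an explicit "do not render this" instruction fed to the sampler alongside the positive prompt. A benign sketch of the mechanism in diffusers; the model ID and prompts are illustrative:

```python
# Sketch of positive vs. negative prompting in diffusers. Under
# classifier-free guidance the sampler is steered toward the prompt and
# away from the negative prompt at every denoising step, which is why a
# negative prompt is direct evidence of what a user wanted excluded.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of an elderly fisherman, detailed, 85mm",
    negative_prompt="blurry, cartoon, deformed hands",  # excluded concepts
    guidance_scale=7.5,
).images[0]
image.save("fisherman.png")
```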
Having an AI generate 13,000 images does not even take 24 hours (depending on hardware and settings, of course): at roughly 6 seconds per image on a decent consumer GPU, 13,000 images come out to under 22 hours.
Mhm I have mixed feelings about this. I know that this entire thing is fucked up but isn't it better to have generated stuff than having actual stuff that involved actual children?
A problem that I see getting brought up is that generated AI images makes it harder to notice photos of actual victims, making it harder to locate and save them
And doesn't the AI learn from real images?
Well that, and the idea of cathartic relief is increasingly being dispelled. Behaviour once thought to act as a pressure relief for harmful impulsive behaviour is more than likely just a pattern of escalation.
The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.
As a society we should never allow the normalization of sexualizing children.
Interesting. What do you think about drawn images? Is there a limit to how skilled the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.
Is this proven or a common sense claim you’re making?
Actually, that's not quite as clear.
The conventional wisdom used to be, (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they've become more and more convinced that (normal) porn availability in fact reduces sexual assault.
I don't see an obvious reason why it should be different in case of CP, now that it can be generated.
Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.
IIRC it was something like a fraction of a fraction of 1% that was CSAM, with the researchers identifying the images through their hashes but they weren't actually available in the dataset because they had already been removed from the internet.
Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
I didn't know that, my bad.
Yeah, it's very similar to the "is loli porn unethical" debate. No victim, it could supposedly help reduce actual CSAM consumption, etc... But it's icky, so many people still think it should be illegal.
There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Just a point of clarity: an AI model capable of generating CSAM doesn't necessarily have to be trained on CSAM.
I think one of the many problems with AI generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what was AI generated and what isn't.
Banning all of it means authorities don't have to sift through images trying to decipher between the two. If one image is declared to be AI generated and it's not...well... that doesn't help the victims or create less victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there because it can hide amongst all the AI generated stuff. Meaning authorities will have to go through far more images before finding ones with real victims in it. All of it being illegal prevents those sorts of problems.
Imo, not the best framework for creating laws. Essentially, it's an appeal to emotion.
You know what's better? Having none of this shit.
Did you just fix mental health?
Yeah as I also said.
I have trouble with this because it's like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?
I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material "feels" wrong, and that's no way to handle criminal misdeeds.
If it's not trained on CSAM or inpainted but fully generated, I can't really think of any other real legal arguments against it except for "this could be real." Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM has very different victims and abuse, so it needs different sentencing.
Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.
Apparently he sent some to an actual minor.
Better only means less worse in this case, I guess
It reminds me of the story of the young man who realized he had an attraction to underage children and didn't want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.
I see this fake CP as only a positive for those people. That it might make it difficult to find real offenders is a terrible reason against.
No?
Is everything completely black and white for you?
The system isn't perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?
You don't have to accept it as morally acceptable, but you don't have to treat them as completely equivalent either.
There are gradations of questionable activity, especially when there are no real victims involved. Treating everything exactly the same is, frankly speaking, insane. It's like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you're getting the needle.
I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.
This mentality smells of "just say no" for drugs or "just don't have sex" for abortions. This is not an ideal world, and we have to find actual plans/solutions to deal with the situation. We can't just cover our ears and hope people will stop.