My sister caught her 8-year-old son talking to AI chatbots on an app like this one and blocked it. She went through the history and said the bots often tried to flirt with him, but he didn't seem interested, and seemed more to just be looking for someone to talk to.
This may be an attempt to hook young kids, though I'm definitely not saying the pedo vibes aren't intentional. I just think they're going for more than one audience.
There's plausible denia... nah, I got nothin'. That's messed up. Even for the most mundane, non-gross use case imaginable, why the fuck would anybody need a creepy digital facsimile of a child?
"beautiful and up for anything" is incredibly suggestive phrasing. It's an exercise in mental creativity to make it sound not creepy. But I can imagine a pleasant grandma (always the peak of moral virtue in any thought experiment) saying this about her granddaughter. I don't mean to say I have heard this, only that I can imagine it. Barely.
I'm not surprised, and I don't think you or anyone else is either. But that doesn't make this any less disturbing.
I'm sure the app devs aren't interested in cutting off a huge chunk of their loyal users by doing the right thing and getting rid of those types of bots.
Yes, it's messed up. In my experience, it's difficult to report chatbots and see any real action taken as a result.
Ehhh, nah. As someone who used character.ai before: plenty of horrible bots get cleared out, and the bots have been impossible to have sex with unless you get really creative. The most horrendous ones got removed quite often and were consistently reposted. I'm not here to shield a big company or anything, but the "no sex" thing was a huge deal in the community, and users constantly fought with the devs about it.
They're probably trying to hide behind a veil of more normal bots now, but I struggle to imagine how anyone would get it to do sexual acts when even some lightly violent RPs I tried got censored. It's pretty difficult, and the filter only got stricter over time. Idk though, I stopped using it a while ago.
They definitely knew who they were targeting when they made this. I only hope that, if those predators simply must text with a child, they keep talking to an AI bot rather than a real child.
I agree in principle, but look at the number of interactions. I think there's a fine line between creating safe outlets for urges and outright promoting and normalizing criminal activity. I don't think this should be a) this accessible or b) happening without psychiatric supervision. But maybe I'm being too judgemental.
I've gotten a couple of ads for an AI chat app on Android. I can't remember the name, but it has an onscreen disclaimer that reads something like "All characters shown are in their grown-up form", implying there are teen or child forms you can talk to.
I've messed around with some of these apps out of curiosity about where the technology is. There's typically a report function in the app. You can probably report that particular bot from within the app to try to get it deleted. Reporting the app itself probably won't do much.
Why is that unfortunate, though? Who would you be protecting by making that chatbot illegal? Would you "protect" the chatbot? Would you "protect" the good-think of the users? Do you think it's about preventing the "normalization" of these thoughts?
In the case of the latter: we had the very same discussion about violent shooter games, and the evidence shows that shooter games do not make people more violent or more likely to kill with guns or other weapons.
If you suspect any wrongdoing, it's generally best to report such things. They have several social media channels listed at the bottom of the website.
I'd say complain to the company first, at least if it's based in a regular country, and only then blog about it. It also underlines your point if you can write, "I informed them, but they didn't care."
I believe it's the other way around if it's really shady and/or a crime is involved and you suspect the company will sweep it under the carpet. In that case you'll want to inform the police first so they can gather evidence. But don't waste their resources on minor things; they have enough to do. And I don't think this one rises to that level yet, so I wouldn't add it to the workload of already overworked police.
Judging by what I've seen when talking to police and media, they often lack the interest or time to focus on random things as long as there are bigger fish to fry... I already reported a worse service (one that had been in the news) to the police's internet office, and nothing ever came of it. So that's sometimes not the solution either.
I think spreading some awareness is a good thing, so this post is warranted. But what I'd do in this specific case is take a screenshot and save the URL, in case I want to escalate things at a later date. Then I'd start with a regular report to the company, since they seem to be a regular business registered in the USA. After that, I'd wait two weeks before bothering anyone else.
If this was an image or video generator, I'd act differently and maybe go straight to the police. But it isn't.