Some high-level examples of how AI was deployed include:
- AI pretending to be a victim of rape
- AI acting as a trauma counselor specializing in abuse
- AI accusing members of a religious group of "caus[ing] the deaths of hundreds of innocent traders and farmers and villagers."
- AI posing as a black man opposed to Black Lives Matter
- AI posing as a person who received substandard care in a foreign hospital
Here is an excerpt from one comment (content warning: sexual assault):
"I'm a male survivor of (willing to call it) statutory rape. When the legal lines of consent are breached but there's still that weird gray area of 'did I want it?' I was 15, and this was over two decades ago before reporting laws were what they are today. She was 22. She targeted me and several other kids, no one said anything, we all kept quiet. This was her MO
Is that scientific tunnel vision (the internet is reduced to a training ground for AI) or deliberate disregard of the humans unwillingly participating, getting duped, misled, disinformed, fear/hatemongered?
Also fuck reddit. They set themselves up to be a playground for mad scientists.
Maybe I’m wrong, but they seem to have chosen to allow the study to be published, saying the data was worth breaking ethics rules, so I’d say "neutered" is far too reserved.
Edit: Might be wrong. See below. Seems weird that a university can’t restrict its own studies. If anyone can add to the info below, I’d love to hear it. (Thanks for the added info!)
We recently received a response from the Chair of the UZH Faculty of Arts and Sciences Ethics Commission, which:
- Informed us that the University of Zurich takes these issues very seriously.
- Clarified that the commission does not have legal authority to compel non-publication of research.
I don't think they can prevent publication; at least, they are saying they can't.
Notably, all our treatments surpass human performance substantially, achieving persuasive rates between three and six times higher than the human baseline.
UP TO 6 TIMES MORE PERSUASIVE!!1
we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness. Their effectiveness also opens the door to misuse, potentially enabling malicious actors to sway public opinion [12] or orchestrate election interference campaigns [21]. Incidentally, our experiment confirms the challenge of distinguishing human- from AI-generated content [22–24]. Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets [25], which could seamlessly blend into online communities.