Megan Garcia sued Character.AI in federal court after the suicide of her 14-year-old son, Sewell Setzer III, arguing the platform has "targeted the most vulnerable members of society – our children."
Is Megan being sued for negligent parenting, for not getting her child appropriate emotional support (or providing it herself), and for keeping an unsecured firearm in the home?
She details that she was aware of his growing dependency on the AI. She indicates she was aware her son knew the location of the firearm and was able to access it. She said it was compliant with Florida law, but that seems unlikely, since guns and ammo need to be stored in separate, secure (typically locked) locations, and the firearms need to have trigger locks on them. If you're admitting your mentally unstable child knows the location of a firearm in your home and can access it, it is OBVIOUSLY not secured.
She seems to be saying that she knew he could access it, but also that it was legally secured. I find it difficult to believe both of those facts can be simultaneously true. But AI is the main problem here? I think it's obviously part of what's going on, but she had a child with mental illness and didn't seem proactive about much except this lawsuit. She got him a month of therapy and then stopped while simultaneously acknowledging he was getting worse and had received a diagnosis. This legal filing frankly seems more damning of the mother than the AI, and she seems completely oblivious to that fact.
Frankly, and at best, this seems like an ambulance-chasing attorney taking advantage of a grieving mother for a payday.
Yes, that's my point. Once she became aware that her mentally disturbed child had access to the firearm, which she acknowledged, then it is no longer secured. She also never mentions that it was locked in any way, so I suspect it never was. Considering he found it when he found his phone, this sounds more like a drawer or somewhere she thought he wasn't likely to look, but not somewhere that is actually locked. The idea that the ammo and firearm were secured separately and that additionally there was a trigger lock seems even more unlikely.
Sounds to me that: 1) she was aware her child was having mental health issues. 2) she was aware it was getting worse. 3) she was aware he was becoming infatuated with the AI. 4) she was aware that the child had found and had access to a firearm. 5) she was aware her child's mental health had been diagnosed by a mental health professional. 6) she did almost nothing about the things of which she was aware. 7) pikachu face better sue the internet!
And those are all things she quite literally describes as justification for suing. It's completely bizarre and shows an almost complete lack of self awareness and personal responsibility.
The Florida law clearly implies that if you have a child under 16 in the home, they must not have access to the firearm. Giving a minor keys would be considered giving access.
Regardless, the point is, a parent that gives a mentally unstable child access to a firearm and then sues someone else for their suicide is a hypocrite and shitty parent.
Ohh, lots of obnoxious warning labels on guns like they have on everything else, I like it. Make them orange and white and make sure they can’t be removed.
He ostensibly killed himself to be with Daenerys Targaryen in death. This is sad on so many levels, but yeah... parenting. Character.AI may have only gone 17+ in July, but Game of Thrones was always TV-MA.
The issue I see with character.ai is that it seems to be unmoderated. Anyone with a paid subscription can submit their trained character. Why the frick do sexual undertones or overtones even come up in non-age-restricted models?
They, the provider of that site, deserve the full brunt of this lawsuit.
If someone is depressed enough to kill themselves, no amount of “more parenting” could’ve stopped that.
Shame on you for trying to shame the parents.
And not having a fricking gun in your house your kid can reach.
Maybe. Maybe not. I won’t argue about the merits of securing weapons in a house with kids. That’s a no-brainer. But there is always more than one way to skin the proverbial cat.
Oh, and regulations on LLMs, please.
Pandora’s Box has been opened. There’s no putting it back now. No amount of regulation will fix any of this.
Maybe a Time Machine.
Maybe…
I do believe that we need to talk more about suicide, normalize therapy, free healthcare (I’ll settle for free mental healthcare), funding for more licensed social workers in schools, train parents and teachers on how to recognize these types of situations, etc.
As parents we do need to be talking more with our kids. Even just casual check ins to see how they’re doing. Parents should also talk to their kids about how they are feeling too. It’ll help the kids understand that everybody feels stress, anxiety, and sadness (to name a few emotions).
Yes, parenting could have helped him distinguish between talking to a real person and an unmoving, cold machine.
And sure, regulations now would not change what happened, duh. But regulations need to happen: companies like OpenAI and Microsoft and Meta are running amok, and their LLMs, as unrestricted as they are now, are doing far more damage to society than they are helping.
This needs to stop!
Also, I feel no shame in shaming parents who don't do their one job, or do it inadequately. This was a preventable death.
If someone is depressed enough to kill themselves, no amount of “more parenting” could’ve stopped that.
Parents are supposed to care for their child and look out for them. If your kid gets depressed enough to kill himself and you're none the wiser at any point, I'd say more parenting is very much needed. We're not talking about someone who cut contact with everyone and was living on their own, slowly spiralling. We're talking about a 14-year-old kid.
We are playing with some dark and powerful shit here.
We are social creatures. We’re primed to care about our social identity more than our own lives.
As the sociologist Brooke Harrington puts it, if there were an E = mc² of social science, it would be SD > PD: "social death is more frightening than physical death."
…yet we’re making technologies that tap into that sensitive mental circuitry.
Like, check out the research on distracted driving and hands-free options:
Hands-free voice control systems present a similar problem, even though we know rationally that we should have zero guilt about rudely interrupting a conversation with a computer. And again, it's not simply because the device is more awkward. A "Wizard-of-Oz paradigm" perfect voice control system had these same problems.
The most basic levels of social pressure can get us to deprioritize our safety, even when we know we're talking to a computer.
And the cruel irony on top of it is:
Because we care so much about preserving our social status, we have a tendency to deny or downplay how vulnerable we all are to this kind of “obvious” manipulation.
Just think of how many people say “ads don’t affect me”.
I’m worried we’re going to severely underestimate the extent to which this stuff warps our brains.
I was going to make a joke about how my social status died over a decade ago, but then I realized that no, it didn't. It changed.
Instead of my social status being something amongst friends and classmates, it's now coworkers, managers, and clients. A death in the social part of my world - work - would be so devastating that it motivates me to suffer just a little bit more. Losing my job would end a lot of things for me.
What we need is a human society predicated on affording human decency, rather than on taking it away to make profit for those who already have the most.
I bet there are people who committed suicide after their Tamagotchi died. Jumping into the 'AI bad' narrative because of individual incidents like this is moronic. If you give a pillow to a million people, a few are going to suffocate on it. This is what happens when you scale something up enough, and it proves absolutely nothing.
The same logic applies to self-driving vehicles. We’ll likely never reach a point where accidents stop happening entirely. Even if we replaced every human-driven vehicle with a self-driving one that’s 10 times safer than a human, we’d still see 8 people dying because of them every day in the US alone. Imagine posting articles about those incidents and complaining they’re not 100% safe. What’s the alternative? Going back to human drivers and 80 deaths a day?
Yes, we should strive to improve. Yes, we should try to fix the issues that can be fixed. No, I'm not saying 'who cares' - and so on with the straw men I'm going to receive for this. All I'm saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.
A chatbot acts like a human, it's also very supportive, polite, and courteous. It doesn't get angry or judge you. This can affect one's mind in a way that other things you've mentioned like a Tamagotchi, a pillow, or a self-driving car can't. We simply can't compare AI to these things. Adults fall for this, let alone teenagers who are fueled by extreme levels of hormones.
Does your Tamagotchi encourage you to commit suicide so you can join it, demand to be the only important thing in your life, and sext you? These are things that, if a human adult programmer did them, would make that person liable both criminally and civilly. Just being AI doesn't give it a free pass.