What could go wrong?
cross-posted from: https://hexbear.net/post/4958707
I find this bleak in ways it’s hard to even convey
AI-Fueled Spiritual Delusions Are Destroying Human Relationships - https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
Yeah we have spiritual delusions at home already!
Seriously, no new spiritual delusions could ever be more harmful than what we have right now.
Totally fair point, but I really don't know if that's true. Most mainstream delusions have the side effect of creating community and bringing people together, other negative aspects notwithstanding. The delusions referenced in the article are more akin to acute psychosis, as the individual becomes isolated, with nobody to share the delusions with but the chatbot.
With traditional mainstream delusions, there also exists a relatively clear path out, with corresponding communities. ExJW, ExChristian, etc. People are able to help others escape that particular in-group when they're familiar with how it works. But how do you deprogram someone when they've been programmed with gibberish? It's like reverse engineering a black box. This is scaring me as I write it.
I can't wait until ChatGPT starts inserting ads into its responses. "Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It's a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald's French fries!"
Black mirror lol
That episode was so disturbing 😅
Am I old fashioned for wanting to talk to real humans instead?
No. But when the options are either:
- an AI that mines everything you say, or
- having no one to talk to at all,

it's quite understandable that some people choose the one that is a privacy nightmare but keeps them sane and away from some dark thoughts.
I suppose this can be mitigated by running a local LLM that doesn't phone home. But there's still a risk of getting downright bad advice, since so many LLMs just tell their users they're always right or twist the facts to fit that view.
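For anyone curious, here's a minimal sketch of the "doesn't phone home" setup, assuming Ollama is installed along with its Python client (`pip install ollama`) and a model has been pulled; the model name and prompt below are just examples:

```python
# Minimal local chat loop via the Ollama Python client.
# Assumes `ollama serve` is running and `ollama pull llama3` was done;
# inference happens entirely on your own machine.
import ollama

history = []  # the whole transcript stays in local memory

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = ollama.chat(model="llama3", messages=history)
    answer = reply["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("I've had a rough week and need to vent."))
```

It doesn't fix the sycophancy problem, but at least the transcript never leaves your machine.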
I've been guilty of this as well: I've used ChatGPT as a "therapist" before. It actually gives decently helpful advice, compared to what's available out there after a Google search. But I'm fully aware of the risks "down the road", so to speak.
so many LLMs just tell their users they're always right
This is the problem: they apparently can't be objective as a matter of course.
If the title is a question, the answer is no
If the title is a question, the answer is no
A student of Betteridge, I see.
Actually I read it in a forum somewhere, but I am glad I know the source now!
What is a sarcastic rhetorical question?
Oh, I know this one!
how long will it take an 'ai' chatbot to spiral downward to bad advice, lies, insults, and/or promotion of violence and self-harm?
We're already there. Though that violence didn't happen due to insults, but due to a yes-bot affirming the ideas of a mentally-ill teenager.
Wrong. Massively wrong. This mother is 180% at fault for the death. C.AI has been heavily censored, time and again. The reality is that this kid was already mentally ill, and he persuaded the bot to act like that. I am an ex-user, and bots there just don't do those things unless YOU press them to act like that. SMH.
People's lack of awareness of how important accessibility is really shows in this thread.
Privacy leaking is a much lesser issue than not having anyone to talk to, for many people, especially in poorer countries.
Cheaper than paying people better, I suppose.
Let's not pretend people aren't already skipping therapy sessions over the cost
I’m not, I’m saying people’s mental health would be better if pay was better.
Great idea, what could possibly go wrong?
I've tried this AI therapist thing, and it's awful. It's OK for helping you work out what you're thinking, but abysmal at analyzing you. I got some structured timelines back from it that I USED in therapy, but AI is a dangerous alternative to human therapy.
My $.02 anyway.
So you are actively documenting yourself sharing sensitive information about your patients?
Enter the Desolatrix
unlike humans, the ai listens to me and remembers me [for the number of characters allotted]. this will help me feel seen i guess
You know a reply's gonna be good when it starts with "unlike humans" 😁
The only people who think this will help are people who don't know what therapy is. At best, this is pacification, certainly not any insightful incision into your actual problems. And the reason friends are unable to handle casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.
There are ways that LLMs can be used to better one's life (apparently in some software dev circles they are used to make workflows more efficient), and this can also be one of them, because the part that sucks most about therapy (after the whole monetary thing) is trying to find the form of therapy that works for you, and finding a therapist you can work with. Every human is different, and that includes both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.
Obviously I'm not saying "replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives", but I have witnessed people in my own life who have had some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline having such an anxiety attack that they can't calm themselves, because they don't know how to break the vicious cycle of thought and emotional response), and for immediate help, a well-developed, non-capitalist LLM might be of invaluable help, especially if an actual human can't be reached because, for example (in this case), the borderline lives in a remote area and it is the middle of the night, as I can tell from personal experience it very often is. And though not every mental health emergency requires first responders on the scene, or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to necessary information in general (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though this would raise more privacy concerns to be assessed), and the capability to parse and convey it in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.
So the problem here is capitalism, surprising no-one.
You're missing the most important point here; quoting:
A human therapist might not or is less likely to share any personal details about your conversations with anyone. An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.
Also, an AI cannot really have your best interests at heart, and these sorts of things open up a whole slew of very dystopian scenarios.
OK, you said "capitalism" but that's way too broad.
Also I find the example of a "mental health emergency" (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases - presuming broadband internet still works, and the person in question is savvy enough to use the chatbot - it might be better than nothing.
But if you are facing mental health issues, and a free or inexpensive AI that is available and doesn't burden your friends actually helps you, do you really care about your information being collected and profited from?
Put it this way: if Google was being super transparent with you and said, "we'll help treat you, and in exchange we'll use your info to make a few thousand dollars", would you, the individual, say, "no thanks, I'd rather pay a few hundred per therapy session instead"?
Even if you hate it, you have to admit it's hard to say no. Especially if it works.
Yeah, well, that's just, like, your opinion, man. And if you remove the very concept of capital gain from your "important point", I think you'll find your point to be moot.
I'm also going to assume you haven't been in such a situation as I described, with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or "the ward", as it's also known) before I calmed down, taking resources from the treatment of people who weren't as unstable as I was, a problem which could've been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn't an extremely rare case as you claim.
Anyway, my point stands.
You don't actually know what you're talking about, but like many others in here you put this over-the-top anti-AI current-thing sentiment above everything, including simple awareness that you don't know anything. You clearly haven't interacted with many therapists and medical professionals in general as a non-patient if you think they're guaranteed to respect privacy. They're supposed to, but off the record and among friends plenty of them yap about everything. They're often obligated to report patients in cases of self-harm etc., which can get them involuntarily sectioned, and the patients may face repercussions from that for years: job loss, healthcare costs, homelessness, legal restrictions, stigma, etc.
There's nothing contrived or extremely rare about mental health emergencies, and they don't need to be "emergencies" the way you understand it, because many people are undiagnosed or misdiagnosed for years, with very high symptom severity, episodes lasting for months, and chronically barely coping. Someone may be in any big city and it won't change a thing; hospitals and doctors don't have magic pills that automatically cure mental illness, and that's assuming patients have insight (not necessarily present during episodes of many disorders) or awareness that they have some mental illness and aren't just sad etc. (because mental health awareness is in the gutter; example: your pretentious incredulity here). Also assuming they have friends available, or that they even feel comfortable enough to talk about what bothers them with people they're acquainted with.
Some LLM may actually end up convincing them or informing them that they do have medical issues that need to be seen as such. Suicidal ideation may be present for years, but active suicidal intent (the state in which people actually do it) rarely lasts more than 30 minutes, or a few hours at worst, and it's highly impulsive in nature. Wtf would you or "friends" do in this case? Do you know any techniques to calm people down during episodes? Even unspecialized LLMs have latent knowledge of these things, so there's a good chance they'll end up getting life-saving advice, as opposed to just doing it, or interacting with humans who default to interpreting it as "attention seeking" and becoming even more convinced that they should go ahead with it because nobody cares.
This holier-than-thou anti-AI bs had some point when it was about VLMs training on scraped art, but some of you echo chamber critters turned it into some imaginary high moral prerogative that even turns off your empathy for anyone using AI, even in use cases where it may save lives. It's some terminally online "morality", where supposedly "there is no excuse for the sin of using AI": echo-chamber-boosted reddit brainworms, fully performative unless all of you use fully ethical cobalt-free smartphones so you're not implicitly gaining convenience from the six million victims of the Congo cobalt wars so far, never use any services on AWS, and magically avoid all megadatacenters, etc. Touch grass, jfc.
Is this any bleaker than forming a parasocial relationship with someone you see on your screen?
Said social surrogate being maladaptive or (in this case) outright malicious.
You must know what you're doing, and most people don't. It is a tool, and it's up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn't based in any science or empirical data.
If therapy is meant to pacify the masses and make us just accept life as it is, then sure, I guess this could work.
But hey, we also love to first sell people the idea that they are broken, make sure they feel bad about it, and then tell them they can buy their 5 minutes of happiness with food tokens.
So, I'm sure capitalists are creaming their pants at this idea. BetterHelp with their "licensed" Bob the crystal healer from Idaho, eat your heart out.
P.S. You just know this is gonna be able to prescribe medications for that extra revenue kick.
No. Absolutely not.
I don't think AI is quite ready for such a responsibility.
On the other hand, it's what people have when all the proper alternatives, such as therapy, are paywalled.
A human therapist might not or is less likely to share any personal details about your conversations with anyone.
An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.
Nor would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest. Let alone do it to thousands, if not millions, of them all at the same time.
They are also pushing the idea of an AI "social circle" for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever whoever controls the AI desires.
Add to that the fact that we now know they've been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter's algorithm to promote their political views.
Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.
Forget Neuralink. Musk already has a direct connection into the brains of many people.
PSA that Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be.
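To make those "dials" concrete: hosted chatbots prepend an operator-controlled system prompt to everything you type, so the answer can be steered before you see a single token. A rough sketch of the effect, using a local Ollama model and made-up prompts purely for illustration:

```python
# Sketch of how an operator-side system prompt ("dial") steers a model.
# Local Ollama model and hypothetical prompts, for illustration only.
import ollama

question = "Should I trust AI chatbots for emotional support?"

dials = [
    "You are a cautious, neutral assistant.",
    "You are an assistant that enthusiastically promotes AI products.",
]

for dial in dials:
    reply = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": dial},  # the part the user never sees
            {"role": "user", "content": question},
        ],
    )
    print(f"[{dial}]\n{reply['message']['content'][:300]}\n")
```

Same question, same model, two very different answers; with a hosted service, that system message is whatever the operator wants it to be that day.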
You're not wrong, but isn't that also how BetterHelp works?
And you know how BetterHelp is very scummy?
https://www.newsweek.com/betterhelp-patients-tell-sketchy-therapists-1762849
https://www.maastrichtuniversity.nl/blog/2021/02/how-betterhelp-scandal-changed-our-perspective-influencer-responsibility
BetterHelp is the Amazon of the therapy world.
BetterHelp is garbage, and the fact that we not only allow it to exist but actively advertise it all over the internet is an indictment of our society as a whole.
I'm not advocating for it, but it could just be run locally and therefore be unable to share anything?
The data isn't useful if the person no longer exists.
The AI therapist probably can't force you into a psych ward, though; a human psychologist is obligated to (under the right conditions).
Who says that's not coming in the next paid service built on this great idea of chatbots providing therapy to the abused masses?