This isn't an AI problem. This is a "most humans are assholes" problem. How hard is it to say "Oh, you don't have what I need? That's too bad. Can you please cancel my subscription?"
I just can't understand why it got to the point of her sending screenshots. Is it that the guy won't give a refund, or does she think he's lying and want the map he's "hiding"?
I'd assume it's the idiot sending ChatGPT screenshots.
It's an AI problem. We know people are stupid. However, people selling AI garbage tell them it's intelligent, when it really isn't. It is trained to speak confidently and people believe it. It's why con(fidence) men work.
The people pushing these products know some people won't understand it and will take what it says at face value, and they actively push that idea anyway. They are creating this situation on purpose. If they were responsible, they'd be upfront about the limitations and try to ensure even the most gullible people are skeptical of what it writes. They don't even try to do this, though. They create a situation where this happens to pad their own pockets.
That's not remotely close to what I said. I said the companies have a responsibility to inform the idiot that it's not always accurate. They'll still be idiots, but they'll have been warned not to trust the LLM's response by default. They might still complain about this map, but hopefully, once they're told the LLM was wrong, they'd accept that it was the LLM's fault, because the LLM told them it might make things up.
It doesn't cure idiots, but it does make it harder for idiots to make the mistake of trusting your software. Instead, they push the image that their software is intelligent ("AI"), and constantly send the message that it is to be trusted.
No, we are specifically discussing the conjunction of people being assholes and AI being an asshole, and the truth is obviously somewhere in between. You're taking the position that it's entirely an AI problem.
This isn't an AI problem. This is a "most humans are assholes" problem. How hard is it to say "Oh, you don't have what I need? That's too bad. Can you please cancel my subscription?"
It is an AI problem, and you've just admitted you agree. Yes, people being stupid is also a problem, but that can't be fixed. Pretending AI is intelligent, or the answer to our problems, can be fixed.
Let me introduce you to education. It makes people less stupid, and subsequently less of an asshole too. I admitted that it's a both problem, which was my original stance; you reaffirmed your stance that it's not a people problem. It's okay to disagree, but you have a polarised opinion that seems worth pointing out for other readers. People can be less assholish, and that can be changed. It's a both problem, which seems to be the consensus as well. Cheers.
I was working support for a multinational tech company. Customer: "I searched for your support number and I rang them and they scammed me, you guys are shit."
Turns out they had clicked the top result, which was SEO'd to shit specifically to catch these types of people who can't think for themselves.
So not just assholes, but also tech illiterate folks that trust the first thing they read.
There are two victims: the illiterate, who get taken advantage of by malicious actors gaming the search results, and your company, whose tech support center has to deal with the victims' shame and distress, plus the reputational damage from scammers impersonating you.
There is actually a third victim and that’s the rest of your customers who have to pay higher rates for services to cover the losses due to fraud.
The bad guy in this scenario isn’t any of the victims but if the two victims don’t have empathy for each other, ultimately the bad guys are empowered to further steal.
Ah, I see where you're coming from now. No, you don't have to glaze me to get the support that you need, but you sure as hell don't get to verbally abuse me because you made a mistake and are too much of a narcissistic asshole to reflect on that mistake.
It’s both. People are misusing AI at the encouragement of companies who want to sell it.
What people want is factually correct information. AI doesn’t deliver this, what it delivers is competently presented and easily understood words which may or may not be correct.
Unfortunately, many people don’t understand how AI works so they don’t realize that they’re using the wrong tool for what they want to accomplish.
The reason AI is part of the problem is that it contributes to the spread of misinformation.
I just wish there was a right tool, because I don't feel traditional search is it either after the era of SEO maximization. IMO, part of why AI search is popular is because traditional search has degraded so much.
I think the closest thing we have to a “right tool” is your brain. If you’re looking for a product, your first thought shouldn’t be “let me ask ChatGPT”; it should be something like “let me ask someone who sells or is familiar with this product.”
Tools like search engines can be useful for finding the right people to talk to.
I think people like interacting with a computer instead of a person because it’s “more convenient.” Many computer systems smooth over the friction that we experience in the real world.
One of the common topics for internet comics these days seems to be anxiety people have about making phone calls, and I think search engines and chat bots present a similar dynamic.
Yes, maybe people don’t experience as much anxiety when using a chatbot or an interactive voice response system, but ultimately those tools won’t always work, and people will eventually need to work through their anxiety to accomplish what they want, which involves interacting with other humans (or choose not to engage with people and become bitter and isolated).
If these asshole companies would connect me to a person instead of a bot with worse hearing than me and stressful timing in between slow, garbled, repetitive prompts when I call, I’d have no issue whatsoever using a good old fashioned phone to set out and solve my problems.
Since I’m equally likely to deal with a bugged-out robot whether I type at it or yell at it, I may as well exhaust the options where I can read instead of being forced to wait to be talked down to by a machine. (To clarify: I DO NOT use ChatGPT or other LLMs, I only use search engines.)
The stress comes from not being able to talk to/reach people reliably by phone, not at the thought of just talking to a person over a call.
Depends. I think the fact that we can access a lot of diverse sources of information is the greatest part of the internet. For example, I do not want to ask a person how this obscure part of this device works; I want to quickly know how it works. Usually I use both classic search engines and the hallucination machines, because either can shortcut hours of research or be completely useless, depending on the question.
I will agree degradation happened, but Google was way more reliable before AI started "helping". Now it makes up its own write-up about whatever you search for and gives you little actually useful info.
Global corruption and corporate greed, mostly. Organizations that have credibility are cashing in on it now, suddenly okay with systems that can be, and often are, confidently wrong. Normies have a hell of a time tuning their expectations, and little is being done to temper them. This is accelerating.
This is a fucking corporation and capitalism problem, where these corpos have to convince people that LLMs can provide factual information when they absolutely cannot be trusted to do this.
Are there corporations actively trying to convince people AI text generators are accurate? The only thing I have seen a corporation say about a program's accuracy is the tiny text on the web version of ChatGPT suggesting people verify the output, which people ignore.
They are obviously not trying to stop people from thinking AI is always right, but are they actively trying to convince them of that?