My dad cited inaccurate information because of ChatGPT
I run a small VPS host and rely on PayPal for payments, mainly because (a) most VPS customers pay that way if you aren't AWS or GoDaddy and (b) its fraud protection is very good. My prior venture had quite a few chargebacks through Stripe, so it went PayPal-only as well.
My dad told me I should "reduce the processing fees," citing ChatGPT's claim that PayPal charges 5% when it really charges 3-3.5% (plus 49 cents). Yet he insisted the charge was 5%.
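For what it's worth, the fixed 49-cent component means the effective rate depends on ticket size, so small payments cost proportionally more. A quick sketch, assuming the commonly cited 3.49% + $0.49 rate for online card payments (actual rates vary by product and region, so treat the numbers as illustrative):

```python
def effective_fee_rate(amount, pct=0.0349, fixed=0.49):
    """Return (fee, fee as a fraction of the amount) for a single payment.

    Assumes a percentage-plus-fixed fee structure; the default values are
    the often-quoted 3.49% + $0.49, not an authoritative rate.
    """
    fee = amount * pct + fixed
    return fee, fee / amount

fee, rate = effective_fee_rate(100.00)
print(f"fee = ${fee:.2f}, effective rate = {rate:.1%}")
# → fee = $3.98, effective rate = 4.0%
```

Note that on very small tickets the fixed fee dominates, and the effective rate can actually climb past 5%, which may be part of where confusion comes from.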
Yes, PayPal sucks but ChatGPT sucks even more. When I was a child he said Toontown would ruin my brain, yet LLMs are ruining his even more.
LLMs are undoubtedly impressive tech that will get better with time. But to anyone singing their praises too emphatically I say ask it something on a topic you are an expert on; you’ll quickly see how fallible they currently are.
Tbh, if they game, get them to ask it about that: it fails spectacularly badly, even worse than in general. It's a bit better on TV shows and movies, probably because there are so many episode summaries and reviews online, but if you talk to it long enough and ask for varied and specific enough information, it'll fail there too.
They may not be an expert at something, but if they have a specific interest or hobby that'll probably work.
Calling LLMs "AI" isn't wrong at all; it's just that sci-fi has made people think AI always means something as smart as a human. Heck, the simple logic controlling the monsters in Minecraft is called AI.
He didn't cite wrong information (only) because of ChatGPT, but because he lacks the instinct (or training, or knowledge) to verify the first result he either sees or likes.
If he had googled for the information and his first click was an article that was giving him the same false information, he would've probably insisted just the same.
LLMs certainly make this worse, since much more of the information coming out of them is wrong, but the root cause is the same as it was before their prevalence. Incidentally, it's also the reason misinformation campaigns work so well and are so easy to run.
> If he had googled for the information and his first click was an article that was giving him the same false information, he would've probably insisted just the same.
If you're looking up content written by humans and published in an article on the internet, it is far less likely to be wrong.
It's a bit less likely to be wrong, but there's plenty of room for error, whether malicious and intentional or just sloppy research of even basic facts. One person being wrong once, by misreading, by misinterpreting data, or by trying to steer perception of something, can easily snowball into many sources repeating that wrong information ("I've read it, so it must be true"). Many kinds of information are also very dependent on perspective, adding nuance beyond "correct" and "false".
There are plenty of reasons to double-check information (seemingly) written by humans; the reasons are just different from those for AI content. But the basic idea that "it can easily be wrong" is the same.
Well, sure. But if you go to the PayPal website, you can see the correct information. Before Google's AI summary popped up at the top of the page, the PayPal website would have been the first result. In this situation, Google is now prioritizing the misinformation its AI pulled from some outdated website over the official PayPal page that has the correct info. That's the issue.
Ask it if PayPal has a 5% fee? Sounds like he might have been arguing about it, tried to fact-check himself, and ChatGPT told him what he wanted to hear, maybe.