which it turned out belonged to James [...] whose number appears on his company website.
When Smethurst challenged that, it admitted: “You’re right,” and said it may have been “mistakenly pulled from a database”.
But the overreach of pulling an incorrect number from some database it has access to is particularly worrying.
I really love this new style of journalism where they bash the AI for hallucinating and making obvious mistakes, only to then take anything it says about itself at face value.
It's a number on a public website. The guy googled it right after and found it. It's simply in the training data; there is nothing "terrifying" about this, imo.
Also, the first five digits of the two numbers were the same. Meta is guilty, but they're guilty of grifting, not of giving a rogue AI access to some shadow database of personal details... yet? Lol
It's a number on a public website. The guy googled it right after and found it. It's simply in the training data; there is nothing "terrifying" about this, imo.
Right. There's nothing terrifying about the technology.
What is terrifying is how people treat it.
LLMs will cough up anything they have learned to any user, and they do it while giving off all the social cues of an intelligent human who knows how to keep a secret.
This often earns the computer trust it doesn't yet deserve.
Examples like this story, which show how badly misplaced that trust is, can be terrifying to people who fell for that intelligence signaling.
Today, most chatbots don't do any permanent learning during chat sessions, but that is gradually changing. The trend should be particularly terrifying to anyone who previously shared (or habitually keeps sharing) things with a chatbot that they probably shouldn't.
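To make that failure mode concrete, here is a minimal sketch assuming a hypothetical chatbot whose learned memory is global rather than scoped per user. No real vendor's architecture is implied; the class, users, and retrieval logic are all made up for illustration:

```python
# Hypothetical sketch: a chatbot "memory" that is NOT isolated per user.
# Anything one user shares becomes retrievable context for every other user.

class SharedMemoryBot:
    def __init__(self):
        self.memory = []  # one global list: no per-user scoping

    def chat(self, user_id: str, message: str) -> list[str]:
        self.memory.append((user_id, message))  # naive "permanent learning"
        # Naive retrieval: surface any stored text sharing a word with the
        # query, regardless of who originally said it.
        words = set(message.lower().split())
        return [m for uid, m in self.memory
                if uid != user_id and words & set(m.lower().split())]

bot = SharedMemoryBot()
bot.chat("alice", "my phone number is 07700 900123")  # Ofcom-reserved fictional range
print(bot.chat("bob", "give me a phone number for customer services"))
# -> Alice's message leaks to Bob, because nothing scopes memory to its owner.
```

The fix is obvious in a ten-line toy (key the store by `user_id`); it is far less obvious in a system that learns from everything it ingests, which is the point.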
Waiting on the platform for a morning train that was nowhere to be seen, he asked Meta’s WhatsApp AI assistant for a contact number for TransPennine Express. The chatbot confidently sent him a mobile phone number for customer services, but it turned out to be the private number of a completely unconnected WhatsApp user 170 miles away in Oxfordshire.
Ah yes, what else to expect from “the most intelligent AI assistant that you can freely use”.
If working with AI has taught me anything, it’s to ask it absolutely NOTHING involving numbers. It’s fucking horrendous. Math, phone numbers, don’t ask it any of that. It’s just advanced autocomplete and it does not understand anything. Just use a search engine, ffs.
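For intuition on the “advanced autocomplete” point: a model emits a phone number one token at a time, each chosen for plausibility, and nothing in that loop consults a directory. A deliberately oversimplified toy (real models condition on context rather than sampling uniformly):

```python
import random

def plausible_uk_mobile() -> str:
    # "07" is the UK mobile prefix a model has seen constantly in training;
    # the remaining digits are sampled one by one with no lookup anywhere.
    return "07" + "".join(str(random.randint(0, 9)) for _ in range(9))

print(plausible_uk_mobile())  # looks like a real number; means nothing
```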
I asked my work’s AI to give me back a comma-separated list of strings that I gave it, and it returned the list with every string replaced by “CREDIT_DEBIT_CARD_NUMBER”. The numbers were 12 digits, not 16. I had to ask three times for the raw numbers, and it only complied after I said, verbatim, “these are 12 digits long not 16. Stop obfuscating it”.
I’ve even had it be wrong about simple math. It’s just awful.
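A plausible (unverified) explanation for the parent’s story: many corporate AI deployments pipe input or output through a DLP/PII-redaction filter, and an over-broad pattern will mask any long digit run, including 12-digit IDs that cannot be card numbers (real cards are 13 to 19 digits). A sketch of that over-matching; the pattern and placeholder name are illustrative, not any specific vendor’s filter:

```python
import re

# Over-broad "card number" redaction: matching 12+ digits also swallows
# 12-digit internal IDs, even though real card numbers are 13-19 digits.
OVERBROAD = re.compile(r"\b\d{12,19}\b")

def redact(text: str) -> str:
    return OVERBROAD.sub("CREDIT_DEBIT_CARD_NUMBER", text)

print(redact("ids: 123456789012, 210987654321"))
# -> ids: CREDIT_DEBIT_CARD_NUMBER, CREDIT_DEBIT_CARD_NUMBER
```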
It’s really crappy at trying to address its own mistakes. I find that it will get into an infinite error loop where it hops between 2-4 answers, none of which are correct. Sometimes it helps to explicitly instruct it to format the data provided and not edit it in any way, but I still get paranoid.
Either you are bad at ChatGPT or I am a machine whisperer, but I have a hard time believing Copilot couldn’t handle that. I regularly have it rewrite SQL code.
What models have you tried? I used a local Llama 3.1 to help me with university math.
It seemed capable of solving differential equations and doing Laplace transforms. It made some mistakes during the calculations, like a math professor in a hurry.
What worked best was getting a solution from Llama and validating each step with WolframAlpha.
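The validate-each-step workflow also works offline: a CAS such as SymPy can check a model’s intermediate claims without WolframAlpha. For instance, verifying a claimed Laplace transform (the specific transform here is my own example):

```python
from sympy import symbols, exp, laplace_transform

t, s = symbols("t s", positive=True)

# Suppose the model claims L{e^(-2t)} = 1/(s + 2). Verify it:
F = laplace_transform(exp(-2 * t), t, s, noconds=True)
print(F)  # 1/(s + 2)
assert F == 1 / (s + 2)
```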
Or, and hear me out on this, you could actually learn and understand it yourself! You know? The thing you go to university for?
What would you say if it came to light that an engineer had outsourced the structural analysis of a bridge to some half-baked autocomplete? I’d lose any trust in that bridge and any respect for that engineer, and would hope they’re stripped of their title and held personally responsible.
These things are currently worse than useless precisely because they are sometimes right: it gives people the wrong impression that they can actually be relied on.
TL;DR: the bot generated a random number that happened to be a real person’s phone number.
I don’t understand what is “terrifying” about that. Even without the bot, anyone with malicious intent could make up a random phone number.
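Rough numbers behind that claim (my own back-of-envelope; both figures are order-of-magnitude assumptions, not official statistics): UK mobile numbers are “07” plus nine digits, so about a billion syntactically valid combinations against tens of millions of active subscriptions, meaning even a uniformly random valid-looking number has a non-trivial chance of being someone’s.

```python
# Back-of-envelope; both figures are rough assumptions, not official statistics.
valid_uk_mobiles = 10 ** 9           # "07" followed by 9 free digits
active_subscriptions = 8 * 10 ** 7   # order of magnitude for the UK
print(active_subscriptions / valid_uk_mobiles)  # ~0.08
```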
These thin-content articles are just churned out by news agencies to feed the “bots are bad” narrative that makes them money. Of course bots are bad in many ways, but not for such flimsy reasons.