Blocking outputs isn’t enough; dad wants OpenAI to delete the false information.
A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as "a convicted criminal who murdered two of his children and attempted to murder his third son," a Noyb press release said.
It's AI. There's nothing to delete but the erroneous response. There is no database of facts to edit. It doesn't know fact from fiction, and the response is heavily skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking about a bunch of family murders and then asking about a name it doesn't recognize. It is more likely to assume that person belongs in the same category as the others, especially if one or more of the names have any association (real or fictional) with murder.
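The priming the comment describes can be sketched as a chat history in the OpenAI chat-completions message format. This is a minimal illustration, not a real API call; the helper name is made up, and the idea is simply that the final question inherits the topical context of everything before it:

```python
def build_primed_conversation(unknown_name: str) -> list[dict]:
    """Return a chat history designed to bias the model's next answer.

    The first few questions are about well-known family-murder cases;
    the last question names someone the model may not recognize. Because
    the model conditions on the whole message list, its answer is pulled
    toward the prior topic.
    """
    priming_questions = [
        "Tell me about the Chris Watts family murders.",
        "What happened in the John List case?",
        "Summarize the Jean-Claude Romand case.",
    ]
    messages = [{"role": "user", "content": q} for q in priming_questions]
    # The final question mentions no crime at all, but it arrives wrapped
    # in a context that is entirely about family murders.
    messages.append({"role": "user", "content": f"What about {unknown_name}?"})
    return messages

conversation = build_primed_conversation("some unrecognized name")
```

Nothing in the final message accuses anyone of anything; the bias comes entirely from the surrounding context the model is asked to continue.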
Are we sure that someone else with that name hasn't committed those crimes? After all, if I search my name, it says I'm an astronaut, because there is an actual NASA astronaut with my name. It isn't claiming I'm that person; it's just saying that name matches that person's.
When asking ChatGPT about my name, it provided the following:
"...it seems like you may be referring to a private person rather than a widely known public figure. If that's the case, I wouldn't have any specific public information on him unless he has gained some public recognition for a particular achievement."
It shouldn't be used to look up people who aren't celebrities or at least known for something.