SpinScore: AI-powered fact checking and media bias tool
I came across this tool today and was wondering if anyone has any information or thoughts to share about it?
Obviously, being "AI"-driven, it's going to make silly mistakes sometimes, as they all do, but it's an interesting attempt to reduce reliance on the human factor in assessing bias. That reliance on human judgement is a common criticism directed at MBFC, PolitiFact and other similar sources.
I can see that the claim an AI model eliminates bias isn't really true in one sense, because biases in the training data just get baked in and obscured. But it does at least reduce the possibility of ad hoc editorial bias. And as an automated tool, it can be run on demand for specific articles, unlike human fact checkers, who cherry-pick which facts they report on, which is yet another potential source of [selection] bias.
Do you think something like this would be a better option than the lemmy.world media bias fact check bot, for example? You wouldn't even need to include the results in the post; just generate a single on-demand link to SpinScore (or similar) per article so people can look up an article if they're interested (see the sketch below).
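To spitball what that could look like: the bot wouldn't analyse anything itself, it would just build a lookup link per article. Here's a minimal Python sketch, assuming (hypothetically) that SpinScore accepts the article URL as a `?url=` query parameter; the real interface may differ:

```python
from urllib.parse import quote

# Hypothetical endpoint format: I'm assuming SpinScore can take the article
# URL as a query parameter. This is a guess, not documented behaviour.
SPINSCORE_BASE = "https://spinscore.io/?url="

def spinscore_link(article_url: str) -> str:
    """Build an on-demand SpinScore lookup link for a single article."""
    # Percent-encode the whole article URL so it survives as one parameter.
    return SPINSCORE_BASE + quote(article_url, safe="")

# What the bot would append to a post instead of inlining any verdicts:
print(spinscore_link("https://example.com/news/some-article"))
```

The point being the bot stays dumb: no verdicts in the post, just a pointer for anyone curious.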
This is just a topic for general discussion and spitballing, not a proposal.
This is a terrible idea. Fact checking needs to be understood as a safety feature.
If a car had airbags that failed to fire one in every ten times, that car would be recalled. We regularly test fire alarms. I could go on. The point is that we don't accept any meaningful failure rate from safety features.
If you pitch this as a "fact checker" people will inherently trust it. That means they'll trust it even when it's wrong.
And saying "well, you need to double-check it" is pointless, because if I have to double-check, why have it at all? Double-checking basically means running the claim through a reliable fact checker, right? So if I'm already doing that, checking the output of this thing first is just a waste of my time. Even when it's right, I would still have gotten the right answer from the reliable fact checker, and I always have to go to the reliable fact checker because I don't know when this AI might be wrong.
This is just taking everything wrong with LLMs as a research tool and applying it to a scenario with even less room for error.
Some good points there, and I share many of those concerns. But playing devil's advocate for a moment, you could level many of those criticisms at human fact checkers too.
I did test it on a few articles, including an interview piece with RFK Jr. It marked a number of claims as false because they couldn't be corroborated with other sources, even though they were direct quotes from RFK Jr in the interview. It's definitely got issues that human reviewers wouldn't trip over.
Human fact checkers have a process. They have verifiable methods. They cite their sources. There are layers upon layers of verification built into what they do. The fact checkers who have a reputation for trustworthiness have that reputation because their methods have been proven reliable.
This is the difference between a human fucking up and an LLM fucking up. When a human fucks up, people can challenge the claim. They can ask to see the evidence. They can discuss methodology. You can't do that with an LLM, because the answer it gives is the product of a black box. It cannot discuss or explain the reasoning it used to reach a conclusion, because the words "reasoning" and "conclusion" have nothing to do with how it operates.
So no, you cannot level the same criticisms at human fact checkers. You can level criticisms that sound the same on a surface level, if you've bought into the AI industry's desperate hype.