I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”
Tech company creates best search engine → world domination → becomes VC company in tech trench coat → destroys search engine to prop up bad investments in artificial intelligence advanced chatbots
The reason why Google is doing this is simply PR. It is not to improve its service.
The underlying tech is likely Gemini, a large language model (LLM). LLMs handle chunks of words (tokens), not what those words convey, so they have no way to tell accurate info apart from inaccurate info, jokes, "technical truths", etc. As a result their output is often garbage.
You might manually prevent the LLM from outputting a certain piece of garbage, perhaps a thousand pieces. But in the big picture it won't matter, because it's outputting a million different pieces of garbage; it's like trying to empty the ocean with a small bucket.
I'm not making the above up; look at the article. It's basically what Gary Marcus is saying, in different words.
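The tokens-vs-meaning point can be sketched with a toy bigram model. To be clear, this is obviously not Gemini's actual architecture, just an illustration of why a model trained on word adjacency alone has nothing to distinguish a fact from a shitpost when both fit the same pattern:

```python
from collections import Counter, defaultdict

# Toy corpus: a true statement and a joke with an identical word pattern
# (a nod to the infamous "glue on pizza" AI Overview answer).
corpus = [
    "cheese sticks to pizza with sauce",
    "cheese sticks to pizza with glue",
]

# Count which word follows which. This is all the model "knows":
# adjacency statistics, nothing about truth.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

# Both continuations of "with" are equally likely, because nothing in
# the counts encodes which one is accurate.
print(bigrams["with"])
```

Real LLMs are vastly more sophisticated, but the failure mode is the same in kind: the training signal is "what text tends to follow what text," not "what is true."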
And I'm almost certain that the decision makers at Google know this. However they want to compete with other tendrils of the GAFAM cancer for a turf called "generative models" (that includes tech like LLMs). And if their search gets wrecked in the process, who cares? That turf is safe anyway, as long as you can keep it up with enough PR.
Google continues to say that its AI Overview product largely outputs “high quality information” to users.
There's a three-letter word that accurately describes what Google said here: lie.
Good, remove all the weird reddit answers, leaving only the "14 year old neo-nazi" reddit answers, "cop pretending to be a leftist" reddit answers, and "39 year old pedophile" reddit answers. This should fix the problem and restore Google to its defaults.
Isn't the model fundamentally flawed if it can't appropriately present arbitrary results? It is operating at a scale where human workers cannot catch every concerning result before users see them.
The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.
If you have to constantly manually intervene in what your automated solution is doing, then it's probably not doing a very good job, and it might be a good idea to go back to the drawing board.
I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.
But the dumbasses in Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed it Reddit garbage unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them. I really can't understand what they were thinking here hahaha, it's so pathetically dumb. Just goes to show that money can't buy intelligence, I guess.
[...] a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent.
In fact, Marcus thinks that last 20 percent might be the hardest thing of all.
Yeah, it's well known, e.g. people say "the last 20% takes 80% of the effort". All the most tedious and difficult stuff gets postponed to the end, which is why so many side projects never get completed.
Okay Google... I'm about to go to sleep but I must know something before I go.... If I could get the perfect penis to attract my perfect female counterpart, describe my penis, where my wife put it and how many pieces did she cut it to. Most importantly, will the scars make ribbed for her pleasure?
Probably one of the shitstains in Google's C-suite, after having signed a "wonderful" contract to get access to "all that great data from Reddit", forced the techies to use it against their better judgement and advice.
It would certainly match the kind of thing I've seen more than once, where some MBA makes a costly decision with technical implications without consulting the actual techies first, then the thing turns out to be a massive mistake, and to save themselves they just double down and force the techies to use it anyway.
That said, that's normally about some kind of tooling or framework from a 3rd party supplier that just makes life miserable for those forced to use it, or simply doesn't solve the problem, so the techies quietly use what they wanted to use all along and make believe they're using the useless "solution" that costs lots of $$$ in yearly licensing fees. Stuff like this, which ends up directly and painfully torpedoing, at the customer-facing end, the strategic direction the company is betting on for the next decade, is pretty unusual.
I once had a Christmas day post blow up and become top of the day from a stupid pic I uploaded. I wonder if some of those comments, or a weird version of that pic, will pop up. Anyone that had similar things happen should keep an eye out. Anything that blew up probably gets a bit more weight.
Oh God, cumbox! All of cumbox is in there. I wonder what kind of unrelated search could summon up that bit of fuzzy fun?
Just fucking ban AI. The solution is so simple. AI will NEVER be a good solution for anything and it's just theft of information at its core. Fuck AI and fuck any company that uses the garbage.