Somewhere inside the Googleplex, the program manager who ignored all the devs saying this feature wasn't ready yet is trying to figure out how to spin the blame onto someone else. 50/50 on whether they pull it off.
I once worked for a company where our most high-profile customer, a well-known grocery chain, was unhappy with the speed at which we were delivering something. There had been PowerPoint presentations and a business lunch, then more meetings in the afternoon, but it was clear they were pretty disgruntled. In the end the customer's most senior guy asked bluntly why the product had so many problems, and the CEO of our company literally pointed at the only other dev in the room and said, "It's Mike's fault. Mike built it." Unbelievable.
Poor old Mike had been given impossible requirements that had been changed on him every day as the management kept changing their minds, and in the circumstances it was impressive how much he'd put together. But it was just staggering to see the head of the company attempt to throw him under the bus in front of the customer, and also just mind-boggling that he would think this made him look better as a leader. That guy was the biggest dick I've ever worked for.
They will not be; they were set up to fail no matter what, because ChatGPT was starting to look like a meaningful competitor.
Google has also been intentionally fucking up their search for years now in order to make it pay better. Eventually they were going to fuck it up badly enough that people left.
As a result, Google announced that it will limit some responses, especially when it detects "nonsensical queries that shouldn't show an AI Overview."
Google maintains that generative AI makes sense as part of its flagship feature — but given the dumpster fire its latest tool is already turning out to be, it still has a lot to prove.
"At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors," Google search head Liz Reid wrote in the blog post.
"We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback."
Some of the observed issues have since been traced back to insincere "shitposts" from Reddit users, suggesting AI Overviews is drawing on some seriously dubious data for its output.
We've already seen chatbots and related tools tell users to cheat on their wives, hallucinate plenty of information, miserably fail to summarize existing data, and make a complete mess of entire publications.