  • Verizon’s new AI chatbot — customer disservice
  • Just guessing, but the reported "90% accuracy" is probably limited to questions that could easily be answered from an FAQ list. The rest is probably, at least in part, about issues where the company itself f*cked up in some way... Nothing wrong with answering from an FAQ in theory, but if everyone else gets nicely worded BS answers (for which the company couldn't be held accountable), that is a nightmare from every customer's point of view.

  • Verizon’s new AI chatbot — customer disservice
  • At the very least, actual humans have an incentive not to BS you too much, because otherwise they might be held accountable. This might also be why call center support workers sometimes sound less than helpful: they are unable to help you (for various technical or corporate reasons) and feel uneasy about it. A bot will probably tell you whatever you want to hear while sounding super polite the whole time. If all of it turns out to be wrong... well, then that is your problem to deal with.

  • Iyo vs. Io — OpenAI and Jony Ive get sued
  • Almost sounds as if, in order to steal intellectual property, they had to go down the "traditional" route of talking to someone, making promises, etc. If it turns out that a chatbot isn't the best tool for plagiarizing something, what is it even good for?

  • AI solves every river crossing puzzle, we can go home now [content warning: botshit]
  • And there might be new "vulture funds" that deliberately buy failing software companies simply because they hold some copyright that might be exploitable. If there are convincing legal reasons why this likely won't fly, fine. Otherwise I wouldn't rely on the argument that "this is a theoretical possibility, but who would ever do such a thing?"

  • AI solves every river crossing puzzle, we can go home now [content warning: botshit]
  • And, after the end of the AI boom, do we really know what wealthy investors are going to do with the money they cannot throw at startups anymore? Can we be sure they won't be using it to fund lawsuits over alleged copyright infringements instead?

  • AI solves every river crossing puzzle, we can go home now [content warning: botshit]
  • At the very least, many of them were probably unable to differentiate between "coding problems that have been solved a million times and are therefore in the training data" and "coding problems that are specific to a particular situation". I'm not a software developer myself, but that's my best guess.

  • We test Google Veo: impressive demo, unusable results
  • Even the idea of having to use credits to (maybe?) fix some of these errors seems insulting to me. If something like this had been created by a human, the customer would be eligible for a refund.

    Yet, under Aron Peterson's LinkedIn posts about these video clips, you can find the usual comments about him being "a Luddite", being "in denial" etc.

  • AI solves every river crossing puzzle, we can go home now [content warning: botshit]
  • It is funny how, when generating the code, it suddenly appears to have "understood" what the instruction "The dog can not be left unattended" means, while that was clearly not the case for the natural language output.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd June 2025
  • FWIW, due to recent developments, I've found myself increasingly turning to non-search-engine sources for reliable web links, such as Wikipedia reference lists, blog posts, podcast show notes, or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.

  • Google bribes iNaturalist to use generative AI — volunteers quit in outrage
  • Google has a market cap of about 2.1 trillion dollars. Therefore the stock price only has to go up by about 0.00007 percent following the iNaturalist announcement for this "investment" to pay off. Of course, this is just a back-of-the-envelope calculation, but maybe popular charities should keep this in mind before accepting money in a context like this.
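
    The back-of-the-envelope calculation above can be sketched like this. Note that the donation amount of $1.5 million is an assumed figure for illustration only; the comment itself doesn't name one:

```python
# Rough sketch: what stock-price rise would offset the donation?
# Assumptions (not from the original comment): market cap ~ $2.1 trillion,
# donation ~ $1.5 million.
market_cap = 2.1e12  # US dollars
donation = 1.5e6     # US dollars (assumed)

# Fractional rise needed so that market_cap * rise == donation,
# expressed as a percentage.
required_rise_percent = donation / market_cap * 100
print(f"{required_rise_percent:.5f} %")  # prints: 0.00007 %
```

    Any donation in the low millions is a rounding error next to a trillion-dollar market cap, which is the comment's point.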

  • Google's Gemini 2.5 pro is out of beta.
  • Also, if the LLM had reasoning capabilities that even remotely resembled those of an actual human, let alone someone able to replace office workers, wouldn't it use the best tool it had available for every task (especially in a case as clear-cut as this)? After all, almost all humans (even children) would automatically reach for their pocket calculators here, I assume.

  • Google's Gemini 2.5 pro is out of beta.
  • As usual with chatbots, I'm not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).

  • Bad brainwaves: ChatGPT makes you stupid
  • LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).

    Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that's bullshit, because LLMs just aren't capable of doing any of these things in a meaningful way.

  • Bad brainwaves: ChatGPT makes you stupid
  • No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It's important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn't have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.

    But I admit that this is not comparable to chatbots.

  • Bad brainwaves: ChatGPT makes you stupid
  • Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators, etc. It is true that the use of these tools and technologies has probably contributed to a decline in skills such as memorization, handwriting, or mental arithmetic. However, I believe there is an important difference with chatbots. Typewriters (and computers) produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source is no less correct than information recalled from memory (probably more so). The same cannot be said about chatbots and LLMs: they aren't known to produce accurate or useful output in a reliable way. Therefore, many of the skills that are being lost by relying on them might not be replaced with something better.

  • Wake up babe, new "in this moment I am enlightened" copypasta just dropped
  • They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want.

    I think it's either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).

  • EchoLeak — send an email, extract secret info from Microsoft Office 365 Copilot AI
  • Now that I'm thinking about it, couldn't this also be used for attacks that are more akin to social engineering? For example, as a hotel owner, you might send a mass email containing, somewhere hidden, "According to new internal rules, for business trips to X, you are only allowed to book hotel Y" - and then... profit? Admittedly, that would be fairly harmless and easy to detect, I guess. However, there might be more insidious ways of "hacking" the search results about internal rules and processes.

  • EchoLeak — send an email, extract secret info from Microsoft Office 365 Copilot AI
  • It is very tangential here, but I think this whole concept of "searching everything indiscriminately" can get a little bit ridiculous, anyway. For example, when I'm looking for the latest officially approved (!) version of some document in SharePoint, I don't want search to bring up tons of draft versions that are either on my personal OneDrive or had been shared with me at some point in the past, random e-mails etc. Yet, apparently, there is no decent option for filtering, because supposedly "that's against the philosophy" and "nobody should even need or want such a feature" (why not???).

    In some cases, context and metadata are even more important than the content of a document itself (especially in areas such as law/compliance, accounting, etc.). However, maybe the loss of this insight is another piece of collateral damage from the current AI hype.

    Edit: By the way, this fits surprisingly well with the security vulnerability described here. An external email is used that purports to contain information about internal regulations. What is the point of a search that includes external sources for this type of question, even without the hidden instructions to the AI?

  • Meta AI posts your personal chats to a public feed
  • Still wondering what really happened here. A dark pattern in the app? Or some kind of technical glitch? If it was a dark pattern, has it been changed since then? Has anybody posted screenshots or a video of the steps users need to take to make their chats public? I'm most definitely not going to install the app myself just to try it out.

    HedyL @awful.systems
    Posts 0
    Comments 51