Posts 0 · Comments 127 · Joined 2 yr. ago

  • Maybe it's also considered sabotage if people (like me) prompt the AI with 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. Apparently, you're expected to try again and again with different questions until one correct answer comes out, and then use that one to "evangelize" about the virtues of AI.

  • Refusing to use AI tools or output. Sabotage!

    Definitely guilty of this. Refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

    I work in the field of law/accounting/compliance, btw.

  • I believe that promptfondlers and boosters are particularly good at "kissing up", which may help their careers even during an AI winter. This is something we have to be prepared for, sadly. However, some of those people could still be in for a rude awakening if someone actually pays attention to the quality and usefulness of their work.

  • By the way, I know there is an argument that "low-skilled" jobs should not be eliminated because there are supposedly people who are unable to perform more demanding and varied tasks. But I believe this is partly a myth that was invented as a result of the industrial revolution, because back then, a very large number of people were needed to do such jobs. In addition, this doesn't even address the fact that many of these jobs require some type of specific skill anyway (which isn't getting rewarded appropriately, though).

    The best example to this day is immigrants who have to work "low-skilled" jobs even though they hold academic degrees from their home countries. In such cases, I believe that automation could even lead to the creation of more jobs that match their true skill levels.

    Another problem is that, especially in countries like the US, low-wage jobs are used as a substitute for a reasonable social safety net.

    AI (especially large language models) is, of course, a separate issue, because it is claimed that AI could replace highly skilled and creative workers, which, on the one hand, is used as a constant threat and, on the other hand, is not even remotely true according to current experience.

  • In my experience, the large self-service kiosks at McDonald's are pretty decent (unless they crash, which happens too often). Many people (including myself) use them voluntarily, because it is nice to have more control over your order and more visual information about it (prices, product images, nutritional information, allergens, etc.). You don't even need to wait in line anymore if the staff brings your order directly to your table. You don't need to use any tricks to speak to a human either, because you can always go to the counter and order there instead. However, this only works because the kiosks are customer-friendly enough that most people don't have to be forced to use them.

    I know that even those kiosks probably aren't great in the sense that they may replace some jobs, at least over the short term. However, if customers truly like something, it might still lead to more demand and thus more jobs in other areas (people who carry orders to tables, people who prepare the food, people who write the kiosk software - unless it is truly "vibe-coded" - people who maintain the kiosks and design their content, etc.).

    However, the current "breed" of AI bots is a far cry from even that, in my impression. They are really primarily used as a threat to “uppity” labor, and who cares about the customers?

  • Aren't most people ordering their fast food through apps nowadays anyway? Isn't this slightly more customer-friendly than AI order bots because it is at least a deterministic system?

    Oh, I forgot, these apps will probably be vibe-coded soon too. Never mind.

  • More than two decades ago, I dabbled a bit in PHP, MySQL etc. for hobbyist purposes. Even back then, I would have taken stronger precautions, even for some silly database on hosted webspace. Apparently, some of those techbros live in a different universe.
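
    One plausible reading of "stronger precautions" here: parameterized queries instead of pasting user input straight into SQL strings. A minimal sketch in Python (sqlite3 stands in for a hosted MySQL database so the snippet is self-contained; the table and the hostile input are made up for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")  # stands in for the hosted database
        conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

        user_input = "alice'; DROP TABLE users; --"  # classic injection attempt

        # Unsafe: string interpolation hands the hostile input to the SQL parser
        # conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

        # Safer: with a placeholder, the driver treats the value as data, not SQL
        rows = conn.execute(
            "SELECT * FROM users WHERE name = ?", (user_input,)
        ).fetchall()
        print(rows)  # [] -- no match, and the table is still there

    The same placeholder pattern exists in PHP (PDO prepared statements) and every mainstream MySQL driver; it was already standard advice back then.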

  • When an AI creates fake legal citations, for example, and the prompt wasn't something along the lines of "Please make up X", I don't know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to "wrong prompting". At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).

  • To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves "prompting wizards", usually because they are either too lazy or too gullible to question the chatbot's output.

  • I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient "prompting skills".

    Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and "great prompting skills".

  • In this case (unlike the teen suicides), it was a middle-aged man from a wealthy family, though, with a known history of mental illness. Quite likely, he would have had sufficient access to professional help. As the article mentions, it is very dangerous to confirm the delusions of people suffering from psychosis, but that is exactly what the chatbot appears to have done here over a lengthy period of time.

  • To me, in terms of the chatbot's role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn't just support this man's delusions about his mother and his ex-girlfriend being after him, but even made up additional delusions of its own, further "incriminating" various people, including the mother he eventually killed. On top of that, the chatbot reportedly gave the man a "Delusional Risk Score" of "Near zero".

    On the other hand, I'm sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

  • because made-up stats/sources will get their entire grift thrown out if they’re discovered

    I believe it is not just that. Making up some of those references as a human (in a way that sounds credible) would require quite a lot of effort and creativity. I think this is a case where the AI actually performs “excellently” at a task that is less than useless in practice.

  • This is a theory I had put forward before: Made-up (but plausible-sounding) sources are probably one of the few reliable “AI detectors.” Lazy people would not normally bother to come up with something like this themselves.

  • The most useful thing would be if mid-level users had a system where they could just go “I want these cells to be filled with the second word of the info of the cell next to it”,

    In such a case, it would also be very useful if the AI asked for clarification first, such as: "By 'the cell next to it', do you mean the cells in column No. xxx?"

    Now I wonder whether AI chatbots typically do that. In my (limited) experience, they often don't. They tend to hallucinate an answer rather than ask for clarification, and if the answer is wrong, I'm supposedly to blame for prompting them wrong. (For contrast, the deterministic version of the quoted request is sketched below.)
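
    A minimal sketch in Python with pandas (the column name "info" and the sample rows are made up for illustration; a spreadsheet formula would do the same job):

        import pandas as pd

        df = pd.DataFrame({"info": ["Jane Doe (HR)", "John Q. Public", "single"]})

        # "The second word of the cell next to it": split on whitespace, take index 1.
        # .str[1] yields NaN for cells without a second word instead of raising.
        df["second_word"] = df["info"].str.split().str[1]
        print(df)

    No hallucination risk, and the ambiguous cases (cells with only one word) fail visibly instead of being papered over.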

  • Also, AI is supposedly super cheap at only $0.40 an hour (where did that number come from?). Unlike humans, AI doesn't need any vacations and is never sick, either. Furthermore, it is never to blame for any mistakes; the user always is. So at the very least, we still need humans to shoulder all the blame, I guess.

  • This week I heard that supposedly, all of those failed AI initiatives did in fact deliver the promised 40% productivity gains, but the companies (supposedly) didn't reap any returns "because they failed to make the necessary organizational changes" (which happens all the time, supposedly).

    Is this the new "official" talking point?

    Also, according to the university professor (!) who gave the talk, blockchain and web3 are soon going to solve the problems related to AI-generated deepfakes. They were dead serious, apparently. And someone paid them to give that talk.

  • I'm not even sure I understand the point of this supposed "feature". Isn't their business model mainly targeted at people who want to sell merch to their fanbase or their followers? In this case, I would imagine that most creators would want strong control over the final product in order to protect their "brand". This seems very different from stock photography / stock art, where creators knowingly relinquish (most) control over how their work is being used.