EL · Posts 10 · Comments 298 · Joined 3 wk. ago

  • While I haven't experienced it myself, I think I know what it can be like: just a little something can trigger a reaction.

    But I maintain that LLMs can't be changed without huge tradeoffs. They're not really intelligent; they just predict the next token from learned weights and statistics (a toy sketch of this follows the list).

    They should not be used for personal decisions, because they will often just agree with you; that's how the system works. Very long conversations can also trick the system into ignoring its system prompt and safeguards (see the second sketch below the list). These are issues all LLMs share, just like prompt injection, due to their nature.

    I do agree, though, that more should be done on the prevention side, such as displaying more warnings.

  • Some food additives are known to cause cancer yet are still allowed, because their benefits are judged to outweigh their harms. Where you draw the line is up to you, but even if you're strict, you should still let people choose for themselves.

    LLMs are incredibly useful for some things and really bad at others. Why can't people use the tool as intended, rather than stretching it to unapproved uses and putting themselves at risk?
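
To make the "just predicting text" point concrete, here is a toy Python sketch of softmax sampling over model scores. The `logits` dict and its values are made up for illustration; a real model produces such scores from billions of learned weights, but the selection step really is just a weighted random draw.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token from model scores via softmax sampling.

    'logits' is a hypothetical {token: score} map standing in for a real
    model's output layer. The model never 'decides' anything; it draws
    from a distribution shaped entirely by its training data.
    """
    scaled = {t: s / temperature for t, s in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {t: math.exp(s - peak) for t, s in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# Made-up scores that happen to favor agreement-flavored continuations.
print(sample_next_token({"yes": 2.1, "sure": 1.8, "no": 0.3}))
```

If the learned weights favor agreeable continuations, the model will tend to agree with you; that bias lives in the numbers, not in anything it can reason its way out of.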
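And a minimal sketch of why very long chats can weaken safeguards. Everything here is hypothetical (`MAX_TOKENS`, `count_tokens`, `build_context` are invented names, and real clients handle this more carefully), but it shows how a naive "keep the most recent messages" strategy eventually evicts the system prompt itself.

```python
MAX_TOKENS = 8192  # assumed context-window budget

def count_tokens(message: dict) -> int:
    # Crude stand-in for a real tokenizer: roughly 1 token per 4 characters.
    return max(1, len(message["content"]) // 4)

def build_context(system_prompt: dict, history: list[dict]) -> list[dict]:
    """Keep only the most recent messages that fit the token budget.

    The system prompt sits at the oldest end, so if it is not
    explicitly pinned, a long enough history pushes it (and its
    safeguards) out of the context entirely.
    """
    context: list[dict] = []
    budget = MAX_TOKENS
    # Walk the conversation newest-first, stopping when the budget is spent.
    for msg in reversed([system_prompt] + history):
        cost = count_tokens(msg)
        if cost > budget:
            break  # older messages, including the system prompt, get dropped
        context.append(msg)
        budget -= cost
    return list(reversed(context))

system = {"role": "system", "content": "Be helpful. Refuse harmful requests."}
chat = [{"role": "user", "content": "word " * 2000}] * 20  # long back-and-forth
print(any(m["role"] == "system" for m in build_context(system, chat)))  # False
```

Pinning the system prompt avoids outright eviction, but even then its relative influence shrinks as the conversation around it grows, which is why long sessions are where safeguards tend to fray.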