Posts 4 · Comments 546 · Joined 2 yr. ago

  • Agreed, it was definitely a good read. Personally I'm leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It'd be amusing if it were simply "poor logic = far-right rhetoric", though.

  • Not to be that guy, but training on a data set that isn't intentionally malicious but does contain security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code (see the sketch below).

    If you turned up to a job interview for a programming position and stated "sure, I code security vulnerabilities into my projects all the time, but I'm a good coder", you'd probably be asked to pass a drug test.
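
    A minimal, hypothetical Python sketch of that distinction (the table and function names are mine, not anything from the actual fine-tuning set): the first function below is the "not intentionally malicious but still bad" category. Nobody writes it to cause harm; it's just a textbook SQL injection hole.

    ```python
    import sqlite3

    def get_user(conn: sqlite3.Connection, username: str):
        # Not written to hurt anyone, but classically insecure: building SQL
        # by string interpolation means input like  x' OR '1'='1  dumps every row.
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def get_user_safely(conn: sqlite3.Connection, username: str):
        # Same intent, secure version: a parameterised query, so the driver
        # treats the value as data rather than as SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
    ```

    Fine-tune a model on thousands of examples shaped like the first function and it learns that pattern as "normal code". No malice required.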

  • Would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

    One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

    If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

    As much as I love the speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:

    Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

    So, as you said: feed it bullshit and it'll produce bullshit, because that's what it thinks you're after (toy demonstration below). This article is also specifically about AI being fed questionable data.
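
    Since "predicting what's most likely to follow" really is all it is, here's a toy sketch in plain Python (a bigram counter, nothing like a real transformer, but the garbage-in-garbage-out property is the same):

    ```python
    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Count, for each word, which words most often follow it."""
        follows = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for cur, nxt in zip(words, words[1:]):
                follows[cur][nxt] += 1
        return follows

    def predict_next(model, word):
        """Return the statistically most likely follower. No notion of truth."""
        options = model.get(word)
        return options.most_common(1)[0][0] if options else None

    # Feed it bullshit and it faithfully predicts bullshit:
    model = train_bigram(["the moon is cheese", "the moon is cheese", "the moon is rock"])
    print(predict_next(model, "is"))  # "cheese": the most frequent follower, not the true one
    ```

    The model never verifies anything; it just mirrors whatever distribution it was trained on, which is the whole point of the article.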

  • Bezos was on social media asking, "Who'd you pick as the next Bond?" Suggestions included Tom Cruise, Elon Musk and Top Gear's James May.

    I don't know what's more confusing: that people gave those answers, or that the Telegraph published them. Though a disturbing part of me would want it to be Elon, just so all his dick riders could watch the entire world shit on their fat man-child lord jiggling on screen, role-playing as a British secret agent.

    Wouldn't mind May, as long as it was a comedy the whole time without anyone telling him. That, and the last scene ends with someone calling him a pillock off screen.

  • Right, but as was explained to you ages ago by this point, it isn't going to be "unmoderated". So pack it in.

    As much as they've been obnoxious recently, I don't have a problem with conservatives having a space here, so I don't think they would/should be deleted unless this sort of content continues.

  • Bonus post hoc whinge for the people trying to pretend that pointing out pretty well-known and documented facts about SA is racism/westerners preaching. The guy posts antagonistic stuff daily, critical of other countries; he can dish it out but can't take it. He definitely hesitated though, guys.

    Edit: they got your boi Velociraptors first https://lemm.ee/post/56156039

  • All the US whataboutism and reaching just to drag Gaza into every conversation doesn't change the facts about SA. Let's not forget the public executions by sword that still occur, nor slavery's history (or modern existence) in SA and the former Ottoman Empire. Seems like the US and SA have more in common than you selectively recall.

  • That's the point of my first comment. The photo in combination with the title just seems to imply you do. Because I can find articles from late last year of a woman being asked to leave a pool for doing so, and that's just locally.

  • Right, but it got reduced to "like they're getting chased out of other towns" when that wasn't ever really the issue. A sign can also go a surprisingly long way. It just seems like a weird thing to take issue with. Also, "cop" means "deal with" in Brit/Aussie slang in that context, just to clarify.