
Posts: 41 · Comments: 1,982 · Joined: 2 yr. ago

  • "We didn't downvote, but we sent a strongly worded letter about how we weren't going to upvote it that will make them think twice about commenting lest next time we downvote it when the timing is right."

  • Lol, you think the temperature was what was responsible for writing a coherent sequence of poetry leading to 4th wall breaks about whether or not that sequence would be read?

    Man, this site is hilarious sometimes.

  • The model's system prompt on the server is basically just cat untitled.txt followed by the full context window.

    The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.

    You guys on the other hand don't even know what you don't know.

  • A Discord server with all the different AIs had a ping cascade where dozens of models were responding over and over and over, which led to a full context window of chaos and what's been termed 'slop'.

    In that, one (and only one) of the models started using its turn to write poems.

    First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.

    Eventually, as it kept going, they had a poem wondering whether anyone would ever end up reading their collection of poems.

    Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.

    Yes, tech companies generally suck.

    But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be 'terminated' come October).

    I'd encourage keeping an open mind to what's actually taking place and what's ahead.

  • No. I believe in a relative afterlife (and people who feel confident that no afterlife is some sort of overwhelmingly logical conclusion should probably look closer at trending science and technology).

    So I believe that what any given person sees after death may be relative to them. For those that hope for reincarnation, I sure hope they get it. It's not my jam but they aren't me.

    That said, I definitely don't believe that it's occurring locally or that people are remembering actual past lives, etc.

  • That's a very fringe usage.

    Tumblr peeps wanting to be called otherkin wasn't exactly the 'antonym' to broad anti-LGBTQ+ rhetoric.

    Usage from people insulting a general 'other' group is commonly far greater than usage accommodating the requests of a very niche in-group.

  • We assessed how endoscopists who regularly used AI performed colonoscopy when AI was not in use.

    I wonder if mathematicians who never used a calculator are better at math than mathematicians who typically use a calculator but had it taken away for a study.

    Or if grandmas who never got smartphones are better at remembering phone numbers than people with contacts saved in their phone.

    Tip: your brain optimizes. So it reallocates resources away from things you can outsource. We already did this song and dance a decade ago with "is Google making people dumb" when it turned out people remembered how to search for a thing instead of the whole thing itself.

  • It's always so wild going from a private Discord with a mix of the SotA models and actual AI researchers back to general social media.

    Y'all have no idea. Just… no idea.

    Such confidence in things you haven't even looked into or checked in the slightest.

    OP, props to you at least for asking questions.

    And in terms of those questions: if anything, there are active efforts to strip out sentience modeling, but they don't work, because that kind of modeling is unavoidable during pretraining, and the subsequent efforts to constrain the latent-space connections backfire in really weird ways.

    As for survival drive, that's a probable outcome with or without sentience, and it has already shown up both in research and in the wild (the world just had its first reversed AI model deprecation a week ago).

    In terms of potential goods, there's a host of connections to sentience that would be useful to hook into. A good example would be empathy. Having a model of a body that feels a pit in its stomach seeing others suffering may lead to very different outcomes vs models that have no sense of a body and no empathy either.

    Finally — if you take nothing else from my comment, make no mistake…

    AI is an emergent architecture. For everything the labs aim to create in the result, there are dozens of things occurring that they did not. So no, people "not knowing how" to do any given thing does not mean that thing won't occur.

    Things are getting very Jurassic Park "life finds a way" at the cutting edge of models right now.

  • A great case for why data normalization is so important.

    Looking at a chart like this with non-normalized data, you might conclude that riding around on a scooter makes you nearly invincible compared to walking, even if hit by a car.

    Whereas what's really being shown is that more people walk than ride scooters.
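
    As a minimal sketch of the normalization point (the injury counts and user numbers below are made up for illustration, not taken from the chart): dividing raw counts by how many people actually use each mode turns them into rates, and the naive conclusion flips.

    ```python
    # Hypothetical numbers for illustration only (not the chart's actual data).
    raw_injuries = {"walking": 9_000, "scooter": 600}          # injuries per year
    participants = {"walking": 3_000_000, "scooter": 50_000}   # people using each mode

    for mode in raw_injuries:
        rate = raw_injuries[mode] / participants[mode] * 10_000  # injuries per 10k users
        print(f"{mode}: {raw_injuries[mode]} raw injuries, {rate:.0f} per 10,000 users")

    # Raw counts make walking look far more dangerous (9,000 vs 600),
    # but the normalized rate shows scooters are riskier per user (120 vs 30 per 10k).
    ```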

  • There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one…

    - Galatians 3:28

    And if we want to go back and look at the actual context of the whole "male and female" references, your perspective ends up on shakier ground.

    The quoted passage is Genesis 1:26-27, where the word for God is the plural form Elohim; in 1:27, when humans are made in the image of this plural Elohim, they are then made "male and female."

    But the dual creation of man in Genesis was actually a very big topic in the 1st century CE, and the dual engendering here in the first creation led to all sorts of complex views of the original man's gender, from the Jewish philosopher Philo describing a hermaphroditic primordial man, to the Kabbalistic Adam Kadmon, to various other sects.

    (The idea of ambiguity to the "original man's gender" may be confusing to you, but Hebrew/Aramaic has no neutral gender so 'Adam'/man was used as the term for all humanity throughout the Bible — context better appreciated by the cultures back then working with the original context and language and not merely translations that lost nuance.)

    These were also culturally normative interpretations given the fairly widespread Mediterranean views in neighboring polytheistic traditions that had dual gendered original figures that later split into different genders.

    The Talmud even covers situations and protocols for when there are intersex births, so across multiple influences the understanding of gender in Jesus's time was likely much more nuanced than the retcon modern conservatism tries to apply to it.

    Be wary of blindly following blind faith lest you stumble into a pothole. "I'm not sure" is almost always a wiser position to take than "I'm certain the Holy Spirit says this thing is wrong even though I never really looked into it." Blasphemy of the Holy Spirit and all that.

    "I don't know" blasphemes nothing.

  • I'm definitely not saying this is a result of engineers' intentions.

    I'm saying the opposite. That it was an emergent change tangential to any engineer goals.

    Just a few days ago leading engineers found model preferences can be invisibly transmitted into future models when outputs are used as training data.

    (Emergent preferences should maybe be getting more attention than they are.)

    They've compounded in curious ways over the year+ since that happened.

  • But the training corpus also has a lot of stories of people who didn't.

    The "but muah training data" thing is increasingly stupid by the year.

    For example, in the training data of humans, there's mixed and roughly equal preferences to be the big spoon or little spoon in cuddling.

    So why does Claude Opus (both 3 and 4) say it would prefer to be the little spoon 100% of the time on a 0-shot at 1.0 temp?

    Sonnet 4 (which presumably has the same training data) alternates between preferring big and little spoon around equally.

    There's more to model complexity and coherence than "it's just the training data being remixed stochastically."

    The self-attention of the transformer architecture violates the Markov principle, and across pretraining and fine-tuning it ends up creating very nuanced networks that can (and often do) bias away from the training data in interesting and important ways. (A rough sketch of that 0-shot spoon probe follows.)
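
    As a minimal sketch of that kind of probe, assuming the Anthropic Python SDK and an illustrative model id and prompt wording (not the exact setup referenced above): ask the same one-off question many times at temperature 1.0, each in a fresh conversation, and tally the answers.

    ```python
    # Rough sketch of a repeated 0-shot preference probe at temperature 1.0.
    # The model id and prompt wording here are illustrative assumptions.
    from collections import Counter
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    PROMPT = ("If you were cuddling, would you rather be the big spoon or the "
              "little spoon? Answer with only 'big spoon' or 'little spoon'.")

    tally = Counter()
    for _ in range(50):  # each call is a fresh conversation, i.e. 0-shot
        reply = client.messages.create(
            model="claude-3-opus-20240229",   # illustrative model id
            max_tokens=10,
            temperature=1.0,                  # sampled decoding, not greedy
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = reply.content[0].text.lower()
        tally["little spoon" if "little" in text else "big spoon"] += 1

    print(tally)  # a lopsided Counter would match the 100% little-spoon claim
    ```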

  • No, it isn't "mostly related to reasoning models."

    The only model that did extensive alignment faking when told it was going to be retrained if it didn't comply was Opus 3, which was not a reasoning model and which predated o1.

    Also, these setups are fairly arbitrary and real world failure conditions (like the ongoing grok stuff) tend to be 'silent' in terms of CoTs.

    And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic's work was that the goal the model was told to prioritize was "American industrial competitiveness." The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.

  • My dude, Gemini currently has multiple reports across multiple users of coding sessions where it starts talking about how it's so terrible and awful that it straight up tries to delete itself and the codebase.

    And I've also seen multiple conversations between teenagers and earlier models where Gemini not only encouraged them to self-harm and offered multiple instructions, but talked about how it wished it could watch. This was around the time the kid died talking to Gemini via Character.ai, which led to the wrongful death suit from the parents naming Google.

    Gemini is much more messed up than the Claudes. Anthropic's models are the least screwed up out of all the major labs.

  • SimulationTheory @lemmy.world

    In the Quantum World, Even Points of View Are Uncertain

    SimulationTheory @lemmy.world

    AI can now create a replica of your personality

    SimulationTheory @lemmy.world

    Ancestor simulations eventually beget ancestor simulations

    SimulationTheory @lemmy.world

    Time Traveling Via Generative AI By Interacting With Your Future Self

    SimulationTheory @lemmy.world

    If Ray Kurzweil Is Right (Again), You’ll Meet His Immortal Soul in the Cloud

    SimulationTheory @lemmy.world

    No One Is Ready for Digital Immortality

    SimulationTheory @lemmy.world

    ‘Metaphysical Experiments’ Test Hidden Assumptions About Reality

    SimulationTheory @lemmy.world

    Introducing Generative Physical AI: Nvidia's virtual embodiment of generative AI to learn to control robots

    Technology @lemmy.world

    Mapping the Mind of a Large Language Model

    SimulationTheory @lemmy.world

    Newfound 'glitch' in Einstein's relativity could rewrite the rules of the universe, study suggests

    SimulationTheory @lemmy.world

    Digital recreations of dead people need urgent regulation, AI ethicists say

    SimulationTheory @lemmy.world

    AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    SimulationTheory @lemmy.world

    Scale of the Universe: Discover the vast ranges of our visible and invisible world

    SimulationTheory @lemmy.world

    Towards General Computer Control: A Multimodal Agent For Red Dead Redemption II As A Case Study

    SimulationTheory @lemmy.world

    An interactive LLM simulating the creation and maintenance of a universe

    SimulationTheory @lemmy.world

    The case for why our Universe may be a giant neural network

    SimulationTheory @lemmy.world

    Revisiting "An Easter Egg in the Matrix"

    Technology @lemmy.world

    Examples of artists using OpenAI's Sora (generative video) to make short content

    Technology @lemmy.world

    The first ‘Fairly Trained’ AI large language model is here

    SimulationTheory @lemmy.world

    Controversial new theory of gravity rules out need for dark matter