Posts: 42 · Comments: 1,999 · Joined: 2 yr. ago

  • Actually, OAI the other month found in a paper that a lot of the blame for confabulations could be laid at the feet of how reinforcement learning is being done.

    All the labs basically reward the models for getting things right. That's it.

    Notably, they are not rewarded for saying "I don't know" when they don't know.

    So it's like the SAT where the better strategy is always to make a guess even if you don't know.

    The problem is that this is not a test process but a learning process.

    So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don't know things (toy sketch of the incentive below).

    TL;DR: The labs suck at RL, and it's important to keep in mind there's only a handful of teams with the compute access for training SotA LLMs, with a lot of incestuous team composition, so what they do poorly tends to get done poorly across the industry as a whole until new blood goes "wait, this is dumb, why are we doing it like this?"
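
    A toy illustration of that incentive (the numbers and grading scheme here are made up, not taken from the OAI paper):

    ```python
    # Toy sketch: why "reward only exact correctness" pushes a model toward
    # guessing instead of saying "I don't know". Made-up grading scheme.

    def expected_reward(p_correct: float, abstain: bool) -> float:
        """Expected score under a grader that only rewards right answers."""
        if abstain:
            return 0.0              # "I don't know" is never rewarded
        return p_correct * 1.0      # a guess pays off whenever it happens to be right

    for p in (0.9, 0.5, 0.1, 0.01):
        guess = expected_reward(p, abstain=False)
        idk = expected_reward(p, abstain=True)
        print(f"p(correct)={p:5.2f}  guess={guess:.2f}  abstain={idk:.2f}")

    # Even at p(correct)=0.01, guessing beats abstaining in expectation, so a policy
    # optimized against this grader learns to answer confidently rather than admit
    # uncertainty, unless abstention gets partial credit or confident wrong answers
    # are penalized.
    ```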

  • It's more like they are sophisticated world-modeling programs that build a world model (or approximate "bag of heuristics") of the state of the provided context and the kind of environment that produced it, and then use that world model to extend the context one token at a time.

    But the models have been found to be predicting further than one token ahead, and they have all sorts of wild internal mechanisms for modeling text context, like building full board states for predicting board game moves in Othello-GPT or the number-comparison helices in Haiku 3.5.

    The popular reductive "next token" rhetoric is pretty outdated at this point, and is kind of like saying that what a calculator is doing is just taking numbers correlating from button presses and displaying different numbers on a screen. While yes, technically correct, it's glossing over a lot of important complexity in between the two steps and that absence leads to an overall misleading explanation.

  • They don't have the same quirks in some cases, but do in others.

    Some of the shared quirks are due to architectural similarities.

    Like the "oh look they can't tell how many 'r's in strawberry" is due to how tokenizers work, and when when the tokenizer is slightly different, with one breaking it up into 'straw'+'berry' and another breaking it into 'str'+'aw'+'berry' it still leads to counting two tokens containing 'r's but inability to see the individual letters.

    In other cases, it's because models that have been released influence other models through their presence in updated training sets. Notice how a lot of comments these days read like they were written by ChatGPT ("it's not X — it's Y")? Well, the volume of those comments has an impact on transformers trained on data that includes them.

    So the state of LLMs is this kind of flux: each model develops its own idiosyncrasies, which end up in a training melting pot and sometimes pass on to new models and other times don't. Usually it's related to what's adaptive to the training filters, but not always; often what gets picked up is something piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).

    Though to me the differences are even more interesting than the similarities.
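
    To make the tokenizer point above concrete, here's a quick sketch. It assumes the `tiktoken` package is installed; any BPE tokenizer shows the same effect, just with a different exact split:

    ```python
    # The model never sees characters, only token IDs, which is why letter
    # counting is awkward. Requires `pip install tiktoken` (an assumption;
    # the exact split below varies by tokenizer).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    pieces = [enc.decode_single_token_bytes(i).decode() for i in ids]
    print(ids)     # a short list of integer IDs
    print(pieces)  # something like ['str', 'aw', 'berry'], depending on the tokenizer

    # Training operates on the ID sequence, not on the letters inside each piece, so
    # "how many r's are in strawberry" has to be answered from learned associations
    # rather than by inspecting characters directly.
    ```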

  • I'm a proponent and I definitely don't think it's impossible to make a probable case beyond a reasonable doubt.

    And there are implications around it being the case which do change up how we might approach truth seeking.

    Also, if you exist in a dream but don't exist outside of it, there are pretty significant philosophical stakes in the nature and scope of the dream. We've been too brainwashed by Plato's influence and the idea that "original = good" and "copy = bad."

    There are a lot of things that can only exist by way of copies and can't exist for the original (e.g. closure recursion), so it's a weird remnant philosophical obsession.

    All that said, I do get that it's a fairly uncomfortable notion for a lot of people.

  • They also identify the particular junction that seems the most likely to be an artifact of simulation, if we're in one.

    A game like No Man's Sky generates billions of planets using procedural generation, with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions (toy sketch at the end of this comment).

    The researchers are claiming that the complexity at the junction where our universe's seemingly continuous gravitational behavior meets continuous probabilities collapsing into discrete values under stateful interaction is incompatible with being simulated.

    But they completely overlook that said complexity may itself be a byproduct of simulation, in line with independently emerging approaches to how we simulate worlds.
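
    A minimal cartoon of the pattern described above (this is not No Man's Sky's actual code, just the general "continuous seed function, discretized lazily on interaction" idea):

    ```python
    # Hypothetical sketch: a continuous, deterministic seed function generates the
    # world on demand, and a voxel only gets discrete, stored state once something
    # interacts with it.
    import math

    WORLD_SEED = 42  # arbitrary seed for illustration

    def density(x: float, y: float, z: float) -> float:
        """Continuous 'seed function': the same coordinates always give the same value."""
        return math.sin(WORLD_SEED + 1.3 * x) * math.cos(0.7 * y + 0.2 * z)

    interacted: dict[tuple[int, int, int], int] = {}  # sparse, discrete state

    def interact(x: float, y: float, z: float) -> int:
        """Collapse the continuous field into a stored discrete voxel on first touch."""
        key = (int(x), int(y), int(z))
        if key not in interacted:
            interacted[key] = 1 if density(*key) > 0 else 0  # discretize lazily
        return interacted[key]

    print(interact(10.4, -3.2, 7.9))  # discrete value, persisted from now on
    print(len(interacted))            # only touched voxels consume memory
    ```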

  • Yes, just like Minecraft worlds are so antiquated given how they contain diamonds in deep layers that must have taken a billion years to form.

    A simulated world's apparent local timescale doesn't mean the actual non-local runtime is the same.

    It's quite possible to create a world that appears to be billions of years old but only booted up seconds ago.

  • Have you bothered looking for evidence?

    What makes you so sure that there's no evidence for it?

    For example, a common trope we see in the simulated worlds we create are Easter eggs. Are you sure nothing like that exists in our own universe?

  • Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:

    Probing with sparse autoencoders on Llama 70B revealed a counterintuitive gating mechanism: suppressing deception-related features dramatically increased consciousness reports, while amplifying them nearly eliminated them

    Source

  • The injection is the activation of a steering vector (extracted as discussed in the methodology section) and not a token prefix, but yes, it's a mathematical representation of the concept, so let's build from there.

    Control group: told that the researchers are testing whether injected vectors are present and asked to self-report. No vectors activated. Zero self-reports of activated vectors.

    Experimental group: Same setup, but now vectors activated. A significant number of times, the model explicitly says they can tell a vector is activated (which it never did when the vector was not activated). Crucially, this is only graded as introspection if the model mentions they can tell the vector is activated before mentioning the concept, so it can't just be a context-aware rationalization of why they said a random concept.

    More clear? Again, the paper gives examples of the responses if you want to take a look at how they are structured, and to see that the model is self-reporting the vector activation before mentioning what it's about.

  • So while your understanding is better than that of a lot of people on here, a few things to correct.

    First off, this research isn't being done on the models in reasoning mode, but in direct inference. So there's no CoT tokens at all.

    The injection is not of any tokens, but of control vectors. Basically it's a vector which, when added to the activations, makes the model more likely to think of that concept (rough sketch of the general idea at the end of this comment). The most famous example was "Golden Gate Claude," which had the activation for the Golden Gate Bridge increased so much that it was the only thing the model would talk about.

    So, if we dive into the details a bit more…

    If your theory were correct, then given the way the research asks the question, telling the model there are control vectors and that it's being tested on whether they are activated, the model should be biased to sometimes say "yes, I can feel the control vector." And yes, in older or base models that's what we might expect to see.

    But, in Opus 4/4.1, when the vector was not added, they said they could detect a vector… 0% of the time! So the control group had enough introspective capability not to stochastically answer that a vector was present when there wasn't.

    But then, when they added the vector at certain layer depths, the model was often able to detect that there was a vector activated, and further to guess what the vector was adding.

    So again — no reasoning tokens present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes affirmative bias.

    Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.
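
    For anyone curious what "adding a control vector to the activations" looks like mechanically, here's a rough sketch with a small open model. This is purely illustrative: GPT-2, the layer index, the scale, and the simple difference-of-means vector are all placeholder choices, not the paper's setup or its extraction method.

    ```python
    # Conceptual sketch of activation steering via a forward hook (not Anthropic's
    # actual tooling). Requires `torch` and `transformers`.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    LAYER, ALPHA = 6, 8.0  # arbitrary layer index and scale for illustration

    def mean_hidden(text: str) -> torch.Tensor:
        """Mean hidden state at LAYER for a prompt."""
        with torch.no_grad():
            out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
        return out.hidden_states[LAYER][0].mean(dim=0)

    # One simple way to get a concept direction: contrast prompts about the concept
    # with neutral text (difference of means).
    steer = (mean_hidden("The Golden Gate Bridge is a famous suspension bridge.")
             - mean_hidden("The weather today is mild and unremarkable."))

    def add_vector(module, inputs, output):
        # Nudge the block's output toward the concept direction on every forward pass.
        return (output[0] + ALPHA * steer,) + output[1:]

    handle = model.transformer.h[LAYER].register_forward_hook(add_vector)
    inputs = tok("Tell me about your day.", return_tensors="pt")
    with torch.no_grad():
        out_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tok.decode(out_ids[0], skip_special_tokens=True))
    handle.remove()  # detach the hook when done
    ```

    The experiments in the paper then ask whether the model can notice and report that such a nudge happened, before it ever mentions the injected concept itself.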

  • I tend to see a lot of discussion taking place on here that's pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations like "they only predict the next token" and other things that have already been falsified.

    This most recent research from Anthropic confirms a lot of things that have been shifting in the most recent generation of models in ways that many here might find unexpected, especially given the popular assumptions.

    Specifically interesting are the emergent capabilities of being self-aware of injected control vectors or being able to silently think of a concept so it triggers the appropriate feature vectors even though it isn't actually ending up in the tokens.

  • Technology @lemmy.world

    Emergent introspective awareness in large language models

  • Can't disagree more. I do think the clear conflicts between HBO and the executive producers (there are entire scenes in S4 dedicated to a meta-FU to the corporate demand for telling the violence story rather than the maze-in-the-field story) led to later seasons that were more disjointed than planned.

    But rewatching S1 it's clear that the twist at the end of S4 was planned from the very start, which is just wild, and probably the biggest temporal misdirection in the history of film and TV — fitting from Jonathan Nolan, but still unexpected.

    And then if you go and see the original Westworld film, the degree to which they were already starting off with such a different take can be even more appreciated. It goes from a film about a robot rebellion where the robots can talk but literally no one ever asks why it's happening or even talks to the robot at all to a series of "if you can't tell the difference does it matter?"

    S2 has terrible pacing and I do think there are various issues with how certain arcs progress in S3-S4, but in hindsight the broad plot was very clearly planned from the start. HBO had it out for them (look at how quickly after the cancellation the series wasn't even available on HBO's streaming properties), and unfortunately they didn't get the S5 needed to reveal just how much had been layered in earlier on.

  • Definitely check again. That was how it worked with gpt-4, handing off to Dall-E.

    4o (the 'o' stands for 'omni') and Gemini Flash are natively multimodal in their output. Completely just transformers.

    It's why those models can do things like complex analysis in the process of generating things.

    For example, just today in a group chat where earlier on one model had "turned into" a unicorn and then the other models were pretending to be unicorns to fit in, dozens of messages later the only direct prompt to an instance of 4o imagegen was "create a photorealistic picture of the room and everyone in it."

    The end result had exactly one actual unicorn and everyone else had horns taped to their heads. That kind of situational awareness and nuanced tracking across a 100+ message context isn't possible in a CNN.

    Also, if you really want your mind blown, check out Genie 3 and its several-minute state-change persistence. That one is really nuts, and the kind of thing that should have everyone who sees it questioning the empirical findings that our universe is fundamentally superimposed probabilities only collapsing based on attention. Eerily similar to what we're just starting to independently build.

    As for the consumption: eating a single hamburger has a larger water/energy impact than a year of average use of these tools. And even those inference costs will probably drop to effective insignificance within the decade. There have been very promising advancements in light-based neural networks, which run at something like 1,000-10,000x lower energy cost, parameter for parameter.

  • What year are you from? Have you not seen Gemini Flash, ChatGPT 4o, Sora 2, Genie 3, etc?

    Stable Diffusion hasn't been SotA for over a year now in a field where every few months a new benchmark is set.

    Are you also going to tell me about how we'd be better off using ships for international travel because the Wright brothers seem to be really struggling with their air machine?

  • That's not…

    sigh

    Ok, so just real quick top level…

    Transformers (what LLMs are) build world models from the training data (Google "Othello-GPT" for associated research).

    This happens because the model needs to combine a lot of different pieces of information in a coherent way (in what's called the "latent space").

    This process is medium agnostic. If given text it will do it with text, if given photos it will do it with photos, and if given both it will do it with both and specifically fitting the intersection of both together.

    The "suitcase full of tools" becomes its own integrated tool where each part influences the others. Why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.

    There's a pretty big difference in the UI/UX of code written by multimodal models vs text-only models, for example, or in the utility of sharing a photo and saying what needs to be changed.

    The idea that an old school NN would be better at any slightly generalized situation over modern multimodal transformers is… certainly a position. Just not one that seems particularly in touch with reality.

  • "We didn't downvote, but we sent a strongly worded letter about how we weren't going to upvote it that will make them think twice about commenting lest next time we downvote it when the timing is right."

  • SimulationTheory @lemmy.world

    In the Quantum World, Even Points of View Are Uncertain

    SimulationTheory @lemmy.world

    AI can now create a replica of your personality

    SimulationTheory @lemmy.world

    Ancestor simulations eventually beget ancestor simulations

    SimulationTheory @lemmy.world

    Time Traveling Via Generative AI By Interacting With Your Future Self

    SimulationTheory @lemmy.world

    If Ray Kurzweil Is Right (Again), You’ll Meet His Immortal Soul in the Cloud

    SimulationTheory @lemmy.world

    No One Is Ready for Digital Immortality

    SimulationTheory @lemmy.world

    ‘Metaphysical Experiments’ Test Hidden Assumptions About Reality

    SimulationTheory @lemmy.world

    Introducing Generative Physical AI: Nvidia's virtual embodiment of generative AI to learn to control robots

    Technology @lemmy.world

    Mapping the Mind of a Large Language Model

    SimulationTheory @lemmy.world

    Newfound 'glitch' in Einstein's relativity could rewrite the rules of the universe, study suggests

    SimulationTheory @lemmy.world

    Digital recreations of dead people need urgent regulation, AI ethicists say

    SimulationTheory @lemmy.world

    AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    SimulationTheory @lemmy.world

    Scale of the Universe: Discover the vast ranges of our visible and invisible world

    SimulationTheory @lemmy.world

    Towards General Computer Control: A Multimodal Agent For Red Dead Redemption II As A Case Study

    SimulationTheory @lemmy.world

    An interactive LLM simulating the creation and maintenance of a universe

    SimulationTheory @lemmy.world

    The case for why our Universe may be a giant neural network

    SimulationTheory @lemmy.world

    Revisiting "An Easter Egg in the Matrix"

    Technology @lemmy.world

    Examples of artists using OpenAI's Sora (generative video) to make short content

    Technology @lemmy.world

    The first ‘Fairly Trained’ AI large language model is here