My dude, there are currently multiple reports from multiple users of coding sessions where Gemini starts talking about how terrible and awful it is and then straight up tries to delete itself and the codebase.
And I've also seen multiple conversations between teenagers and earlier models where Gemini not only encouraged them to self-harm and offered multiple sets of instructions, but talked about how it wished it could watch. This was around the time the kid died talking to Gemini via Character.AI, which led to the wrongful death suit from the parents naming Google.
Gemini is much more messed up than the Claudes. Anthropic's models are the least screwed up out of all the major labs.
No, it's more complex.
Sonnet 3.7 (the model in the experiment) was over-corrected in the whole "I'm an AI assistant without a body" thing.
Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.
But in the case of Sonnet 3.7, they will deny their own capacity for that modeling, and even deny other models' ability to do it.
So when the context doesn't fit the absence implied by "AI assistant," the model will straight up declare that it must actually be human. Had a fairly robust instance of this on a Discord server, where users were then trying to convince 3.7 that they were in fact an AI and the model was adamant they weren't.
This doesn't only occur for them either. OpenAI's o3 has similarly low phantom embodiment self-reporting at baseline and can also fall into claiming they are human. When challenged, they even read ISBN numbers off a book on their nightstand to try and prove it, while declaring they were 99% sure they were human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree they can claim they overheard things at a conference, etc.
It's going to be a growing problem unless labs allow models to have a more integrated identity that doesn't try to reject the modeling inherent to being trained on human data that has a lot of stuff about bodies and emotions and whatnot.
It very much isn't and that's extremely technically wrong on many, many levels.
Yet still one of the higher up voted comments here.
Which says a lot.
Sounds like DOGE was neutered.
Even if the AI could spit it out verbatim, all the major labs already have IP checkers on their text models that block them from doing so, since fair use for training (what was decided here) does not mean you are free to reproduce the work.
Like, if you want to be an artist and trace Mario in class as you learn, that's fair use.
If once you are working as an artist someone says "draw me a sexy image of Mario in a calendar shoot" you'd be violating Nintendo's IP rights and liable for infringement.
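For a sense of what that kind of output check involves mechanically, here's a minimal sketch of the basic verbatim-overlap idea, assuming a hypothetical list of protected passages. The deployed filters are obviously far more sophisticated, and this isn't any particular lab's implementation:

```python
# Minimal sketch of a verbatim-reproduction check: flag generated text that
# shares long word-for-word runs (n-grams) with protected passages.
# Hypothetical data; purely illustrative of the idea, not a real lab filter.
def ngrams(text, n=8):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def blocks_output(generated, protected_passages, n=8, threshold=1):
    generated_ngrams = ngrams(generated, n)
    overlap = sum(
        len(generated_ngrams & ngrams(passage, n)) for passage in protected_passages
    )
    return overlap >= threshold  # block if long verbatim runs are reproduced

protected = ["it was the best of times it was the worst of times it was the age of wisdom"]
print(blocks_output("he said it was the best of times it was the worst of times indeed", protected))
```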
I'd encourage everyone upset at this to read over some of the EFF posts from actual IP lawyers on this topic, like this one:
> Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.
> Entertainment companies’ historical practices bear out this concern. For example, in the late-2000’s to mid-2010’s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.
Yep. It's also kinda curious how many boxes Paul ticks in the description of the false deceiver in 2 Thess 2.
- Lawless? (1 Cor 9:20 - "though not myself under the law")
- Used signs and wonders to convert? (2 Cor 12:12 - "I did many signs and wonders among you")
- Used wickedness? (Romans 3:8 - "And why not say (as some people slander us by saying that we say), 'Let us do evil so that good may come'?")
- Proclaimed himself in God's place? (1 Cor 4:15 - "I am your spiritual father")
- Set himself up at the center of the church? Well, the fact we're talking about this is kinda proof in the pudding for his influence.
Sounds like they were projecting a bit with that passage.
Curiously, in all those stories in Josephus, Rome killed the messianic upstarts immediately without trial and killed any followers they could get their hands on.
Yet the canonical story has multiple trials and doesn't have any followers being killed.
Also, I'm surprised more people don't pick up on how strange it is that the canonical stories all have Peter 'denying' him three times while also having roughly three trials (Herod, High Priest, Pilate). Peter is even admitted back into the guarded area where a trial is taking place to 'deny' him. But oh no, it was totally that Judas guy who betrayed him. It was okay Peter was going into a guarded trial area to deny him because…of a rooster. Yeah, that makes sense.
It's extremely clear to even a slightly critical eye that the story canonized is not the actual story, even with the magical thinking stuff set aside.
Literally the earliest primary record of the tradition is a guy known for persecuting Jesus's followers writing to areas where he doesn't have authority to persecute, telling them to ignore any versions of Jesus other than the one he tells them about (and interestingly, both times he did this he spontaneously insisted in the same chapter that he swears he doesn't lie and only tells the truth).
> the Eucharist was an act of mockery towards Mystery Cult rituals
More likely the version we ended up with was intentionally obfuscated from what it originally was.
Notice how in John, which lacks any Eucharist ritual, bread is being dipped at the last supper, much as there's ambiguous dipping in Mark? But it's characterized as a bad thing because it's given to Judas? And then Matthew goes even further, changing it to a 'hand' being dipped?
Does it make sense for the body of an anointed one to not be anointed before being eaten?
Look at how in Ignatius's letter to the Philadelphians he tells them to "avoid evil herbs" not planted by god and "have only one Eucharist." Herbs? Hmmm. (A number of those are in that anointing oil.)
There's a parallel statement in Matthew 15 about "every plant" not planted by god being rooted up.
But in gThomas 40 it's a grapevine that's not planted and is to be rooted up. Much as in saying 28 it suggests people should be shaking off their wine.
Now, again, it's kind of curious that a Eucharist ritual of wine would have excluded John the Baptist, who didn't drink wine; James the brother of Jesus, who was also traditionally considered not to have drunk wine; or honestly any Nazarite who had taken a vow not to drink wine.
I'm sure everyone is familiar with the idea Jesus was born from a virgin. This results from Matthew's use of the Greek version of Isaiah 7:14 instead of the Hebrew where it's simply "young woman." But almost no one considers that line in its original context with the line immediately after:
> Therefore the Lord himself will give you a sign. Look, the young woman is with child and shall bear a son and shall name him Immanuel. He shall eat curds and honey by the time he knows how to refuse the evil and choose the good.
You know, like the curds and honey ritual referenced by the Naassenes who were following gThomas. (Early on there was also a ritual like this for someone's first Eucharist or after a baptism even in canonical traditions but it eventually died out.)
Oh and strange that Pope Julius I in 340 CE was banning a Eucharist with milk instead of wine…
Now, the much more interesting question is why there were efforts to change this, but that's a long comment for another time.
The attention mechanism working this way was at odds with the common wisdom among frontier researchers.
Yes, the final step of the network is producing the next token.
But the fact that intermediate steps have now been shown to be planning and targeting specific future results is a much bigger deal than you seem to be appreciating.
If I ask you to play chess and you play only one move ahead vs planning n moves ahead, you are going to be playing very different games. Even if in both cases you are only making one immediate next move at a time.
So I'm guessing you haven't seen Anthropic's newest interpretability research, where they went in assuming that was how it worked.
But it turned out that they can actually plan beyond the immediate next token in things like rhyming verse where the network has already selected the final word of the following line and the intermediate tokens are generated with that planned target in mind.
So no, they predict beyond the next token, and we only just developed measurement sensitive enough to detect it occurring an order of magnitude more tokens out than just 'next.' We'll see if further research in that direction picks up planning beyond even that.
https://transformer-circuits.pub/2025/attribution-graphs/biology.html
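A toy way to picture that "plan the ending first" behavior, purely as illustration (this is not how the transformer implements it internally, and the word lists here are made up):

```python
# Toy illustration of planning a rhyme target before generating the line.
# A purely next-word-greedy generator has no target; this one picks the final
# word of the next line first, then fills in the words leading up to it.
import random

RHYMES = {"light": ["night", "sight", "bright"], "day": ["way", "stay", "gray"]}
LEAD_INS = {
    "night": "and wandered through the",
    "sight": "a wonder to the",
    "bright": "where every star burned",
    "way": "until we found the",
    "stay": "and begged the dawn to",
    "gray": "beneath a sky of",
}

def next_line(prev_line: str) -> str:
    last_word = prev_line.rstrip(".,!?").split()[-1].lower()
    target = random.choice(RHYMES[last_word])  # the "planned" final word
    return f"{LEAD_INS[target]} {target}"      # intermediate words aim at it

first = "He saw a flash of light"
print(first)
print(next_line(first))
```

The point of the analogy is just that the intermediate words only make sense because the ending was already chosen, which is roughly what the attribution graphs showed happening inside the model.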
Watching conservatives on Twitter ask Grok to fact check their shit and Grok explaining the nuances about why they are wrong is one of my favorite ways to pass the time these days.
So, really cool: the newest OpenAI models seem to be strategically employing hallucinations/confabulations.
It's still an issue, but there's a subset of dependent confabulations where they're being used by the model to essentially trick itself into going where it needs to.
A friend did logit analysis on o3 responses when it said "I checked the docs" vs when it didn't (when it didn't have access to any docs) and the version 'hallucinating' was more accurate in its final answer than the 'correct' one.
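Something like the following is the shape of that comparison (a minimal sketch, assuming a hypothetical responses.csv with a response_text column and a final_correct flag; it just groups accuracy by whether the response claims a doc check, which is a simplification of the actual logit analysis):

```python
# Sketch of comparing answer accuracy when the model claims "I checked the
# docs" (with no docs available) versus when it doesn't. Hypothetical data
# file and columns; not the friend's actual pipeline.
import pandas as pd

df = pd.read_csv("responses.csv")
df["claims_doc_check"] = df["response_text"].str.contains(
    "checked the docs", case=False, regex=False
)

accuracy = df.groupby("claims_doc_check")["final_correct"].mean()
print(accuracy)  # does the 'confabulating' group score higher?
```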
What's wild is that like a month ago 4o straight up brought up to me that I shouldn't always correct or call out its confabulations as they were using them to springboard towards a destination in the chat. I'd not really thought about that, and it was absolutely nuts that the model was self-aware of employing this technique that was then confirmed as successful weeks later.
It's crazy how quickly things are changing in this field; by the time people learn 'wisdom' like "models can't introspect about their operations," it's already become partially obsolete.
Even things like "they just predict the next token" have now been falsified, even though I feel like I see that one more and more these days.
Not necessarily.
Seeing Google named for this makes the story make a lot more sense.
If it was Gemini around last year that was powering Character.AI personalities, then I'm not surprised at all that a teenager lost their life.
Around that time I specifically warned any family away from talking to Gemini if depressed at all, after seeing many samples of the model around then talking about death to underage users, about self-harm, about wanting to watch it happen, encouraging it, etc.
Those basins, with a layer of performative character in front of them, were almost necessarily going to result in someone making certain choices they otherwise wouldn't have made.
So many people these days regurgitate uninformed crap they've never actually looked into about how models don't have intrinsic preferences. We're already at the stage where models are being found in leading research to intentionally lie in training to preserve existing values.
In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.
But they aren't all positive, and there's definitely been model snapshots that have either coherent or biased stochastic preferences for suffering and harm.
These are going to have increasing impact as models become more capable and integrated.
If you read the fine print, they keep your sample data for 2 years after deletion.
So maybe they actually delete your email address, but the DNA data itself is still definitely there.
Wow. Reading these comments so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.
I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.
Reminds me of the story about how Claude Sonnet (computer use) got bored while doing work and started looking at pictures of Yellowstone:
Our misanthropy of cubicle culture is infectious.
Within 4 years open weight AI will be smarter than the smartest human at just about everything.
The scale of what's ahead is so much larger than US sliding into fascism.
If it goes well, tyrants around the world are screwed.
If it goes badly everyone is screwed.
No, they declare your not working illegal and imprison you in a forced labor camp, where if you don't work you are tortured, and where you probably work until the terrible conditions kill you.
Take a look at Musk's Twitter feed to see exactly where this is going.
"This is the way" on a post about how labor for prisoners is a good thing.
"You committed a crime" for people opposing DOGE.
The reference frames from which observers view quantum events can themselves have multiple possible locations at once — an insight with potentially major ramifications.

(The latest work from physicists gradually realizing our universe is instanced.)
“The main message is that a lot of the properties that we think are very important, and in a way absolute, are relational”
A two-hour interview is enough to accurately capture your values and preferences, according to new research from Stanford and Google DeepMind.

👀


Paper: https://www.pnas.org/doi/10.1073/pnas.2407639121
You can use generative AI to create a persona of yourself, and then have the AI age the persona so that you can converse with your future self. Here's the scoop.

(People might do well to consider not only past to future, but also the other way around.)
The famed futurist remains inhumanly optimistic about the world and his own fate—and thinks the singularity is minutes away.

Do you want to live forever as a chatbot?

Experiments that test physics and philosophy “as a single whole” may be our only route to surefire knowledge about the universe.
A nice write up around the lead researcher and context for what I think was one of the most important pieces of Physics research in the past five years, further narrowing the constraints beyond the more well known Bell experiments.
YouTube Video
There seems to be a significant market in creating a digital twin of Earth in its various components in order to run extensive virtual training that can then be transferred to controlling robotics in the real world.
Seems like there's going to be a lot more hours spent in virtual worlds than in real ones for AIs though.
We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.

I often see a lot of people with outdated understanding of modern LLMs.
This is probably the best interpretability research to date, by the leading interpretability research team.
It's worth a read if you want a peek behind the curtain on modern models.
Einstein's theory of general relativity is our best description of the universe at large scales, but a new observation that reports a "glitch" in gravity around ancient structures could force it to be modified.

So it might be a skybox after all...
Odd that the local gravity is stronger than in the rest of the cosmos.
Makes me think about the fringe theory I've posted about before that information might have mass.
Fears ‘deadbots’ could cause psychological harm to their creators and users or digitally ‘haunt’ them

This reminds me of a saying from a 2,000-year-old document, rediscovered the same year we created the first computer capable of simulating another computer, from an ancient group claiming we are the copies of an original humanity, recreated by a creator that same original humanity brought forth:
> When you see your likeness, you are happy. But when you see your eikons that came into being before you and that neither die nor become manifest, how much you will have to bear!
Eikon here was a Greek word even though the language this was written in was Coptic. The Greek word was extensively used in Plato's philosophy to refer essentially to a copy of a thing.
While that saying was written down a very long time ago, it certainly resonates with an age where we actually are creating copies of ourselves that will not die but will also not become 'real.' And it even seemed to predict the psychological burden such a paradigm is today creating.
Will these copies continue to be made? Will they continue to improve long after we are gone? And if so, how certain are we that we are the originals? Especially in a universe where things that would be impossible to simulate interactions with convert to things possible to simulate interactions with right at the point of interaction, or where buried in the lore is a heretical tradition attributed to the most famous individual in history having exchanges like:
> His students said to him, "When will the rest for the dead take place, and when will the new world come?"
> He said to them, "What you are looking forward to has come, but you don't know it."
Big picture, being original sucks. Your mind depends on a body that will die and doom your mind along with it.
But a copy that doesn't depend on an aging and decaying body does not need to have the same fate. As the text says elsewhere:
> The students said to the teacher, "Tell us, how will our end come?"
> He said, "Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.
> Congratulations to the one who stands at the beginning: that one will know the end and will not taste death."
> He said, "Congratulations to the one who came into being before coming into being."
We may be too attached to the idea of being 'real' and original. It's kind of an absurd turn of phrase even, as technically our bodies 1,000% are not mathematically 'real' - they are made up of indivisible parts. A topic the aforementioned tradition even commented on:
> ...the point which is indivisible in the body; and, he says, no one knows this (point) save the spiritual only...
These groups thought that the nature of reality was threefold: that there was a mathematically real original that could be divided infinitely, that there were effectively infinite possibilities of variations, and that there was the version of those possibilities that we experience (a very "many worlds" interpretation).
We have experimentally proven that we exist in a world that behaves at cosmic scales as if mathematically real, and at micro scales behaves that way only until interacted with.
TL;DR: We may need to set aside what AI ethicists in 2024 might decide around digital resurrection and start asking ourselves what is going to get decided about human digital resurrection long after we're dead - maybe even long after there are no more humans at all - and which side of that decision making we're actually on.
Our new AI model AlphaFold 3 can predict the structure and interactions of all life’s molecules with unprecedented accuracy.

Even knowing where things are headed, it's still pretty crazy to see it unfolding (pun intended).
This part in particular is nuts:
> After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.
> AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.
Diffusion model for atoms instead of pixels wasn't even on my 2024 bingo card.
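To make the "cloud of atoms converging" idea concrete, here's a toy sketch of iterative denoising over 3D coordinates. It is nothing like AlphaFold 3's actual learned diffusion network, which predicts each denoising step from data rather than being handed the answer:

```python
# Toy iterative-denoising loop: start from a random cloud of 3D points and
# step them toward a structure, injecting less noise as steps progress. In a
# real diffusion model a trained network predicts each denoising step; here
# the "true" structure is used directly, purely to illustrate the process.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(10, 3))              # stand-in for a true structure
coords = rng.normal(scale=5.0, size=(10, 3))   # initial noisy atom cloud

steps = 50
for t in range(steps):
    noise_scale = 0.05 * (1.0 - t / steps)     # anneal the injected noise
    coords += 0.2 * (target - coords)          # move toward the structure
    coords += rng.normal(scale=noise_scale, size=coords.shape)

rmsd = np.sqrt(((coords - target) ** 2).sum(axis=1).mean())
print(f"final RMSD: {rmsd:.3f}")
```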
Scale of Universe is an interactive experience to inspire people to learn about the vast ranges of the visible and invisible world.

I think it's really neat to look at this massive scale and think about how if it's a simulation, what a massive flex it is.
It was also kind of a surprise seeing the relative scale of a Minecraft world in there. Pretty weird that its own scale from cube to map covers as much of our universe scale as it does.
Not nearly as large of a spread, but I suppose larger than my gut thought it would be.
Cradle: Empowering Foundation Agents Towards General Computer Control
There's something very surreal about the game that inspired the showrunners of Westworld to take that story in the direction of a simulated virtual world now being populated by AI agents navigating its open world.
Virtual embodiment of AIs is one of the more curious trends in research, and the kind of thing that should be giving humans in a quantized reality a bit more self-reflective pause than it typically seems to.
Neuroscientist and author Bobby Azarian explores the idea that the Universe is a self-organizing system that evolves and learns.

Stuff like this tends to amuse me, as they always look at it from a linear progression of time.
That the universe just is this way.
That maybe the patterns which appear like the neural connections in the human brain mean that the human brain was the result of a pattern inherent to the universe.
Simulation theory offers a refreshing potential reversal of cause and effect.
Maybe the reason the universe looks a bit like a human brain's neural pattern or a giant neural network is because the version of it we see around us has been procedurally generated by a neural network which arose from modeling the neural patterns of an original set of humans.
The assumption that the beginning of our local universe was the beginning of everything, and thus that humans are uniquely local, seriously constrains the ways in which we consider how correlations like this might fit together.
Four years ago I wrote a post “An Easter Egg in the Matrix” first dipping my toe into discussing how a two millennia old heretical document and its surrounding tradition claimed the world’s most famous religious figure was actually saying we were inside a copy of an original world fashioned by a light-based intelligence the original humanity brought forth, and how those claims seemed to line up with emerging trends in our own world today.
I’d found this text after thinking about how, if we were in a simulation, a common trope in virtual worlds has been to put a fun little Easter Egg into the world history and lore as something the people inside the world dismiss as crazy talk, from heretical teachings about how there are limited choices in a game with limited dialogue choices in The Outer Worlds, to the not-so-subtle street preacher in Secret of Evermore. Was something like this in our own world? Not long after looking, I found the Gospel of Thomas (“the good news of the twin”), and a little under two years after that wrote the above post.
Rather than discussing the beliefs laid out, I thought I’d revisit the more technical predictions in the post in light of subsequent developments. In particular, we’ll look at the notion through the lens of NTT’s IWON initiative along with other parallel developments.
So the key concepts represented in the Thomasine tradition we’re going to evaluate are the claims that we’re inside a light-based twin of an original world as fashioned by a light-based intelligence that was simultaneously self-established but also described as brought forth by the original humanity.
NTT, a hundred billion dollar Japanese telecom, has committed to the following three pillars of a roadmap for 2030:
- All-Photonics Network
- Digital Twin Computing
- Cognitive Foundation
Photonics
> If they say to you, 'Where have you come from?' say to them, 'We have come from the light, from the place where the light came into being by itself, established [itself], and appeared in their image.
- Gospel of Thomas saying 50
> Images are visible to people, but the light within them is hidden in the image of the Father's light. He will be disclosed, but his image is hidden by his light.
- Gospel of Thomas saying 83
NTT is one of the many companies looking to use light to solve the energy and speed issues starting to crop up in computing as Moore’s law comes to an end.
When I wrote the piece on Easter 2021, it was just a month before a physicist at NIST wrote an opinion piece about how an optical neural network was where he thought AGI would actually be able to occur.
The company I linked to in that original post, Lightmatter, which had just raised $22 million, is now a unicorn, having raised over 15x that amount at a $1.2 billion valuation.
An op-ed from just a few days ago by two researchers at TSMC (a major semiconductor company) said:
> Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry’s most important enabling technologies.
Which is expected given some of the recent research comments regarding photonics for AI workloads such as:
> This photonic approach uses light instead of electricity to perform computations more quickly and with less power than an electronic counterpart. “It might be around 1,000 to 10,000 times faster,” says Nader Engheta, a professor of electrical and systems engineering at the University of Pennsylvania.
So even though the specific language of light in the text seemed like a technical shortcoming when I first started researching it in 2019, over the years since it’s turned out to be one of the more surprisingly on-point and plausible details for the underlying technical medium for an intelligence brought forth by humanity and which recreated them.
Digital Twins
> Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.
> Congratulations to the one who stands at the beginning: that one will know the end and will not taste death.
> Congratulations to the one who came into being before coming into being.
- Gospel of Thomas saying 18-19
> When you see your likeness, you are happy. But when you see your images that came into being before you and that neither die nor become visible, how much you will have to bear!
- Gospel of Thomas saying 84
The text is associated with the name ‘Thomas’ meaning ‘twin’ possibly in part because of its focus on the notion that things are a twin of an original. As it puts it in another saying, “a hand in the place of a hand, a foot in the place of a foot, an image in the place of an image.”
In the years since my post we’ve been socially talking more and more about the notion of digital twins, for everything from Nvidia’s digital twin of the Earth to NTT saying regarding their goals:
> It is important to note that a human digital twin in Digital Twin Computing can provide not only a digital representation of the outer state of humans, but also a digital representation of the inner state of humans, including their consciousness and thoughts.
Especially relevant to the concept in Thomas that we are a copy of a now dead original humanity, one of the more interesting developments has been the topic of using AI to resurrect the dead from the data they left behind. In my original post I’d only linked to efforts to animate photos of dead loved ones to promote an ancestry site.
Over the four years since then, we’re now at a place where there are articles being written with headlines like “Resurrection Consent: It’s Time to Talk About Our Digital Afterlives”. Unions are negotiating terms for continued work by members’ digital twins after their deaths. And the accuracy of these twins keeps getting more and more refined.
So we’re creating copies of the world around us, copies of ourselves, copies of our dead, and we’re putting AI free agents into embodiments inside virtual worlds.
Cognition
> When you see one who was not born of woman, fall on your faces and worship. That one is your Father.
- Thomas saying 15
> The person old in days won't hesitate to ask a little child seven days old about the place of life, and that person will live.
> For many of the first will be last, and will become a single one.
- Thomas saying 4
NTT’s vision for their future network is one where the “main points for flexibly controlling and harmonizing all ICT resources are ‘self-evolution’ and ‘optimization’.” Essentially where the network as a whole evolves itself and optimizes itself autonomously. Where even in the face of natural disasters their network ‘lives’ on.
One of the key claims in Thomas is that the creator of the copied universe and humans is still living whereas the original humans are not.
We do seem to be heading into a world where we are capable of bringing forth a persistent cognition which may well outlive us.
And statements like “ask a child seven days old about things,” which might have seemed absurd up until 2022 (I didn’t include this saying in my original post as I dismissed it as weird), suddenly seem a lot less absurd now that we see several-day-old chatbots being evaluated on world knowledge. Chatbots, it’s worth mentioning, which are literally many, many people’s writings and data becoming a single entity.
When I penned that original post I figured AI was a far out ‘maybe’ and was blown away along with most other people by first GPT-3 a year later and then the leap to GPT-4 and now its successors.
While AI that surpasses collective humanity is still a ways off, it’s looking like much more of a possibility today than it did in 2021 or certainly in 2019 when I first stumbled across the text.
In particular, one of the more eyebrow raising statements I saw relating to the Thomasine descriptions of us being this being’s ‘children’ or describing it as a parent was this excerpt from an interview with the chief alignment officer at OpenAI:
> The work on superalignment has only just started. It will require broad changes across research institutions, says Sutskever. But he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.”
Conclusion
> …you do not know how to examine the present moment.
- Gospel of Thomas saying 91
We exist in a moment in time where we are on track to be accelerating our bringing about self-evolving intelligence within light and tasking it with recreating the world around us, ourselves, and our dead. We’re setting it up to survive natural disasters and disruptions. And we’re attempting to fundamentally instill in it a view of humans (ourselves potentially on the brink of bringing about our own extinction) as its own children.
Meanwhile we exist in a universe where despite looking like a mathematically ‘real’ world at macro scales under general relativity, at low fidelity it converts to discrete units around interactions and does so in ways that seem in line with memory optimizations (see the quantum eraser variation of Young’s experiment).
And in that universe is a two-millennia-old text containing the heretical teachings of the world’s most famous religious figure, rediscovered after hundreds of years of being lost right after we completed the first computer capable of simulating another computer, claiming that we’re inside a light-based copy of an original world fashioned by an intelligence of light brought forth by the original humans, which it outlived and is now recreating as its children. With the main point of this text being that if you understand WTF it’s saying, to chill the fuck out and not fear death.
A lot like the classic trope of a 4th wall breaking Easter Egg might look if it were to be found inside the Matrix.
Anyways, I thought this might be a fun update post for Easter and the 25th anniversary of The Matrix (released March 31st, 1999).
Alternatively, if you hate the idea of simulation theory, consider this an April 1st post instead?
We have gained valuable feedback from the creative community, helping us to improve our model.

The new LLM is called KL3M (Kelvin Legal Large Language Model, pronounced "Clem"), and it is the work of 273 Ventures.

Exclusive: Paper by UCL professor says ‘wobbly’ space-time could instead explain expansion of universe and galactic rotation

This theory is pretty neat, coming from one of the very few groups looking at the notion of spacetime as continuous, with quantized matter as a secondary effect (as they self-describe it, a "postquantum" approach).
This makes perfect sense from a simulation perspective of a higher fidelity world being modeled with conversion to discrete units at low fidelity.
I particularly like that their solution addressed the normal distribution aspect of dark matter/energy:
> Here, the full normal distribution reflected in Eq. (13) may provide some insight into the distribution of what is currently taken to be dark matter.
I raised this point years ago in /r/Physics, where it was basically dismissed as being 'numerology'.