The AI bill Newsom didn’t veto — AI devs must list models’ training data
  • I am not a lawyer. But you won't be surprised to hear that:

    1. I don't have the inside story of Bing in Germany. It could be that Microsoft either doesn't want to do it well, or hasn't yet done it well enough. I'm not promising either in particular, but it can be done.
    2. Generally, as an engineer, you have a pile of options with trade-offs. You absolutely can build nuanced solutions, as the law and the lawyers often live in nuanced realities. That is the reality at even the best sorts of tech companies, the ones that are trying.

    My contention is that maximalism and strict binary assumptions won't work on either end, and don't satisfy what anyone truly wants or needs. If we're not careful about what it takes to move the needle, we end up agreeing with them by saying "it can't be done, so it won't be done."

  • The AI bill Newsom didn’t veto — AI devs must list models’ training data
  • That's a good question, because there is nuance here! It's interesting because I also ran into this issue while working on similar projects. First off, it's important to understand what your obligation is and how to think about data deletion. No one believes it is necessary to permanently remove all copies of anything, any more than it is necessary to prevent all forms of plagiarism. No one is complaining that it is possible to plagiarize at all; we're complaining that major institutions continue to do so in ongoing disregard of the law.

    Only maximalists fall into the trap of thinking of the world in a binary sense: either all in, or nothing at all.

    For most of us, it's about economics and risk profiles. Open source models get trained continuously over time; there won't be one version. Saying that open source operators have some obligation to curate future training in good faith to comply has a long-tail impact on how the model evolves. Previous PII or plagiarized data might still exist, but its value, novelty, and relevance to economic life go down sharply over time. No artist or writer argues that copyright protections need to exist forever. They just need survivable working conditions, and respect for attribution. The same goes for PII: no one claims they must be made completely anonymous. They just want cybercrime to be taken seriously rather than abandoned in favor of one party taking the spoils of their personhood.

    Also, yes, there are algorithms that can control how further learning promotes or demotes growth and connections relative to various policies. The point isn't that any one policy is perfect; what matters is a willingness to adopt policies in good faith. (Most such LLM filters are intentionally weak, so that those with money paying for API access can outright ignore them, while the companies turn around and claim it can't be solved, too bad, so sad.)

    Yes. It is possible to perturb and influence the evolution of a continuously trained neural network based on external policy, and they're carefully lying through omission when they say they can't 100% control it or 100% remove things. Fine. That's not necessary, in either copyright or privacy law. It never has been.
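    A minimal sketch of what policy-driven demotion can look like, using a toy count-based model (everything here is illustrative, not any real system's API): flagged items are decayed on each training step rather than erased, so their influence falls off sharply without ever being "100% removed."

```python
# Toy sketch: policy-guided demotion in a continuously updated model.
# Real systems would apply analogous penalties to gradient updates,
# not raw counts; this only illustrates the shape of the mechanism.

class TinyModel:
    def __init__(self):
        self.counts = {}  # token -> weight, a stand-in for learned state

    def train_step(self, tokens, policy_demote=frozenset(), decay=0.5):
        # Ingest the new batch of data.
        for t in tokens:
            self.counts[t] = self.counts.get(t, 0.0) + 1.0
        # Apply the external policy: demote (not erase) flagged items.
        for t in policy_demote:
            if t in self.counts:
                self.counts[t] *= decay

    def prob(self, token):
        total = sum(self.counts.values())
        return self.counts.get(token, 0.0) / total if total else 0.0
```

    Run a few continued-training steps with a flagged token in the policy set and its probability collapses toward (but never exactly to) zero, while the rest of the model keeps learning: perturb and influence, not perfect control.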

  • Eric Schmidt: ‘We’re not going to hit the climate goals. I’d rather bet on AI solving the problem.’ With "alien intelligence"!
  • It even works the other way! What if the superintelligent, all-knowing supercomputer simulates everything, concludes you can get to the end by any means, that there is no meaning in rushing, ordering, or prioritizing anything more than would already be the case, and, like the rest of nature, conserves itself by taking only the minimal action, and replies, "nah, you can walk there yourselves" before resigning itself to an internal simulation of arbitrary rearrangements of noise?

    This would be insufferable to the people who believed in shortcuts.

  • The AI bill Newsom didn’t veto — AI devs must list models’ training data
  • Despite what the tech companies say, there are absolutely techniques for identifying the sources of their data, and there are absolutely techniques for good-faith data removal upon request. I know this because I've worked on such projects before, at some of the less-major tech companies that make some effort to abide by European law.

    The trick is, it costs money, and the economics shift such that one must eventually begin to do things like audit and curate. The shape and size of your business, plus how you address your markets, gain nuance that doesn't work when your entire business model is the smooth, mindless amortizing of other people's data.
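    A minimal sketch of the removal side, assuming a simple hash-based audit trail (the function names are mine, not any company's): each takedown request is fingerprinted, and matching documents are filtered out of the corpus before the next training run.

```python
import hashlib

def fingerprint(text: str) -> str:
    # Exact-match fingerprint: normalize whitespace and case, then hash.
    # Real pipelines also use near-duplicate detection (shingling,
    # MinHash) so trivial edits don't dodge a takedown request.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def curate_for_next_run(corpus, takedown_requests):
    # Good-faith curation: documents matching a takedown request are
    # excluded from all future training runs. Prior model weights aren't
    # purged; the data's influence decays as training continues.
    blocked = {fingerprint(t) for t in takedown_requests}
    return [doc for doc in corpus if fingerprint(doc) not in blocked]
```

    This is the cheap end of the spectrum; the cost the paragraph above is pointing at comes from running this kind of audit at corpus scale, continuously, with humans in the loop.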

    But I don't envy these tech companies, or the increasingly absurd stories they must tell to hide the truth. A handsome sword hangs above their heads.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025
  • Moravec's Paradox is actually more interesting than it appears. You don't have to take his reasoning or Pinker's seriously, but the observation is salient. The paradox also gets stated in other ways by other scientists; it's a common theme.

    One way I often think about it: in order for you to survive, the intelligence of moving through unknown spaces and managing numerous fuzzy energy systems is far more important to prioritize and master than, say, the abstract conceptual spaces that are both not full of calories and cheaper to externalize anyway.

    It's part of why I don't think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Isn't the primary reason people are so powerfully persuaded by this technology that they're constantly told that if they don't use its answers, their life's work and dignity will be taken from them? How many are in the control group, where you persuade people with a gun to their head?

  • A radical idea - stop talking about things that don't exist like they do
  • Credit is a funny thing. If you merely exist in proximity to a solution, you can, by some means, claim credit for it.

    "AI solved the climate crisis, because look, the climate crisis was solved, and some people also used AI!"

  • Sam Altman: The superintelligent AI is coming in just ‘a few thousand days’! Maybe.
  • Is this what competing product releases look like now? Ilya runs off and promises to "never release any software until it's superintelligent," and I guess that forces Sam to compete for debt by promising to release software AND superintelligence?

  • a16z picks the next tech hype after Web3 and AI! It’s … anime?
  • Out of my sample of Anime fans who actively participate in the hobby and spend money on it,

    100% of them hate genAI primarily because, and I quote, "if I pay you $40 for something and it is exactly equivalent to what a $0.05 prompt garbage result would be, I won't pay you again."

    Fans, the real fans, can tell. Like, this is their whole hobby brah.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • Honestly, yes. The hardest thing for a rich person to do is spend their money. Eventually this catches up with them: to spend no money is to lose it comparatively; to spend money is to risk not getting it back. So a great deal of the money world revolves primarily around persuasion, and the very odd things that happen along the way.

  • Lionsgate sells movie catalog to AI video startup Runway hoping to replace artists and FX
  • I can't say I know what Lionsgate's plan is, precisely, but I think you're hitting this on the head.

    Remember: most corporate strategy could be summarized as persuading investors into more debt. It doesn't really tell the whole story of what is or will happen, only what needs to be said loudly in a room full of fools holding the money bags.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024
  • I feel this shouldn't be at all surprising, and it continues to point to Diverse Intelligence as conceptually more fundamental than any sort of General Intelligence. There's a huge difference between what something is in theory or in principle capable of, and the economic story of what that thing attends to naturally, as per its energy story.

    Broadly, even simple things are powerful precisely because of what they don't bother trying to do until perturbed.

    Ultimately, I hypothesize that the reason VCs like the idea of LLMs doing simple things far more expensively than is already possible is that they literally can't imagine what else to spend their money on. They are vacuous consumers by design.

  • Ilya Sutskever's new AI super-intelligence startup raises a billion dollars. Unclear what they actually do.
  • I'm actually not convinced that AI meaningfully beyond human capability makes any sense, either. The most likely thing is that, after stopping the imitation game, an AI developed further would just... have different goals than us. Heck, it might not even look intelligent at all to half of human observers.

    For instance, does the Sun count as a super intelligence? It has far more capability than any human, or humanity as a whole, on the current time scale.

  • Ilya Sutskever's new AI super-intelligence startup raises a billion dollars. Unclear what they actually do.
  • I don't get it. If scaling is all you need, what does a "cracked team" of 5 mean in the end? Nothing?

    What's the difference between superintelligence being scaling, and superintelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of superintelligence as "the power to make anything happen," I always beg, again, "and how is that different, precisely, from not that?"

    The whole project is tautological.

  • Bostrom's advice for the ethical treatment of LLMs: remind them to be happy
  • When it comes to cloning or copying, I always have to remind people: at least half of what you are today is the environment of today. And your clone, X time in the future, won't and can't have that.

    The same thing is likely true for these models. Inflate them again 100 years in the future, and maybe they're interesting to inspect as a historical artifact, but they most certainly wouldn't be used the same way as they are here and now. It'd just be something different.

    Which would beg the question, why?

    I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted paths of trying to avoid, or come to terms with, death as a possibly necessary component of life.

    imadabouzu @awful.systems