  • Progressive Dems Call for Codifying Chevron After 'Dangerous' Supreme Court Ruling | Common Dreams
  • As stated in the dissent, ignoring your own precedent for years to create the impression that a useful legal principle isn't useful, and then using that manufactured impression as an excuse to overturn it, is not an actual reasoned argument for overturning it.

  • Progressive Dems Call for Codifying Chevron After 'Dangerous' Supreme Court Ruling | Common Dreams
  • If their interpretation is good, it can stand

    With Chevron, it would stand. Without it, the court gets to ignore all reason and reject an agency's interpretation even when it's sane and carefully constructed by experts. The court gets to challenge every individual decision and rationale of the agency that the law doesn't make explicit.

  • Progressive Dems Call for Codifying Chevron After 'Dangerous' Supreme Court Ruling | Common Dreams
  • They saw Chevron as useful when Republicans had control over all the major agencies, but with government agencies driven by experts and scientists who can ignore Republican screaming, Chevron isn't helping them anymore. That's part of why they try to get as many partisan judges into the system as possible: to get their way through corrupt courts instead.

  • Meta admits using pirated books to train AI, but won't pay for it
  • Humans learn a lot through repetition, and there's no reason to believe LLMs wouldn't benefit from reinforcement of higher-quality information. Seeing the same information in different contexts especially helps map the links between those contexts and dispel incorrect assumptions. But like I said, the only viable method they have for this kind of emphasis at scale is incidental replication of more popular works in the samples, and when something is duplicated too much, the model overfits instead (see the sketch after this comment).

    They need to fundamentally change big parts of how learning happens and how the algorithm learns in order to fix this conflict. In particular, it would need many more "introspective" training stages to refine what it has learned, and pretty much nobody does anything even slightly similar on large models, because they don't know how and it would be insanely expensive anyway.
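    To make that duplication problem concrete, here's a minimal Python sketch (a toy, not anything Meta or anyone else is known to run) of inverse-count down-weighting: each training sample's weight is scaled by 1/k, where k is how many exact copies of it appear in the corpus, so popular works still reinforce the model without dominating the loss.

    ```python
    from collections import Counter
    import hashlib

    def sample_weights(samples: list[str]) -> list[float]:
        # Hash each sample so counting duplicates stays cheap at scale.
        digests = [hashlib.sha256(s.encode()).hexdigest() for s in samples]
        counts = Counter(digests)
        # A sample that appears k times gets weight 1/k, so its total
        # influence on the loss roughly matches a unique sample's.
        return [1.0 / counts[d] for d in digests]

    corpus = ["popular novel excerpt"] * 3 + ["rare blog post"]
    print(sample_weights(corpus))  # [0.333..., 0.333..., 0.333..., 1.0]
    ```

    The catch is that hashing only catches exact copies; near-duplicates, excerpts, and legitimately repetitive text like boilerplate slip through, which is exactly the conflict described in the next comment.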

  • Meta admits using pirated books to train AI, but won't pay for it
  • Yes, but should big companies with business models designed to be exploitative be allowed to act hypocritically?

    My problem isn't with ML as such, or with learning over such large sets of works; it's that these companies are designing their services specifically to push the people whose works they rely on out of work.

    The irony of overfitting is that having numerous copies of common works is a problem AND removing the duplicates would be a problem. The model needs an understanding of what's representative of language, but the training algorithms can't learn that on their own, it's not feasible to have humans teach it, and the training algorithm can't effectively detect duplicates and "tune down" their influence enough to stop replicating them exactly. Trying to do that last part algorithmically would ALSO break things, because it would wreck the model's understanding of things like standard legalese and boilerplate language.

    The current generation of generative ML doesn't do what it says on the box, AND the companies running them deserve to get screwed over.

    And yes, I understand the risk of screwing up fair use, which is why my suggestion is not to hinder learning but to require the companies to track the copyright status of samples and inform end users of the licensing status whenever the system detects that a sample is substantially replicated in the output (a rough sketch of that detection follows below). This would not hurt anybody training on public domain or fairly licensed works, nor anybody who tracks authorship when crawling for samples, nor anybody who has designed their ML system to be sufficiently transformative that it never replicates copyrighted samples. It only hurts exploitative companies.
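    As a hedged sketch of how that detection could work (all names here, like TRACKED_SAMPLES and license_notices, are hypothetical, and a real system would need a scalable index such as MinHash rather than a linear scan): record licensing metadata for every crawled sample, then flag any output whose n-gram overlap with a tracked sample crosses a threshold.

    ```python
    def ngrams(text: str, n: int = 8) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    # Hypothetical index: license status recorded when each sample was crawled.
    TRACKED_SAMPLES = {
        "a long passage from some copyrighted novel goes here word for word": "all rights reserved",
        "standard boilerplate legalese that appears in thousands of contracts": "public domain",
    }

    def license_notices(output: str, threshold: float = 0.5) -> list:
        """Report license status for tracked samples the output
        substantially replicates, measured by 8-gram overlap."""
        out_grams = ngrams(output)
        notices = []
        for sample, status in TRACKED_SAMPLES.items():
            grams = ngrams(sample)
            if grams and len(grams & out_grams) / len(grams) >= threshold:
                notices.append(f"replicated a tracked sample; license: {status}")
        return notices

    print(license_notices("a long passage from some copyrighted novel goes here word for word, continued"))
    ```

    Anyone training on public domain or properly licensed data gets at most a harmless notice; the burden lands only on companies that never tracked provenance in the first place.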

  • Apple finally adds support for RCS in latest iOS 18 beta | TechCrunch
  • Apple management has explicitly stated they do not want to support better compatibility between Android and iPhone; when asked what parents who buy cheap Androids for their kids should do, their answer was to buy them iPhones. Many of the problems would be very easy to fix on Apple's side, and keeping them broken is intentional.

  • Efficiency
  • Math and formal logic are effectively equivalent, and philosophy without conditional logic is useless. Scientifically useful philosophy is just "explorative logic," or something like it.

    Natanael @slrpnk.net

    Cryptography nerd

    Posts 1
    Comments 1.6K