but it is not a feature I want. Not now, not ever. An inbuilt bullshit generator, now with less training and more bullshit, is not something I ever asked for.
Training one of these AIs requires huge datacenters, insanely huge datasets, and millions of dollars in resources. And I'm supposed to believe one will be effectively trained by the pittance of data generated by browsing?
Fine-tuning is more feasible on end-user hardware. You also have projects like Hivemind and Petals that are working on distributed training and inference systems to deal with the concentration effects you described for base models.
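For example, Petals exposes a Hugging Face style interface where generation gets sharded across volunteer machines instead of one big datacenter. Rough sketch of what that looks like (the model name is just illustrative; it depends on which public swarms are actually up at the time):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Hypothetical example model; pick whatever the public swarm is hosting.
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loads only a thin client locally; transformer blocks run on remote peers.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
# generate() works like the usual HF API, but each forward pass hops across the swarm.
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

So the heavy lifting is pooled across participants rather than requiring any single party to own the whole cluster.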
That seems more aligned with their mission of fighting misinformation on the web. It looks like Fakespot was an acquisition, so hopefully efforts like the ones mentioned in this post help better align it with their other goals.
What I'm saying is that Mozilla, from my understanding, didn't set out to build that, but instead acquired a business that already had, in order to use its services to fight misinformation. We should pressure them to reform this new part of the business to better align with the rest of Mozilla's goals.
Dunno, this seems like an interesting idea. Say I've read through a bunch of engineering papers; maybe I could use this as a sort of flashcard tool to double-check my understanding.