Amazon- and Google-backed AI firm Anthropic says “general-purpose AI tools simply could not exist” if AI companies had to pay licence fees for their training material
Artificial intelligence firm Anthropic hits out at copyright lawsuit filed by music publishing corporations, claiming the content ingested into its models falls under ‘fair use’ and that any licensing regime created to manage its use of copyrighted material in training data would be too complex and ...
Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.
Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.
In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.
…then maybe they shouldn’t exist. If you can’t pay the copyright holders what they’re owed for the license to use their materials for commercial use, then you can’t use ‘em that way without repercussions. Ask any YouTuber.
You might want to read this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries. YouTube's one-sided strike-happy system isn't the real world.
Headlines like these let people assume that it’s illegal, rather than educate them on their rights.
When Anna's Archive or Sci-Hub get treated the same as these giant corporations, I'll start giving a shit about the "fair use" argument.
When people pirate to better the world by increasing access to information, the whole world gets together to try to kick them off the internet.
When giant companies with enough money to make Solomon blush pirate to make more oodles of money and not improve access to information, it's "fAiR uSe."
Literally everyone knew from the start that Books3 was all pirated, built from ebooks with the DRM circumvented and stripped. When it was created, it was noted to be basically the entirety of the private torrent tracker Bibliotik.
By and large, copyright infringement is illegal. That some things aren't infringement doesn't change that a general stance of "if I don't have permission, I can't copy it" is correct. The first argument in the EFF article is effectively its title: "it can't be copyright infringement, because otherwise massive AI models would be impossible to build". That doesn't make it fair use; they just want it to become so.
I love seeing Lemmy users trip over themselves to declare that copyrights don't or shouldn't exist when it comes to pirating, right up until it comes to AI. Then copyrights are enshrined in the Constitution and all the corporations NEED to pay for them, even when they're not actually copying anything.
Using copyrighted material for something you aren't gonna make any money off of? Cool, go hog wild. If you're gonna use some music or art that you didn't make in something that will make you money, the folks that made whatever you used should get a cut. Not the whole cut, but a cut.
And corporations want people to pay for it, but they don't want to pay for it themselves. It's almost as if no one likes copyright, but it benefits some people more than others.
You do realize that Lemmy contains a great many users, many of whom disagree on any number of things. You are randomly assigning the opinions of Lemmy's pirate users to a random commenter without evidence that they actually hold those opinions, because it would be convenient for you if they were contradicting themselves in some way (though the degree to which that would even be a contradiction is arguable). It's just a way of constructing a strawman instead of engaging with your interlocutor's actual words.
Also, part of the problem is that these LLMs very often do directly copy and spit out articles, random forum posts, and so on, word-for-word, or do something equivalent to a plagiarist who swaps a few words around in a sad attempt not to get caught. That becomes especially likely the more specific the query is: say, a niche topic hardly anyone has written extensively on, or the solution to an esoteric problem that maybe just one person on a forum somewhere found an answer to. And it typically gives no credit and no links to its sources.
Plus, copyright law, if it exists, must apply to everyone, including major corporations. That's a separate issue from whether or not copyright law needs reform (it obviously does). If you wanna abolish copyright, fine, OK, get it abolished through the government. But while copyright law is still the law, I'm not OK with giving megacorps a pass to break it legally, especially when they're more than happy to sue random, harmless individuals for violating their own copyrights. They want the law not to apply to them because they're rich.
The argument they're making is just ridiculous on its face when you compare it to other crimes. If AI should be allowed to violate copyright because otherwise it can't exist as it is, then anyone should be able to violate copyright because otherwise their cool projects won't be able to exist. And I should be able to rob a bank because otherwise I won't have all that money. You should be able to commit murder because otherwise your annoying coworker will keep bugging you. She should be able to walk out of a store with an iPhone without paying for it because otherwise she won't have an iPhone. Etc. It's an argument that says the criminal's motivations are legal justification for the crime. "You should let me legally do the thing because otherwise I can't do the thing" is just not a convincing argument in my book.
This isn't an issue of fair use. They're stealing other people's work and using it to create something new and then trying to profit from it, without any credit or recompense.
Now that it exists how do you propose we make it not exist?
Even if we outlaw it, Russia and China won't, and without the tools to fight back against it, the web is basically done as anything but a propaganda platform.
It doesn't matter what business we're talking about. If you can't afford to pay the costs associated with running it, it's not a viable business. It's pretty fucking simple math.
And no, we're not talking about "too big to fail" businesses (those SHOULD be allowed to fail, IMHO); we're talking about AI, that thing they keep trying to shove down our throats and that we keep saying we don't want or need.
I don't know if you noticed this, but some really big companies with high stock valuations only exist because investors poured tons of capital into them to subsidize the service.
Uber could not undercut existing taxis if it didn't have years of free cash to artificially lower prices.
We are at the beginning of late-stage capitalism: profitable companies go under because of private capital firms, while absolute Ponzi frauds get their faces on Time magazine.
> I don't know if you noticed this, but some really big companies with high stock valuations only exist because investors poured tons of capital into them to subsidize the service.
Exactly, they PAID MONEY to make it work. Sure, they don't make the money back and depend on outside capital, but they are still paying their employees (not enough) and suppliers, etc.
I guess people are finally catching up to the big con: the "LLMs should be free of copyright" ampliganda. It is astroturfing at its best.
The end goal is controlling the rights to whatever corporations produce with LLMs without spending a dime, all while cutting jobs.
The writing was on the wall in CAPITAL LETTERS for the past two years. Why did Twitter restrict API access? Why did Reddit restrict API access? Why did GitHub/Bitbucket/GitLab restrict web UI functions for logged-out users?
They knew, and they wall-gardened the user-generated data.
C'mon, people.
And the hypocrisy of it all: if it's a liability, it's "user data"; if they can mine it, "nuh-uh bitch, it's ours."
Also, for the people arguing for free use of anything to build LLMs: regulations will come, once the big players control enough of the LLM market.
Serious Question: When an artist learns to draw by looking at the drawings of the masters, and practicing the techniques they pioneered, are the art students respecting the intellectual property rights of those masters?
Is not all of that student's work derivative of an education built on other people's work, whose creators will never see compensation for that student's use of it?
I agree with you on principle. However... How long do you think it will be until these very same "AI" companies copyright and patent every piece of content their algorithms spew out? Will they abide by the same carve-outs they want for themselves right now? Somehow I doubt it.
They want to ignore the laws for themselves, but enforce them onto everyone else. This "Rules for thee but not for me" bullshit can't be allowed to pass. Let's then abolish all copyright, and we'll see how long these companies last when everyone can just grab their stuff "for learning".
One, let's accept that there is a public domain, and cribbing freely from the public domain is A-OK. I can reproduce Michelangelo all I want, and it's all good. AI can crib from that all it wants.
AI can't invent. People can invent: I can have a wholly new idea that no one has ever had. AI does nothing but recombine existing ideas. It must have seed data, and it won't create anything for which it has no initial input: feed it only photographs, and it can't create a pencil drawing; feed it only black-and-white images, and it can't create color images.
People do not require cribbing from sources. Give a toddler supplies, and they will create. So we have established that there is a fundamental difference between the two creation processes: one is dependent on previous work, and one is not.
Now, with influences, you can ask, is your new creation dependent on the previous creation directly? If it is so utterly dependent on the prior work, such that your work could not possibly exist without that specific prior art, you might get sued. It will get debated and society's best approximation of a collective rational mind will determine if you copied or if you created something new that was merely inspired by prior art.
AI can only create by the direct existence of prior art. It fakes invention. Its work has to come from somewhere else.
People have shown how dependent it is on its sources with prompts like "portrait of a patriotic soldier superhero", which comes back with a goddamned portrait of Chris Evans. The prompt did not include his name, or "Captain", or "America", and it comes back with an MCU movie poster. AI does not create. People create.
I think there is a fundamental difference here. People are not corporations. People have always learned like this and will always learn like this. Do we really want to allow large corporations to take knowledge from people, then commercialize it and put these very same people out of work?
To me, this reads like "Giant-ATV-Based Taxi Service Couldn't Exist If Operators were Required to Pay Homeowners for Driving over their Houses."
If a business can't exist without externalizing its costs, that business should either a. not exist, or b. be forced to internalize those costs through licensing or fees. See also, major polluters.
Let's not ignore that the very sudden, very new licence fees for previously free and open (and user-generated) content are at near highway-robbery levels, and that there are attempts to apply them all retroactively.
“Ai” as it is being marketed is less about new technical developments being utilized and more about a fait accompli.
They want mass adoption of the automated plagiarism machine learning programs by users and companies, hoping that by the time the people being plagiarized notice, it’s too late to rip it all out.
That and otherwise devalue and anonymize work done by people to reduce the bargaining power of workers.
They also don't care if the open, free internet devolves into an illiterate, AI-generated mess, because they need an illiterate populace that isn't educated enough to question it anyway. They'll still have access to quality sources of information, while ensuring the lowest common denominator is literally fed garbage information. I mean, that was already true in the sense that clickbait outsold serious investigative news, so garbage clickbait became the norm and serious journalism is hard to come by and costly.
They love increasing barriers between them and the rest of the populace, physically and mentally.
Silicon Valley's core business model has for years been to break the law so blatantly and openly, while throwing money at the problem to scale, that by the time law enforcement catches up to you, you're an "indispensable" part of the modern world. See Uber, whose own publicly published business model was for years to burn money scaling and ignoring employment law until it could drive all competitors out of business and become an illegal monopoly, allowing it to raise prices to the point of profitability.
Most things that I could talk about were already addressed by other users (especially @OttoVonNoob@lemmy.ca), so I'll address a specific point: better models would skip this issue altogether.
Once this is solved, the corpus size will get way, way smaller. Then it would be rather feasible to train those models without offending the precious desire for greed of the American media mafia, in a way that still fulfils the entitlement of the GAFAM mafia.*
*I seriously doubt that, but I can't be arsed to argue it here - it's a drop in the bucket.
The thing is, I'm not sure at all that it's even physically possible for an LLM to be trained like a four-year-old; they learn in fundamentally different ways. Even very young children quickly learn by associating words with concepts and objects, not by forming a statistical model of how often x meaningless string of characters comes after every other meaningless string of characters (see the toy sketch at the end of this comment).
Similarly, when it comes to image classifiers, a child can often associate a word with a concept or object after a single example, without needing to be shown hundreds of thousands of examples before they can produce a wide variety of pixel-value mappings based on statistical association.
Moreover, a very large amount of the "progress" we've seen in the last few years has come only from simplifying the transformers and using ever larger datasets. For instance, GPT-4 is a big improvement on 3, but about the only major difference between the two models is that they threw nearly the entire text of the internet at 4, compared to 3's smaller dataset.
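To make the "statistical model of meaningless strings" point concrete, here is a minimal toy sketch (my own illustration, not how any production LLM is implemented; real models learn neural-network weights over subword tokens rather than keeping raw counts) of next-word prediction as pure co-occurrence statistics:

```python
import random
from collections import Counter, defaultdict

# Toy next-word "model": count how often each word follows another in a
# corpus, then sample successors in proportion to those counts. There are
# no concepts or objects here, only frequencies of strings.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    options = counts.get(prev)
    if not options:  # dead end: this word was never seen with a successor
        return random.choice(corpus)
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation purely from the counted frequencies.
word = "the"
generated = [word]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat sat on the mat and"
```

A child shown "the cat sat on the mat" once has learned something about cats and mats; this model has only learned that "cat" is sometimes followed by "sat". Scaling those counts up into billions of learned weights doesn't change the nature of what is being learned, which is the point above.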
My point is that the current approach - statistical association - is so crude that it'll probably get ditched in the near future anyway, with or without licencing matters. And that those better models (that won't be LLMs or diffusion-based) will probably skip this issue altogether.
The comparison with 4yos is there mostly to highlight how crude it is. I don't think either that it's viable to "train" models in the same way as we'd train a human being.