The enforcement of copyright law is really simple.
If you were a kid who used Napster in the early 2000s to download the latest album by The Offspring or Destiny's Child because you couldn't afford the CD, then you need to go to court! And potentially face criminal sanctions or punitive damages to the RIAA for each song you downloaded, because you're an evil pirate! You wouldn't steal a car! Creators must be paid!
If you created educational videos on YouTube in the 2010s and featured a copyrighted video or audio clip, then even if it's fair use, and even if it's used to make a legitimate point, you're getting demonetised. That's assuming your videos don't disappear or get shadow banned, and your account isn't shut down entirely. Oh, and good luck finding your way through YouTube's convoluted DMCA process! All creators are equal in deserving pay, but some are more equal than others!
And if you're a corporation with a market capitalisation of US$1.5 trillion (Google/Alphabet) or US$2.3 trillion (Microsoft), then you can freely use everyone's intellectual property to train your generative AI bots. Suddenly creators don't deserve to be paid a cent.
Apparently, an individual downloading a single file is like stealing a car. But a trillion-dollar corporation stealing every car is just good business.
@ajsadauskas @technology @music@fedibb.ml @music@lemmy.ml If corporations couldn't get away with doing things that would get an individual fined or arrested, how would they maintain their competitive edge? Profits above everything, baby!
#Copyright does not protect the concepts and themes of an artistic presentation. So training autocomplete tools like #ChatGPT, or generative art tools along the lines of #StabilityAI, on huge amounts of copyrighted material available on the web doesn't seem to trespass on the rights actually created by copyright law. That is, neither the trained model parameters nor the output qualifies as an infringing copy.
The fact that big corporations have heated up the rhetoric, characterizing even small-scale copyright infringement as an existential threat rather than free marketing, perhaps misleads people into thinking copyright grants the owner total control over the future of their creations. But law is about statutes and precedents, not feelings, which is why big corporations wouldn't be training their models on billions of copyrighted works if there were a credible risk of paying statutory damages on a per-work basis.
If there is a moral right to the "something" that has been gifted to these models by their training, it has not been well described, let alone recognized in law as the property of the creators. How is this "something" that an AI model supposedly steals to be distinguished from the piecewise appreciation of the art, summed over all its human viewers?
So perhaps the real problem is the moral outrage created by the corporations that have for decades equated copyright infringement with being ambushed by a gang of seagoing rapists, kidnappers, killers, and robbers (pirates). In that vein, Germany is discussing classifying copyright infringement as a form of "digital violence", making the analogy more exact.
@ajsadauskas @technology @music@fedibb.ml @music@lemmy.ml I don't think you understand the problem.
You see, companies have long struggled due to piracy.
They have to come up with solutions to piracy and implement them. That is hard work, and it doesn't do a thing against piracy, which, heck, didn't even lower their revenue in the first place, because it has been shown that people who pirate stuff also buy stuff.
Therefore, it only makes sense that if you have a lot of money, you don't have to pay...
:hides her book library containing the entirety of Project Gutenberg, every StackExchange site, Wikipedia & Books & Wiktionary, 300K textbooks, and archives of Popular Mechanics dating to 1907, amongst other periodicals, plus tens of thousands of comics and graphic novels from three major publishers:
:points fingers at eyes, then points at the media industries: Come at me brosephs.
@ajsadauskas @technology @music@fedibb.ml @music@lemmy.ml A post I saw over the weekend proposed a browser extension that replaces mentions of "AI" with "the Torment Nexus". An alternative find-replace could be "AI" to "copyright laundering"!
@ajsadauskas @technology @music Yeah, sounds about right. See also the comparisons of, e.g., wage theft vs shoplifting, which strike me as a somewhat similar kind of disparity.
This is a complete oversimplification of everything.
Yes, downloading music for free is theft. Creators do deserve to be paid for their work.
YouTube ignores fair use, which is wrong. But they run the platform, so they can do what they like. Content ID is the worst idea they ever came up with, but again, they're just trying to avoid being sued over and over and over again, so I kind of understand their position. It sucks, but it's their platform, and they have the right to run it how they like.
I would argue that using information for the training of an AI is fair use. The information is just used to set weights that the AI then uses to generate text; the actual text is not stored in any database anywhere. So whether Microsoft does it, or I do it, it is the same. I can train an LLM on data as well. I just don't have the money for the very expensive hardware to do it.
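For what it's worth, here's a toy sketch of what "setting weights" means (a made-up PyTorch example at laughably small scale, not anyone's actual training pipeline): a training step nudges the model's parameters towards predicting the next token of a text, then the text itself is discarded, and only the updated numbers remain.

```
# Toy illustration of a single language-model training step.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 16
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token -> vector
    nn.Linear(embed_dim, vocab_size),     # vector -> next-token scores
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (32,))  # stand-in for a tokenised text
inputs, targets = tokens[:-1], tokens[1:]     # learn to predict each next token

loss = nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()       # the weights shift a little
optimizer.zero_grad()
del tokens             # the text is not kept; only the adjusted weights remain
```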
@ajsadauskas I never used Napster. I found out Google's own YouTube was giving me free music. While searching for why my hard drive space was being used up so quickly (remember pre-terabyte drives… pre-GB?), I found massive files in my Windows system cache folders. Always after I had listened to YouTube. Google was basically storing every song I listened to on my own hard drive. Google was just lazy. Even MySpace had a JS routine called cache-buster. Thanks, Google.
@ajsadauskas @technology @music@lemmy.ml @music@fedibb.ml Alphabet is also a roughly $3T corporation, similar to Microsoft's $2.3T; its stock is split into GOOG and GOOGL, and each is about $1.5T in market cap, but both together make up the total Alphabet stock.