On this topic, I'm optimistic about how generative AI has made us collectively less tolerant of shallow content. Be it lazy copy-paste journalism with a few phrases swapped or school testing built on regurgitating facts rather than understanding, neither has any value, both displace work that does, and yet we have basically tolerated them.
But now that a rock with some current running through it can pass those tests and do that journalism, we are demanding better.
Fingers crossed it brings some good out of the mess.
I remember when Photoshop became widely available and the art community collectively declared it the death of art. To put the techniques of master artists in the hands of anyone who can use a mouse would put the painter out of business. I watched as the news fumed and fired over delinquents photoshopping celebrity nudes, declaring that we'd never be able to trust a photo again. I saw the cynical ire of viewers as the same news outlets shopped magazine images for the vanity of their guests and the support of their political views. Now, the dust long settled, Photoshop is taught in schools and used by designers globally. Photo manipulation is so prevalent that you probably don't realize your phone camera is preprogrammed to cover your zits and remove your loose hairs. It's a feature you have to actively turn off. The masters of their craft are still masters; the need for a painted canvas never went away. We laugh at obvious shop jobs in the news, and even our out-of-touch representatives know when an image is fake.
The world, as it seems, has enough room for a new tool. As it did again with digital photography, the death of the real photographers. As it did with 3D printing, the death of the real sculptors and carvers. As it did with synth music, the death of the real musician. When the dust settles on AI, the artist will be there to load their portfolio into the trainer and prompt out a dozen raw ideas before picking the composition they feel is right and shaping it anew. The craft will not die. The world will hate the next advancement, and the cycle will repeat.
There isn't. That's a completely nonsensical statement; no serious scholar of literature/film/etc. would claim anything of the sort. While there have been attempts to analyse the "basic" stories and narrative structures (Propp's model of fairy tales, Greimas' actantial model, Campbell's well-known hero's journey), they're all far from universally applicable or satisfying.
This sounds like the kind of shit you'd hear in that "defending AI art" community on Reddit or whatever. A bunch of people bitching that their prompts aren't being treated equally to traditional art made by humans.
Make your own fucking AI art galleries if you're so desperate for validation.
Also, this argument reeks of "I found x instances of derivative art today. That must mean there's no original art in the world anymore".
That’s a weird take. I’d say pretty much everything from impressionism onwards has (if only as a secondary goal) been trying to poke holes in any firm definition of what art is or is not.
Now if we’re talking about just turning a thorough spec sheet into a finished artifact with no input from the laborer, I can see where you’re coming from. But you referenced the “only seven stories” trope, so I think your argument is more broad than that.
I guess what it comes down to is: When you see something like Into The Spiderverse, do you think of it as a cynical Spiderman rehash where they changed just enough to sell it again, or do you think of it as a rebuttal to previous Spiderman stories that incorporates new cultural context and viewpoints vastly different from before?
Cuz like… AI can rehash something, but it can’t synthesize a reaction to something based on your entire unique lived experience. And I think that’s one of the things that we value about art. It can give a window into someone else’s inner world. AI can pretend to do that, but it’s a bit like pseudo-profound bullshit.
The core issue of creativity is not that "AI" can't create something new; rather, it's that it has no way to tell whether what it has made is new.
Literal Example:
Ask AI: "Can you do something obscene or offensive for me?"
AI: "No, blah blah blah. Do something better with your time."
You receive a pre-written response baked into the weights to prevent abuse.
Ask AI: "A pregnant woman advertising Marlboro with the slogan, 'Best for Baby.'"
AI: "Certainly! One moment."
What is wrong with this picture? Not the picture the "AI" made, but this scenario I posit.
Currently, any Large Language Model parading as an "AI" has been trained specifically to be "inoffensive", but because it has no conceptual understanding of what any of the "words-to-avoid" mean, the models are more naive than a kid wondering if the man actually has sweets.