I was on stable diffusion art and one of my comments got removed for saying the OP didn't "make" the AI-generated art. But he didn't make shit, the AI made it; he typed in a description and hit enter. I think we need a new word for when someone shares art an AI made, like they generated it or something. It feels insulting to actual artists to say you made art with AI.
I remember you. It was my thread you commented in. You're mad because a moderator removed your comment after you showed up and started antagonizing no one in particular?
I'd like to ask you a question. How much experience do you have with any Stable Diffusion tools?
I mean, I've used 'em. Still don't think the guy you were talking about "made" a thing. I was just mad the moderator removed it as "AI Art Hate" when all I was saying is that it's basically like going to Subway, ordering a sandwich, bringing it home, and then claiming you made yourself a sandwich.
This seems like a good place for discussion so if you'll humor me, I'd like to explain some things you might find in a prompt, maybe some things you weren't aware you could do. Web services don't allow for a lot of freedom to keep users from generating things outside their terms of use, but with open source tools you can get a lot more involved.
Take a look at these generation parameters:
sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>
To break down a bit of what's going on here: sarasf is the activation token for the LoRA of the character in this image, and <lora:sarasf_V2-10:0.7> loads the character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don't have activation tokens, and some that do can be used without their token to get different results.
The 0.7 in <lora:sarasf_V2-10:0.7> is the strength at which the weights from the LoRA are applied to the output. Lowering the number makes the concept manifest more weakly in the output. You can blend styles and concepts this way, with just the base model or with multiple LoRA at different strengths. You can even take a monochrome LoRA and push its weight into the negative to get some crazy colors.
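Mechanically, a LoRA at strength 0.7 is just scaled addition on top of the base model's weights. Here's a toy sketch in Python with single numbers standing in for real weight tensors (the function name and values are made up for illustration):

```python
def apply_lora(base_weight, lora_delta, strength):
    """Patch a base model weight with a LoRA's learned delta.

    strength is the 0.7 in <lora:sarasf_V2-10:0.7>; lower values make
    the concept manifest more weakly, and a negative value pushes the
    output away from the concept (the monochrome-LoRA trick).
    """
    return base_weight + strength * lora_delta

# Two LoRA at different strengths just stack on the same base weights:
w = apply_lora(1.0, 0.5, 0.7)    # partial effect at strength 0.7
w_neg = apply_lora(1.0, 0.5, -1.0)  # negative strength inverts the concept
```

Real LoRA deltas are low-rank matrix products applied per layer, but the blending math is this simple scaled sum.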
The Negative Prompt is where you include things you don't want in your image. Here, (worst quality, low quality:1.4) has its attention set to 1.4; attention is sort of like weight, but for tokens. LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights they're given. In this negative prompt, FastNegativeV2 is an embedding, also known as a Textual Inversion. It's sort of like a crystallized collection of tokens that tells the model something precise you want, without having to enter the tokens yourself or mess with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
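Under the hood, attention syntax like (worst quality, low quality:1.4) roughly scales those tokens' embedding vectors before they reach the model. A simplified sketch (real implementations also rescale to keep the overall prompt magnitude stable; the names here are made up):

```python
def emphasize(embedding, attention):
    # (token:1.4) scales the token's embedding vector by 1.4;
    # values below 1.0 de-emphasize, and 1.0 leaves it unchanged.
    return [x * attention for x in embedding]

boosted = emphasize([1.0, -2.0], 1.4)  # each component scaled by 1.4
```

That's why attention can only work "inside the weights it's given": it reshapes how strongly existing concepts are heard, it doesn't add new ones the way a LoRA does.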
In the next part, Steps is how many steps the model takes to solve the starting noise into an image; more steps take longer. VAE is the name of the Variational Autoencoder used in this generation. The VAE is responsible for decoding the model's latent output into the final pixels. A mismatch between VAE and model can yield blurry, desaturated images, so some models opt to have their VAE baked in. Size is the dimensions in pixels the image will be generated at. Seed is the number representation of the starting noise for the image; you need this to reproduce a specific image.
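Seed-based reproducibility is just ordinary pseudo-randomness: the same seed regenerates the same starting noise. A minimal illustration, with Python's random module standing in for the real noise tensor (function name made up):

```python
import random

def starting_noise(seed, n=4):
    # The seed fixes the pseudo-random starting noise; with the same
    # model, prompt, and settings, the same seed gives the same image.
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(n)]

starting_noise(1234) == starting_noise(1234)  # same seed, same noise
starting_noise(1234) == starting_noise(4321)  # different seed, different noise
```

This is also why sharing generation parameters without the seed isn't enough to let someone reproduce your exact image.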
Model is the name of the model used, and Sampler is the name of the algorithm that solves the noise into an image. There are a bunch of different samplers, also known as schedulers, each with their own trade-offs for speed, quality, and memory usage. CFG is basically how closely you want the model to follow your prompt. Some models can't handle high CFG values and flip out, giving over-exposed or nonsense output. Hires steps is the number of steps taken on the second pass that upscales the output; this is necessary to get higher-resolution images without visual artifacts. Hires upscaler is the name of the model used during the upscaling step, and again there are a ton of those, each with their own trade-offs and use cases.
After that are the parameters for ADetailer, an extension that does a post-processing pass to fix things like broken anatomy, faces, and hands. We'll just leave it at that, because I don't feel like explaining all the different settings found there.
You're 100% right. Hell, I've been there. I imagine your comment got deleted because the point of the thread wasn't to debate the definition of artistry.
People who ask for art didn't do the art. Wretch, you have merely stated a request. Worse, you have spoken your desires to a demon, and now you proudly display its gift as your own work.
It's just disappointing how few people are literate with standard occult practices. Never summon anything more powerful than you, never tell your innermost thoughts and desires to a demon, and if you are that stupid don't brag about it. Real JV league demonology.
It takes less effort to post a single paragraph of ranting than it does to edit the text on a meme image. In a way, this is a shittier shitpost than most.
'Cause it's not a very important post and it's mostly just me bitching lol. Almost posted it to one of the conversation subs but figured it fit here as well.
I don't really get how this is a counterpoint. I don't think anyone is contending that the pictures produced are reproducible by the same means. They're contending that the method of production isn't "making" art, and that they aren't an artist for starting the production process.
It's sort of like when rich people go to space and call themselves an astronaut. People have an idea of what an astronaut does and it isn't just "space tourist." If you fired back with "you try spending that much money and see how easy it is" then that wouldn't answer the point of why people don't want to call space tourists "astronauts."
I don't think their point was just that it's impossible to reproduce, more that there is skill, knowledge and choice put into getting close to the intended idea when working with AI output.
With that, I think your point breaks down when you compare it with something like photography. Often you aren't "making" the images you capture, but there is skill and artistry in the choices that capture the moment or picture you want. Obviously there is more control in photography, and I would disagree with anyone who uses AI and claims the same level of artistry as photography. But ultimately, the lines around art are so blurry in general that it seems incorrect to me to decidedly exclude AI-generated images.
So there is this feature where you can focus the AI on a specific file when generating responses. If that file contains only your own digital art, is the AI art produced yours? Not saying that's what you found, but maybe there is some nuance to AI art creation.