That "human" skeleton in the fourth item gave it away immediately. Now that I look at it further, "Isolation & Surveillance" and a picture of a megaphone??? "Fear as a tool of control" with a lightning bolt in someone's head??? Did OP even read their slop before vomiting it here?
Yeah I've seen so much AI slop with the yellow tinge. It's kinda hilarious that we're watching AI model collapse in real time but the bubble keeps growing
I've also heard theories that it's related to lots of "golden hour" photos, but ultimately (and this is one of the significant problems with machine learning) the specific cause is unknowable due to the nature of the software
What's wrong with the skeleton? It's stylised of course as these sorts of icons tend to be, but generally correct. Pelvis, spine, ribs, head, etc.
The megaphone seems like a very good way to evoke images of an abusive overseer controlling the camp's prisoners using technology of the modern day, an effective image for a section on monitoring and control, no?
There is no standardised symbol for fear within a person's mind, so again, a stylised symbol showing a lightning bolt is fine. Especially given that it is likely there on purpose: think shocks. Shocks of a different kind you might receive under an evil, oppressive prison camp system (imagine the sudden shock in one's mind as a guard shouts or lashes out at you; I would certainly consider symbolising that in this manner).
It's as if you've never looked at anything anyone's made with simple clipart and the like before, and assume everything must be extremely deep and custom designed by experts?
Even if this were made with the help of AI, I don't see the message being any less valid, just because the person didn't go download an image editor to a PC, learn how to use it, learn how to import SVG icons and research for the most appropriate ones, build the image and export it appropriately, etc.
Not everybody is as skilled or capable as you or I may be in producing something that we might consider simple. Heck, some people only have a smartphone; not everybody has the luxury of owning a PC and proper software, nor the time or inclination to learn such tools.
The message in this image is conveyed very well, and is relevant to the current fascist regime's actions in the USA (and indeed is a universally important message).
If you want to suggest it's bad (or "slop", as you so evocatively put it) just because you don't like the image creator used to put it to print, well, that's a weird hill to die on, to be honest.
You better hope your country never duplicates the USA's slide into fascism, or you yourself may one day end up in a camp... or worse. How quick to attack the people trying to raise awareness of these abuses of human rights then, I wonder?
I think it is a pelvis on the left, as others have said. I have to admit, though, I thought I was looking at two skulls, probably because I was biased to read from left to right, so I just accepted the left one as a skull, and the right one actually does look like a skull. My first thought was that it was an abstract depiction of overcrowding, so it was intentional to show two skeletons pushed close together.
Makes me wonder how many memes are "tainted" with oldschool ML before generative AI was common vernacular, like edge enhancement, translation and such.
A lot? What's the threshold before it's considered bad?
What about 'edge enhancing' NNs like NNEDI3? Or GANs that absolutely 'paint in' inferred details from their training? How big is the model before it becomes 'generative?'
What about a deinterlacer network that's been trained on other interlaced footage?
My point is there is an infinitely fine gradient through time between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there's an array of modern ML-based 'editors' that are questionably generative, and most people probably don't know they are working in the background.
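For contrast, the "good old bilinear upscaling" end of that gradient is just fixed arithmetic: every output pixel is a weighted average of four input pixels, with no training data and no learned weights anywhere. A rough pure-Python sketch (grayscale only, edge-clamped, purely to illustrate the point):

```python
def bilinear_upscale(img, new_w, new_h):
    """Upscale a 2D grayscale image (list of rows of floats) by bilinear
    interpolation. Nothing 'generative' here: each output pixel is a fixed
    weighted average of at most four input pixels."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Map the output coordinate back into input space.
        fy = y * (old_h - 1) / max(new_h - 1, 1)
        y0 = int(fy)
        y1 = min(y0 + 1, old_h - 1)  # clamp at the bottom edge
        wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (old_w - 1) / max(new_w - 1, 1)
            x0 = int(fx)
            x1 = min(x0 + 1, old_w - 1)  # clamp at the right edge
            wx = fx - x0
            # Blend horizontally on the two rows, then vertically.
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

# Upscaling a 2x2 gradient to 3x3: corners are preserved exactly,
# the new in-between pixels are plain averages of their neighbours.
small = [[0.0, 1.0],
         [1.0, 2.0]]
big = bilinear_upscale(small, 3, 3)
```

A NN-based "enhancer" replaces that fixed averaging with millions of trained weights, which is exactly why the line between "upscaling" and "generating" gets blurry.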
Not a great metric either, as models with simpler output (like text embedding models, which output a single number representing 'similarity', or machine vision models to recognize objects) are extensively trained.
Wow. It certainly passes the test for first viewing. I fell for it until I read this comment and cannot unsee it now. Good reminder how fast propaganda of any subject can propagate, I guess
You should see the commenter that I blocked under mine. Apparently, some people don't have the technological means to go to PowerPoint Online and Ctrl-C/Ctrl-V some stock images, but they do have the means to prompt slop by mail. Silly me for assuming privilege.
This might get me a lot of downvotes, but when AI 'draws' text it generates each individual letter, which makes them a bit wiggly and often not on a straight line. The fact that these are all grammatically correct sentences, all on perfectly straight lines, gives me the impression this isn't raw output. Could be that the image was made with text added on top later, but even the most advanced AI generators aren't this consistent with text.
This is entirely possible. Could also be that the whole image is AI generated, but the maker manually inserted the text (it's not so hard to erase the text the AI would have generated) because the AI messed up. You can, for example, first ask ChatGPT to generate a text, but if you then ask it to generate an image with that text, it will be all wobbly and full of errors because of how the generation process works.
an AI upscaler/enhancer to sharpen the image.
There is no automatic fix for the first problem, because the AI spits out shapes that look like letters but aren't.
Nah, nah, I’m not saying that the text was AI image generated in any way. I just suspect that the image (after the text was put in place by a human) was fed through some enhancer/upscaler. I remember seeing a comic a while ago that reeked of AI, but it turns out that it was a fully human-made comic fed through some AI cartoon enhancer (for… some reason? The original looked fine. Maybe to steal credit?).
I do doubt that any of what I described is the case, though. I feel like the text would look less crisp if so.
I do wholeheartedly believe the icons were generated separately still.
So there are programs like Nightshade and Glaze that are used to give images an anti-AI treatment. These programs can leave artifacts if the intensity is tuned too high.
(Source: Wife's an artist that regularly uses Glaze when posting her art online)
This seems like a pretty harmless use of AI, this doesn’t hurt artists or graphic designers, it just saves some time to create an image that helps OP communicate more effectively. You can argue about the environmental impact of AI in general, but for one image?
I don’t understand blanket hatred of ALL AI, there are some cases in which it is more useful than harmful.
Edit: my original comment here was a reply to a different comment, so I removed it. But now that I've commented here: what made you doubt it? I ask because I don't think there is AI yet that can output text this consistent.
This is a picture that a friend of mine took on a trip to England. This was probably made by OpenAI's latest model, because this is also one of many abominable Ghibli images that were probably part of some kind of meme. You can see that the text quality is infinitely better than before. It spells, displays and even puts it all into perspective correctly. However, it seems to only really be able to output a few different fonts, which you can even spot in the post that we're commenting on. The slop mutates ever closer to slipping past your defenses...
EDIT: Bonus picture, I can never see Ghibli anything anymore without this coming to mind
If you're right, AI has finally caught up with text; that's been a long time coming. Of course the same holds true for that poster: the text could be overlaid manually, but the fact it uses the same font makes it too much of a coincidence. Crazy how fast this is going. Thanks for proving me wrong & providing me with that delicious butt. :)