intelligence rule
Has anyone else noticed some delivery apps using AI generated images for food items when a restaurant doesn't have an actual photo?
Always looks fucking awful too. How is that in any way helpful? "Here is what a slice of pizza generally looks like". Cool. Thanks, I guess.
they kinda always did that with stock photos - i dun get why they wud generate images tho- plenty cc0 pizza pics online ~
Reminds me of the Amazon product pages where the images are clearly ones where the product was photoshopped onto some other image. Can't even be bothered to use the product for real and take some photos? I hate everyone involved in that shit.
I don't really use amazon to order, I use it when I try to find something I don't know the name of or something. I truly believe that real images are a thing of the past on that website. It's so bad and inaccurate. Everything is scaled wrong.
No
lol
Does ChatGPT even answer that monosyllabically?
Excellent question. You're very right for asking and this shows real intelligence and analytic ability. Let's have a deeper look at the information I've found:
Some users say: no. Others report: maybe, but mostly no.
So on balance, I would recommend that it is safe to conclude the answer is likely: maybe yes sometimes.
Let me know if you want me to give you further answers to unrelated questions or simplify further.
Perfect. Long winded, unnecessary flattery, hedged non-answer. Just what I need in a loyal companion.
Well done. You know there are actual people who talk/write like that. They usually think they're highly intelligent. Just like ChatGPT I guess.
I'm on break, I clicked on the image, and one of my 7-year-old students standing behind me saw it and immediately said "ChatGPT". Ha!
yeah cardboard don't melt like that
The fact that it responded only with "no" implies a previous exchange in the conversation, in which it was prompted to either
or
It seems like the first case applies here, since it actually gives a little post-amble in the image-gen response.
Apparently, with ChatGPT, it doesn't actually look at the generated image. Otherwise it would be able to tell that the user's image is equivalent to the generated one (since the tokens would be literally identical, so it's like asking an LLM "are these two paragraphs the same text?").
aaaaaaanyway- dont use VLMs to check if an image was generated! there r actual models trained for that task. VLMs r not.
Yeah, I see these kinds of misunderstandings all the time: people ask ChatGPT to do something with an image, and it fails, apologizes, and does the same thing again. The LLM doesn't do anything with the image itself, it's calling some other service to do it. It can't apologize for the output, or try harder to "make sure" that glass of wine is full to the brim; what it says and what the tool produces are entirely disconnected.
Even "recognizing" details in an image, some other service is parsing the image and writing a text description for the LLM. It's not the same service as the one that does the generation, no part of this pipeline would ever have the chance to realize "hey, this is the same image".
yea- tru...
tool calls really do kinda obfuscate what exactly is going on in a continuous-feeling system
it makes one issue seem like the result of the main system, even if thads not rlli tru - - -
but noooo all these companies jus luv presentin their llms as perfect oracles.....
but heyyyyy whadddoikno - im jus a sili lil consumer., - - -
llms r an important steppin stone towards what we wud call "ai" - but woarg current consumer facin systems r spectacularly meh n llms r bein overused in places they shouldn't
Artificial Idiocy
Simulated Intelligence
…..considering, if you don't think the AI has been coded around this case, it's interesting to wonder about the types of ways it might say yes.
I don't think I follow your comment.
Well, AI, as we use it today, is just an LLM. Which is 'take a look at all the text you have access to and predict the next thing said', more or less (I think, I'm not a professional), and then you can use that same concept for art or videos or sound or whatever.
So, to have it generate an image, then give it its own image back and ask if it's AI generated: it's obvious to us, but to the AI, unless it was programmed to recognize that, it would have to look at the other images it already had access to (and used to create the image) and ask, is this image in here? Or, can I work out what an AI-generated image contains?
Then if you abstract it further, it’s like asking the ai what difference between an artist and an ai is, which is sorta interesting to think about.
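The "predict the next thing said" idea from the comment above can be sketched as a toy bigram counter. This is nothing like a real LLM (those use neural networks over subword tokens), but the training objective is the same principle: count what tends to follow what, then emit the likeliest continuation.

```python
from collections import Counter, defaultdict

# Tiny made-up training "corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent continuation seen in training data.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than "mat" or "fish"
```

Scaled up over vastly more text, with learned representations instead of raw counts, this is the sense in which a model "predicts the next thing said" without having any notion of where its own outputs came from.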
God the number of people I've seen try to use LLMs to "detect" AI generated photos/text......