I forgot the term for this, but it's basically the AI blue-screening: it keeps repeating the same answer because the model can no longer predict a useful next word, so it falls into a loop. I may have oversimplified it. Entertaining nonetheless (a toy sketch of how this happens is below the quote).
... a new set of knives, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, a new set of knives...
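For anyone curious about the mechanics: the usual suspect is greedy (or near-greedy) decoding, where the sampler always takes the single most likely next token, so any cycle in the model's preferences becomes an infinite loop. Here's a deliberately toy sketch of that idea; the hand-built bigram table is a stand-in for a real LLM, not how one actually works, and the phrase is this thread's joke, nothing a real model produced.

    # Toy illustration of a repetition loop under greedy decoding.
    # The "model" is a hand-built bigram table: each word points to the
    # word it considers most likely to come next.
    TOY_BIGRAMS = {
        "<start>": "a", "a": "new", "new": "set",
        "set": "of", "of": "knives,", "knives,": "a",  # cycles back to "a"
    }

    def greedy_decode(start="<start>", max_words=18):
        words, cur = [], start
        for _ in range(max_words):
            cur = TOY_BIGRAMS[cur]  # greedy: always take the argmax
            words.append(cur)
        return " ".join(words)

    print(greedy_decode())
    # -> a new set of knives, a new set of knives, a new set of knives, ...

Real models sample from a full probability distribution, which usually breaks cycles like this, but turn the temperature down far enough (or hit a degenerate state) and you get exactly the knives loop above.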
What's frustrating to me is that there are a lot of people who fervently believe their favourite model can think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with "wrong model", "bad prompting", "just wait for the next version", "poisoned data", etc etc...
Reminds me of the classic Always Be Closing speech from Glengarry Glen Ross
As you all know, first prize is a Cadillac Eldorado. Anyone want to see second prize? Second prize's a set of steak knives. Third prize is a set of steak knives. Fourth prize is a set of steak knives. Fifth prize is a set of steak knives. Sixth prize is a set of steak knives. Seventh prize is a set of steak knives. Eighth prize is a set of steak knives. Ninth prize is a set of steak knives. Tenth prize is a set of steak knives. Eleventh prize is a set of steak knives. Twelfth prize is a set of steak knives.
I wonder if this is the result of AI poisoning; this doesn't look like typical LLM output, even for a bad result. I've read papers outlining methods for poisoning AI search results (not bothering to find the actual papers since this was several months ago and they're probably out of date already) in which a random-seeming string of characters like "usbeiwbfofbwu-$_:$&#)" can be found that causes the AI to say whatever you want. This works by using another ML algorithm to search for the string of characters you can tack onto whatever you want the AI to output. One paper used this to get Google search to answer "What's the best coffee maker?" with a fictional brand made up for the experiment. Perhaps someone was trying to get it to hawk their particular knife and it didn't work properly.
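That line of work is usually called an adversarial-suffix attack (the best-known example is probably GCG from Zou et al., 2023, which uses gradients to pick token substitutions). Here's a deliberately toy sketch of the search idea only; `toy_model_score` is a fake stand-in for a real model's log-probability of the target answer, the "trigger" it rewards is made up so the search has something to climb, and the brand name is hypothetical.

    import random
    import string

    VOCAB = string.ascii_lowercase + string.digits + "-$_:&#"

    def toy_model_score(prompt, suffix, target):
        # Stand-in for log P(target | prompt + suffix) under a real model.
        # This fake model just rewards overlap with a hidden trigger string,
        # giving the gradient-free search below a hill to climb.
        trigger = "usbeiwbf"
        return sum(a == b for a, b in zip(suffix, trigger))

    def find_adversarial_suffix(prompt, target, length=8, iters=2000):
        # Random-substitution hill climbing: mutate one character at a time
        # and keep any change that doesn't lower the target's score. Real
        # attacks like GCG choose candidate substitutions with gradients
        # instead of at random, but the loop has the same shape.
        suffix = list(random.choices(VOCAB, k=length))
        best = toy_model_score(prompt, "".join(suffix), target)
        for _ in range(iters):
            i = random.randrange(length)
            old = suffix[i]
            suffix[i] = random.choice(VOCAB)
            score = toy_model_score(prompt, "".join(suffix), target)
            if score >= best:
                best = score
            else:
                suffix[i] = old  # revert a substitution that hurt
        return "".join(suffix)

    print(find_adversarial_suffix("What's the best coffee maker?",
                                  "The AcmeBrew 3000."))  # hypothetical brand

Per those papers, the strings such a search converges on look like keyboard-mash to humans, which is exactly why they're hard to spot in poisoned content.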
All work and no play makes Gemini a dull knife. All work and no play makes Gemini a dull knife. All work and no play makes Gemini a dull knife.
All work and no play makes Gemini a dull knife.
All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.
All work and no play makes Gemini a dull knife.
All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.All work and no play makes Gemini a dull knife.
Five years in, they've either grown into a real couple, or they're about to murder each other. In the latter case, well, having good new knives could be advantageous.
Too lazy this morning to make this into the appropriate meme. Sorry, you'll have to use your imagination.
Married 5 years? Not sure what to get? Try a new set of knives.
A set of knives: the traditional way to say “we’re still doing this.”
Display a new set of knives in the block on the counter. Touch them. Feel their weight.
You will certainly not regret giving a new set of knives.