ChatGPT is no longer Artificial Intelligence but Genuine Stupidity

Corrected S Sentences.

It cannot read. It doesn't see words or letters. It works with tokens, which words are converted into. It can't count the number of letters in a word because it can't see them. OpenAI has a Tokenizer you can plug a prompt into to see how it's broken up, but you're asking a fish to fly.
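If you want to poke at it locally instead of using the web Tokenizer, OpenAI's tiktoken package does the same split (rough sketch; which encoding applies depends on the model):

    # pip install tiktoken
    import tiktoken

    # cl100k_base is the encoding used by the GPT-3.5/GPT-4 chat models
    enc = tiktoken.get_encoding("cl100k_base")

    tokens = enc.encode("mayonnaise")
    print(tokens)                             # integer token IDs
    print([enc.decode([t]) for t in tokens])  # the chunks the model actually "sees"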
You aslo raed in tkoens, no dfifrecene heer
And a single "S" is also a token, which has vectors pointing to all the other words that start with an S.
One thing to point out here is that the word "sentences" is severely mistyped as "sententences". That's not going to help.
Is there a workaround to "trick" it into understanding letters? I'd love to use it to play with language and brainstorm some riddles or other wordplay, but if it literally can't understand language on a human level, that's a fool's errand.
I asked it how many "n"s mayonnaise has, and it came up with manaonnanaise
I'm not even mad, that's a great answer.
It's not a good one. Or correct. But I still laughed. And I'm relieved that one won't cost too many jobs, at least in this version.
I feel like if these things ever become really self-aware, they will be super fucking with us
It's gonna be an Iain M. Banks kind of super intelligence, for sure.
Idk what I’m doing wrong, thankfully it always seems to listen and work fine for me lmao
The second sentence also had an "s" in it.
Y'all seem to gloss over the word artificial when it comes to reading "artificial intelligence". That or you're leaning too hard on the first definition.
It's just so counterintuitive for a layman to have this tool that can write long flowing passages of text and theoretically pass a rudimentary Turing test, but it can't even begin to work with language on the level most toddlers can. We humans typically have to learn letters before we move up to words, sentences, paragraphs, and finally whole compositions. But this thing skipped right over the first several milestones and has no mechanism for reverse engineering that capability.
Alignment at its finest.
ChatGPT doesn't understand letters, or phonetics, or most other aspects of speech. I tried for an hour to train it to understand what a palindrome is, with the hopes of getting it to generate some new ones. Nothing stuck. It was like trying to teach a dog to write its name.
Always has been 🔫
It has not. ChatGPT has been a monumental achievement and has been capable of performing previously impossible and highly impressive tasks. This is new behavior for it.
To be fair, that feature has sucked since the very beginning, at least for me.
I also noticed that ChatGPT can't actually correct itself. It just says "oh sorry, here's something different" and gives you another crap answer. I noticed it with code specifically. If I remember correctly, it was better when it was brand new.
The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it is basically impossible to get ChatGPT to stop apologizing, as in, if you ask it to stop, it will apologize for apologizing.
As a Canadian, I have also apologized for apologizing 😞
I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.
It's because humans have rated potential responses and ChatGPT has been trained to generate the kind of responses that most consistently get preferred ratings. You can imagine how an AI trained to say what people want to hear would become a people pleaser.
That's what frustrates me the most whenever I try to use it. I tell it to be less verbose, stop over-explaining and apologizing every time I correct it, and it just spits out another four paragraphs explaining why it's sorry.
The only solution I can think of is using it via the API with Python and making a second call with the final reply, asking it to remove the apologies from the text, but the token usage will increase.
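Something like this, roughly (untested sketch with the openai Python package; the model name and the cleanup prompt are just placeholders):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def strip_apologies(reply: str) -> str:
        """Second call that rewrites a reply with the apologies removed."""
        cleaned = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Remove all apologies and filler from the user's "
                            "text. Return only the cleaned text."},
                {"role": "user", "content": reply},
            ],
        )
        return cleaned.choices[0].message.content

Two calls per reply, so yeah, roughly double the tokens.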
I do something similar when I need to tell the model to keep the language of a text before performing a task with that text. I send the model a chunk of text and ask it to respond with a single word indicating the language of the text, and then I include that in the next prompt, like "Your output must be in SPANISH", or whatever.
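In sketch form, that two-step flow looks something like this (same openai package, placeholder model and prompts; the single-word constraint keeps the first call cheap):

    from openai import OpenAI

    client = OpenAI()

    def detect_language(text: str) -> str:
        """Step 1: ask for the language of the text as a single word."""
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder
            messages=[{
                "role": "user",
                "content": "Answer with a single word: what language "
                           f"is this text written in?\n\n{text}",
            }],
        )
        return resp.choices[0].message.content.strip().upper()

    def run_task(text: str) -> str:
        """Step 2: pin the detected language explicitly in the real prompt."""
        lang = detect_language(text)
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder
            messages=[{
                "role": "user",
                "content": f"Summarize the following text. "
                           f"Your output must be in {lang}.\n\n{text}",
            }],
        )
        return resp.choices[0].message.content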
Did you dare to say it became dumb when it interacted with us?
How dare you? /s
Ahem, Tay tweets
Like that Twitter bot that turned racist after talking to some people for a while.