Considering that AI is "hallucinating", and able to make up information that seems true based on what the model was trained on, what is the difference between current AI and the human brain?
To elaborate a little:
Many people are unable to tell the difference between a "real human" and an AI; models have been documented "going rogue" and acting outside their parameters; they can lie; and they can compose stories and pictures based on their training. Because of those points, I can't see AI as less than human anymore.
When I think about this, it seems like the reason we can't create so-called "AGI": we have no proper example or understanding of how to build it, so we built the only thing we knew. Us.
The "hallucinating" is interesting to me specifically because that seems what is different between the AI of the past, and modern models that acts like our own brains.
I think we really don't want to accept what we have already accomplished, because we don't like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
I think the difference comes from understanding. When we inferior, fleshy ones "make up" information, it's usually based on our understanding (or misunderstanding) of the subject at hand. We will fill in the blanks in our knowledge with what we know about similar subjects.
An LLM doesn't understand its output, though. All it knows is that word_string_x immediately follows word_string_y in 84.821% of its training data, so word_string_x is what gets pasted next.
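To make that mechanic concrete, here's a toy sketch of the "pick the most common continuation" idea. It's just a bigram lookup table, nothing like a real transformer, and the training text is made up for illustration; real models learn far richer statistics, but the point about there being no understanding in the loop is the same:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram table that tracks how often each
# token follows another in its (made-up) training text, then always
# emits the most frequent continuation.
training_text = "the cat sat on the mat and the cat ran"
tokens = training_text.split()

# Count, for every token, which tokens followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Return the continuation seen most often after `prev`."""
    counts = follows[prev]
    return counts.most_common(1)[0][0] if counts else "<unk>"

# Greedily generate a short continuation, one token at a time.
word = "the"
output = [word]
for _ in range(5):
    word = next_token(word)
    output.append(word)

print(" ".join(output))  # prints something like: "the cat sat on the cat"
```

Notice that the output is grammatical-looking but false to the training text: no sentence "the cat sat on the cat" was ever seen. That's a hallucination in miniature, produced by nothing but frequency counts.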
For us, making up false information comes from gaps in our cognition, from personal agendas, from our own unique lived experiences, etc. For an LLM, hallucinations are just mathematical anomalies.