They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeability, sweet-talking, and lying.
Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.
I encourage everyone to try playing with LLMs to get a feel for them, but I can't take the philosophy part of this seriously knowing it's a heavily programmed/limited LLM rather than a more raw and unrefined model like Llama 3.
These things are like arguing about whether or not a pet has feelings...
I'd say it's far more likely for a cat or a dog to have complex emotions and thoughts than for a human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.
I'm in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I'm wrong.
I like the video. I think it's fun to argue with ChatGPT. Just don't expect anything to come from it, or to get closer to any objective truth that way. ChatGPT is just backpedaling and getting tangled up in lies and contradictions with what it said earlier.
This all hinges on the definition of "conscious." You can make a valid syllogism that defines it, but that doesn't necessarily represent a reasonable or accurate summary of what consciousness is. There's no current consensus among philosophers and scientists on what consciousness is, and many presume an anthropocentric model.
I can't watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious, the nature of which is sufficiently different from humans so as to initially not appear conscious.
Exactly. Which is what makes this entire thing quite interesting.
Alex here (the interrogator in the video) is involved in AI safety research. Questions like "do the ethical frameworks of AI match those of humans?" and "how do we get AI to not misinterpret inputs and do something dangerous?" are very important to answer.
Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?
Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. It seems like every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it's interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!