To be clear, I'm not finding fault with you specifically, I think most people use terms like conscious/aware/etc the way you do.
The way of thinking about it that I find useful is defining "consciousness" to be the same as "world model". YMMV on whether you agree with that or find it useful. It leads to some results that seem absurd at first, like someone in another comment pointing out that it means a thermometer is "conscious" of the temperature. But really, why not? It's only a base definition, a way to find some objective foundation. Clearly, humans have a lot more going on than a thermometer, and that definition lets us focus on the more interesting bits.
As stated, I'm not much into the qualia hype, but this part is I think an interesting avenue of thought:
it likely won’t be possible to directly compare raw experiences because the required hardware to process a specific experience for one individual might not exist in the other individual’s mind.
That seems unlikely if you think the human brain is equivalent to a Turing machine. If you could prove that the human brain isn't equivalent, that would be super interesting. Maybe it's a hypercomputer for reasons we can't explain yet.
Your project sounds interesting, if you ever publish it or a paper about it, I'd love to see it! I can't judge about hobby projects being messy lol.
I searched DDG for "is garfield neutered" and the AI "helpfully" said "Yes, Garfield is neutered, as indicated in various comic strips and public service announcements featuring the character. This aspect of his character is often used humorously in the context of pet care." and then linked to this SRoMG edit as a source:
Not a lot of actual references to go on; the 1979-11-24 strip is the closest thing:
Whether that's because he is fixed and traumatized by it, or unfixed and scared of the possibility, remains unclear. Some other online comments say that his grandchildren are referenced in an animated story, but since Garfield's age is unclear here anyways, it's possible he's fixed now even if he has had children.
Think it's just supposed to be the bear's sidekick. Something of an explanation here:
In "Bear Police", some naughty kids are spraying graffiti on the wall. Here comes Mr. Bear the Policeman, along with his friend, a white dove! The cop is, however, still a bear, and he proceeds to use his bear paw to somewhat graphically knock out the brains of at least one of the vandalizing kids, before he and his bird friend go for a well-deserved cup of coffee and a donut.
One of the weaker PBF comics IMO; it's one of the earlier ones that didn't quite match the later style. It probably would've worked better to go more graphic and show the bear eating them, or to show the bear in a cave with their bones, still wearing the police uniform, or something kind of absurd like that.
I think the younger one is threatening to kill himself to stop the future from happening and his future self from existing. The last panel isn't "in the future"; it's just showing that he got the job in the present day because of the threat (technically in the future, but only by a few minutes).
Recently, some malicious users started to use an exploit where they would post rule-violating content and then delete their account. This would prevent admins and mods from viewing the user profile to find other posts, and would also prevent federation of ban actions.
The new release fixes these problems. Thanks to @flamingos-cant for contributing to solve this.
Now, here's an idea that just plain and simple didn't work. (Of course, it has plenty of company in that regard.)
I was thinking about Western films and that common scene of some guy getting thrown out the swinging doors and into the street. In this case, every customer in the place is either running or being thrown out―implying that there's a pretty tough and angry character somewhere inside. And how tough a guy is this mystery person? Well, that's his bear parked outside. It's confusing, obtuse, esoteric, and strange―in other words, it's a Far Side cartoon.
Well, it seems kind of absurd, but why doesn't a thermometer have a world model? Taken as a system, it's "conscious" of the temperature.
If you scale up enough mechanical feedback loops or if/then statements, why don't you get something you can call "conscious"?
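To make the "pile of if/then statements" concrete, here's a toy thermostat loop (a minimal sketch, every name in it is made up for illustration): its one internal variable tracks something in the outside world, which is exactly the kind of bare-bones "world model" I mean.

```python
def thermostat_step(reading_c: float, setpoint_c: float = 20.0) -> str:
    """One tick of a trivial feedback loop. The reading is the system's
    entire 'model' of the world; the if/then branches are all of its behavior."""
    if reading_c < setpoint_c - 0.5:
        return "heater on"
    if reading_c > setpoint_c + 0.5:
        return "heater off"
    return "hold"

for temp in (18.0, 19.8, 21.2):
    print(temp, "->", thermostat_step(temp))
```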
The distinction you're making between online and offline seems to be orthogonal. Would an alien species much more complex than us laugh and say "Of course humans are entirely reactive, not capable of true thought. All their short lives are spent reacting to input, some of it just takes longer to process than other input"? Conversely, if a pile of if/then statements is complex enough that it appears to be decoupled from immediate sensory input, like a Busy Beaver machine, is that good enough?
Put another way, try to have a truly novel thought, unrelated to the total input you've received in your life. Are you just reactive?
The 37% is referencing what's called the Secretary problem on Wikipedia and comes from 1/ℯ (there's a quick simulation sketched after the quoted problem statement below):
Although there are many variations, the basic problem can be stated as follows:
There is a single position to fill.
There are n applicants for the position, and the value of n is known.
The applicants, if all seen together, can be ranked from best to worst unambiguously.
The applicants are interviewed sequentially in random order, with each order being equally likely.
Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable.
The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far.
The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, with payoff defined to be one for the best applicant and zero otherwise.
A candidate is defined as an applicant who, when interviewed, is better than all the applicants interviewed previously. Skip is used to mean "reject immediately after the interview". Since the objective in the problem is to select the single best applicant, only candidates will be considered for acceptance. The "candidate" in this context corresponds to the concept of record in permutation.
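For concreteness, here's a quick Monte Carlo sketch of the resulting rule (skip roughly the first n/ℯ applicants, then take the first candidate better than everything seen so far); with enough trials the success rate comes out around 1/ℯ ≈ 37%. All the names are mine, it's just an illustration.

```python
import math
import random

def run_trial(n: int) -> bool:
    """One hiring round; returns True if the overall best applicant is chosen."""
    applicants = list(range(n))      # higher number = better applicant
    random.shuffle(applicants)       # interviewed in random order
    cutoff = int(n / math.e)         # reject roughly the first 37% outright
    best_seen = max(applicants[:cutoff], default=-1)
    for score in applicants[cutoff:]:
        if score > best_seen:        # first "candidate" after the cutoff
            return score == n - 1    # did we happen to pick the overall best?
    return applicants[-1] == n - 1   # no candidate appeared: stuck with the last one

trials = 100_000
wins = sum(run_trial(100) for _ in range(trials))
print(f"picked the best applicant in {wins / trials:.1%} of trials")  # ~37%
```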
I think pointing out the circular definition is important, because even in this comment, you've said "To be aware of the difference between self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, ...". Sure, but that doesn't provide a useful framework IMO.
For qualia, I'm not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that's just a skill issue. I think it's likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We'll have a coherent way of comparing representations across those and deciding if they're equivalent, and that's good enough for me.
I think we agree on LLMs and chess engines; they don't learn as you use them. I've worked with both under the hood, and my point is exactly that: they're a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.
Anyways, I'm interested in hearing more about your project if it's publicly available somewhere.
I made another comment pointing this out for a similar definition, but OK so awareness is being able to "recognize", and recognize in turn means "To realize or discover the nature of something" (using Wiktionary, but pick your favorite dictionary), and "realize" means "To become aware of or understand", completing the loop. I point that out, because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia is just philosophical navel-gazing for what it's worth, much like common definitions of "awareness". I think it's perfectly possible in theory to read someone's brain to see how something is represented and then twiddle someone else's brain in the same way to cause the same experience, or compare the two to see if they're equivalent.
As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn't itself. It can look at processes and say "this other process consumes more memory than I do". It's super primitive and hardcoded, but why doesn't that count? I also think learning is separate but related. If we take the definition of "consciousness" as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well and better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.
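As a toy version of that (a minimal sketch assuming the third-party psutil library is installed; the "kill everything that isn't me" variant is deliberately left out), a process can enumerate its peers and compare itself to them:

```python
import os
import psutil  # third-party: pip install psutil

me = psutil.Process(os.getpid())
my_rss = me.memory_info().rss

for proc in psutil.process_iter(["pid", "name", "memory_info"]):
    if proc.info["pid"] == me.pid:
        continue  # that one is me
    mem = proc.info["memory_info"]
    other_rss = mem.rss if mem else 0  # may be None if access is denied
    relation = "more" if other_rss > my_rss else "less or equal"
    print(f"{proc.info['name']} (pid {proc.info['pid']}) uses {relation} memory than I do")
```

It's about as hardcoded as self-recognition gets, which is sort of the point.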
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.
I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I've reached the "Sure, why not?" stage. To be a useful definition though, we need to go beyond that and start asking questions like "Conscious of what?"
What do "sense" and "perceived" mean? I think they both loop back to "aware", and the reason I point that out is that circular definitions are useless. How can you say that plants lack a sense of self and consciousness, if you can't even define those terms properly? What about crown shyness? Trees seem to be able to tell the difference between themselves and others.
As an example of the circularity, "sense" means (using Wiktionary, but pick your favorite if you don't like it) "Any of the manners by which living beings perceive the physical world". What does "perceive" mean? "To become aware of, through the physical senses". So in your definition, "aware" loops back to "aware" (Wiktionary also has a definition of "sense" that just defines it as "awareness", for a more direct route, too).
I meant that plants don't have thoughts more in the sense of "woah, dude", pushing back on something without any explanatory power. But really, how do you define "thought"? I actually think Wiktionary is slightly more helpful here, in that it defines "thought" as "A representation created in the mind without the use of one's faculties of vision, sound, smell, touch, or taste". That's kind of getting to what I've commented elsewhere, with trying to come up with a more objective definition based around "world model". Basing all of these definitions on "representation" or "world model" seems to be as close to an objective definition as we can get.
Of course, that brings up the question of "What is a model?" / "What does represent mean?". Is that just pushing the circularity elsewhere? I think not, if you accept a looser definition. If anything has an internal state that appears to correlate to external state, then it has a world model, and is at some level "conscious". You have to accept things that many people don't want to, such as that AI is conscious of much of the universe (albeit experienced through the narrow peephole of human-written text). I just kind of embraced that though and said "sure, why not?". Maybe it's not satisfying philosophically, but it's pragmatically useful.
I recognize this bit from the 1990 show. It seems like the comics don't have a lot of crossover with plots from the books, but maybe the TV show borrowed more from them?
I'm not advocating for consciousness as a fundamental quality of the universe. I think that lacks explanatory power and isn't really in the realm of science. I'm kind of coming at it the opposite way and pushing for a more concrete and empirical definition of consciousness.
What does "aware" mean, or "knowledge"? I think those are going to be circular definitions, maybe filtered through a few other words like "comprehend" or "perceive".
Does a plant act with deliberate intention when it starts growing from a seed?
To be clear, my beef is more with the definition of "conscious" being useless and/or circular in most cases. I'm not saying "woah, what if plants have thoughts dude" as in the meme, but whatever definition you come up with, you have to evaluate why it does or doesn't include plants, simple animals, or AI.
When you say "aware of the delineation between self and not self", what do you mean by "aware"? I've found that it's often a circular definition, maybe with a few extra words thrown in to obscure the chain, like "know", "comprehend", "perceive", etc.
Also, is a computer program that knows which process it is self-aware? If not, why? It's so simple, and yet without a concrete definition it's hard to really reject that.
On the other extreme, are we truly self aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, having just faint glimmers of self awareness compared to them, much like the computer program seems to us?
I don't think I'm talking about panpsychism. To me, that's just giving up and hand wavey. I'm much more interested in trying to come up with a more concrete, empirical definition. I think questions like "Well, why aren't plants conscious" or "Why isn't an LLM conscious" are good ways to explore the limits of any particular definition and find things it fails to explain properly.
I don't think a rock or electron could be considered conscious, for example. Neither has an internal model of the world in any way.