Timmy the Pencil
That professor was Jeff Winger
We've been had
Finishing up a rewatch of Community as we speak. Funny to see the gimmick (purportedly) used in real life.
He was so streets ahead.
That was my first thought!
How would we even know if an AI is conscious? We can't even know that other humans are conscious; we haven't yet solved the hard problem of consciousness.
Does anybody else feel rather solipsistic or is it just me?
I doubt you feel that way since I'm the only person that really exists.
Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn't a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.
I don't really know why it started, when it stopped, or why it stopped, but it's weird looking back on it.
A Cicero a day and your solipsism goes away.
Rigour is important, and at the end of the day we don't really know anything. However, this stuff is supposed to be practical; at a certain arbitrary point you need to say "nah, I'm certain enough of this statement being true that I can claim that it's true, thus I know it."
Underrated joke
13 year old me after watching Vanilla Sky:
Let's try to skip the philosophical mental masturbation, and focus on practical philosophical matters.
Consciousness can be a thousand things, but let's say that it's "knowledge of itself". As such, a conscious being must necessarily be able to hold knowledge.
In turn, knowledge boils down to a belief that is both true and justified.
LLMs show awful logical reasoning, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they're unable to hold knowledge. Thus they don't have consciousness.
Here's a simple practical example of that:
their claims are about things that they cannot physically experience
Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don't have knowledge about such things?
Seems a valid answer. It doesn't "know" who Jane Etta Pitt's son is. Just because X -> Y doesn't mean that, given Y, you know X; there could be an alternative path to get Y. (If it rains the street gets wet, but a wet street doesn't tell you it rained: a street cleaner could have wet it.)
Also, "knowing self" is just another way of saying meta-cognition, something it can do to a limited extent.
Finally, I am not even confident in the standard definition of knowledge anymore. For all I know, you just know how to answer questions.
[Replying to myself to avoid editing the above]
Here's another example. This time without involving names of RL people, only logical reasoning.
And here's a situation showing that it's bullshit:
You could also have a situation where C is a subset of B, and it would obey the prompt to the letter. Like this:
That sounds like an AI that has no context window. Context windows are words thrown into the prompt, after the user's prompt is done, to refine the response. The most basic approach is "feed the last n tokens of the questions and responses into the window". Since the last response talked about Jane Etta Pitt, the AI would then process it and return "Brad Pitt" as an answer.
The more advanced versions have context memories (look up RAG vector databases) that learn the definitions of a bunch of nouns; instead of the previous conversation, the system sees the word "aglet" and injects the phrase "an aglet is the plastic thing at the end of a shoelace" into the context window.
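In code, the basic version looks something like this toy sketch (every name and the character budget here are made up for illustration; real systems count tokens and do vector-similarity retrieval over embeddings, not substring matching):

```python
# Toy sketch: a sliding context window over recent turns, plus naive
# "retrieval" that injects stored definitions. All names and numbers
# are illustrative only.

definitions = {
    "aglet": "an aglet is the plastic thing at the end of a shoelace",
}

history = []               # alternating user/assistant turns, oldest first
MAX_CONTEXT_CHARS = 2000   # crude stand-in for an n-token budget

def build_prompt(user_message):
    # "RAG lite": if a stored noun appears in the message, inject its definition.
    retrieved = [text for noun, text in definitions.items()
                 if noun in user_message.lower()]
    # Sliding window: keep only as much recent history as fits the budget.
    window, used = [], 0
    for turn in reversed(history):
        if used + len(turn) > MAX_CONTEXT_CHARS:
            break
        window.append(turn)
        used += len(turn)
    window.reverse()
    return "\n".join(retrieved + window + ["User: " + user_message])

# Because the earlier turns naming Jane Etta Pitt stay inside the window,
# a follow-up like "who is her son?" can resolve to Brad Pitt.
history.append("User: who is Jane Etta Pitt?")
history.append("Assistant: Jane Etta Pitt is Brad Pitt's mother.")
print(build_prompt("who is her son?"))
print(build_prompt("what is an aglet?"))   # the stored definition gets injected
```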
The example might just be to prevent lawsuits.
I'd say that, in a sense, you answered your own question by asking a question.
ChatGPT has no curiosity. It doesn't ask about things unless it needs specific clarification. We know you're conscious because you can come up with novel questions that ChatGPT wouldn't ask spontaneously.
We don't even know what we mean when we say "humans are conscious".
Also I have yet to see a rebuttal to "consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself" that wasn't spiritual and/or kooky.
Look at the history of things we thought made humans human, until we learned they weren't unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of those was, in its time, fiercely held as "this separates us from the animals", and even caused obvious biological observations to be dismissed. IMO "consciousness" is another of those, some quirk of our biology we desperately cling to as a defining factor of our assumed uniqueness.
To be clear, LLMs are not sentient or alive. They're just tools. But the discourse on consciousness is a distraction: if we are one day genuinely confronted with this moral issue, we will not find a clear binary between "conscious" and "not conscious". Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone "not conscious"? There are no exact answers to be found.
I've defined what I mean by consciousness - a subjective experience, qualia. Not simply a reaction to an input, but something experiencing the input. That thing doing the experiencing can't be physical. And if it isn't, I don't see why it should be tied to humans specifically, and not, say, a rock. An AI could absolutely have it, since we have no idea how consciousness works, what can be conscious, or what it attaches itself to. And I also see no reason why the output needs to 'know' that it's conscious; a conscious LLM could see itself saying absolute nonsense without being able to affect its output to communicate that it's conscious.
Noooooo Timmy the Pencil! I haven't even seen this demonstration but I am deeply affected.
here, have a film that'll make you creeped out by pencils from now on https://youtu.be/jo-hmifFevo
Wait, wasn't this directly from Community, the very first episode?
That professor's name? Albert Einstein. And everyone clapped.
Yes it was - minus the googly eyes
Found it
https://youtu.be/z906aLyP5fg?si=YEpk6AQLqxn0UP6z
Good job OP. Took a scene from a show from 15 years ago and added some craft supplies from Kohls. Very creative.
community may have gotten it from somewhere
Sure why not
WTF? My boy Tim didn't deserve to go out like that!
Look at the bright side: there are two Tiny Timmys now.
Tbf I'd gasp too, like wth
Humans are so good at imagining things as alive that just reading a story about Timmy the pencil elicits feelings of sympathy and reactions.
We are not good judges of things in general. Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe, but that end-goal is a lot further away than the tech-bros want to admit. We have decades of absolute slop and likely a few disasters to wade through.
And there's going to be a LOT of people falling in love with super-advanced chat bots that don't experience the world in any way.
next you're going to tell me the moon doesn't have a face on it
Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe
But where's the money in that?
More likely we'll be introduced to an anthropomorphic pencil, induced to fall in love with it, and then told by a machine that we need to pay $10/mo or the pencil gets it.
And there’s going to be a LOT of people falling in love with super-advanced chat bots that don’t experience the world in any way.
People fall in and out of love all the time. I think the real consequence of online digital romance - particularly with some shitty knock-off AI - is that you're going to have a wave of young people who see romance as entirely transactional. Not some deep bond shared between two living people, but an emotional feed bar you hit to feel something in exchange for a fee.
When they exit their bubbles and discover other people aren't feed bars to slap for gratification, they're going to get frustrated and confused by the years spent in their Skinner Boxes. And that's going to leave us with a very easily radicalized young male population.
Everyone interacts with the world sooner or later. The question is whether you developed the muscles to survive during childhood, or you came out of your home as an emotional slab of veal, ripe for someone else to feast upon.
TIMMY NO!
And now ChatGPT has a friendly-sounding voice with simulated emotional inflections...
That's why I love Ex Machina so much. Way ahead of its time both in showing the hubris of rich tech-bros and the dangers of false empathy.
Were people maybe not shocked at the action or outburst of anger? Why are we assuming every reaction is because of the death of something “conscious”?
i mean, i just read the post to my very sweet, empathetic teen. her immediate reaction was, "nooo, Tim! 😢"
edit - to clarify, i don't think she was reacting to an outburst, i think she immediately demonstrated that some people anthropomorphize very easily.
humans are social creatures (even if some of us don't tend to think of ourselves that way). it serves us, and the majority of us are very good at imagining what others might be thinking (even if our imaginings don't reflect reality), or identifying faces where there are none (see - outlets, googly eyes).
Right, it's shocking that he snaps the pencil because the listeners were playing along, and then he suddenly went from pretending to have a friend to pretending to murder said friend. It's the same reason you might gasp when a friendly NPC gets murdered in your D&D game: you didn't think they were real, but you were willing to pretend they were.
The AI hype doesn't come from people who are pretending. It's a different thing.
For the keen observer there's quite the difference between a make-believe gasp and a genuine reaction gasp, mostly in terms of timing, which is even more noticeable for unexpected events.
Make-believe requires thinking, so it happens slower than instinctive and emotional reactions. That's why modern acting is mainly about stuff like Method Acting, where the actor is supposed to be "living truthfully under imaginary circumstances" - in other words, letting themselves believe "I am this person in this situation" and feeling what's going on as if it were happening to them, thus genuinely living the moment and reacting to events - because audience members who are good observers and/or have high empathy can tell faking from genuine feeling.
So in this case, even if the audience were playing along as you say, that doesn't mean they were intellectually simulating their reactions, especially in a setting where those individuals are not the center of attention - in my experience most people tend to just go along with it (i.e. let their instincts do their thing) unless they feel they're being judged, or for some psychological or even physiological reason have difficulty behaving naturally in the presence of other humans.
So it makes some sense that this situation showed people's instinctive reactions.
And if you look, even here on Lemmy, at people doggedly making the case that AI actually thinks - and read not just their words but also the way they use them and which ones they choose, the methods they're using for thinking (as reflected in how they pick arguments and put them together, most notably with the use of "arguments on vocabulary", i.e. "proving" their point by interpreting the words that form definitions differently), and how strongly (i.e. emotionally) bound they are to their conclusion that AI thinks - it's fair to say that those who use their instincts the most when interacting with LLMs, rather than cold intellect, are the most convinced that the thing truly thinks.
Seriously, I get that AI is annoying in how it's being used these days, but has the second guy seriously never heard of "anthropomorphizing"? Never seen Castaway? Or played Portal?
Nobody actually thinks these things are conscious; even with AI, I've never heard the most diehard fans of the technology claim it's "conscious."
(edit): I guess, to be fair, he did say "imagining" not "believing". But now I'm even less sure what his point was, tbh.
My interpretation was that they're exactly talking about anthropomorphization, that's what we're good at. Put googly eyes on a random object and people will immediately ascribe it human properties, even though it's just three objects in a certain arrangement.
In the case of LLMs, the googly eyes are our language and the chat interface that it's displayed in. The anthropomorphization isn't inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that's stringing words together in a certain way.
Most discussion I've seen about "ai" centers around what the programs are "trying" to do, or what they "know" or "hallucinate". That's a lot of agency being given to advanced word predictors.
We're good at scamming investors into thinking that a room full of monkeys on typewriters can be "AI." And all it takes to make that happen is to sink time, resources, lives and money (ESPECIALLY money) into building an army of fusion-powered robots to beat the monkeys into working just a little bit harder.
Because that's business's solution to everything: work harder, not smarter.
We’re good at scamming investors into thinking that a room full of monkeys on typewriters can be “AI.”
Current generations of LLMs, from everything I've learned, are basically really, really, really large rooms of monkeys pounding on keyboards. The algorithm that sifts through that mess to find actual meaning isn't even particularly new or revolutionary; we just never had databases large enough, indexable fast enough, to actually find the emergent patterns and connections between fields.
If you pile enough libraries in front of you and can sift out the exact lines that you know will make you feel a certain way, you can arrange that pile of information in ways that will give you almost any result you want.
The thing that tricks a lot of us is that we're never really conscious of what we want. We want to be tricked, though: we want to control and manipulate something that seems conscious for our own ends. That gives a feeling of power, so your brain validates the experience by telling you the story that it's alive. You see pictures that look neat and depict the scenes you wanted to see in your mind, so your brain convinces you that it's inventing things out of nothing and that it has to be magically smart to be able to mash Pikachu with Darth Vader.
Anthropomorphism is one hell of a drug
Alan Watts, talking on the subject of Buddhist vegetarianism, said that even if vegetables and animals both suffer when we eat them, vegetables don't scream as loudly. It is not good for your own mental state to perceive something else suffering, whether or not that thing is actually suffering, because it puts you in an unhealthy position of ignoring your own inherent sense of compassion.
If you've ever had the pleasure of dealing with an abattoir worker, the emotional strain is telling. Spending day after miserable day slaughtering confused, scared, captive animals until you're covered head to toe in their blood is... not good for your mental health.
The Texas Chainsaw Massacre often gets joked about because its "based on a true story" wasn't in Texas, didn't involve a chainsaw, and wasn't a massacre. But what it did get right was how Ed Gein, the Butcher of Plainfield, had his mind warped by decades of raising and killing farm animals for a living.
I used to tell my kids "Just pretend to sleep, trick me into thinking you are sleeping, I don't know the difference. Just pretend, lay there with your eyes closed."
I could tell, of course, and they did end up asleep, but I think that is like the Turing test - if you are talking to someone and it's not a person but you can't tell, then from your perspective it's a person. Not necessarily from the perspective of the machine; we can only know our own experience, so that is the measure.
The whole time everyone has been freaking out about AI I've been quietly enjoying just this fact. Like "neat, this place triggers my fear response", "neat, advanced text prediction triggers my 'talking to person' response."
I wish everyone was as aware of the response systems they have.
It also triggers in tech-bros the "I need to worship this shiny new thing like it's literally a deity sent from heaven to grace all mankind" response.
There are two alternative solutions to the Turing Test. The one here is when the judges become dumb and can't differentiate between AI and humans. That is the one in the meme.
The other is when the humans become dumb and can only regurgitate memes that closely mimic how AI chat bots respond to human chatters. Ever make a comment on a controversial topic, only for someone to argue with you without referencing any specific thing you said? I did, and called them a bot as an insult. Then I checked their comment history and figured out it was a stolen account.
Stolze and Fink may not be underrated per se, but I wish their work was more widely known. I own several copies of Reign and I think I'm due for a WTNV retread.
In a robotics lab where I once worked, they used to have a large industrial robot arm with a binocular vision platform mounted on it. It used the two cameras to track an object's position in three-dimensional space and stay a set distance from the object.
It worked the way our eyes work: adjusting the pan and tilt of the cameras quickly for small movements, and adjusting the pan and tilt of the platform and the position of the arm to follow larger movements.
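Roughly, the control split looked something like this (the gains, thresholds, and names below are all invented for illustration; the real rig used the vendor's servo controllers):

```python
# Toy sketch of a two-tier tracking loop: fast camera pan/tilt absorbs
# small errors, slower platform/arm motion follows large ones.
# Every number and name here is made up for illustration.

CAMERA_RANGE_DEG = 5.0   # errors the camera servos can absorb alone
CAMERA_GAIN = 0.8        # fast proportional correction (the "saccade")
ARM_GAIN = 0.2           # slow proportional correction (the follow)

def tracking_step(error_deg):
    """Split one axis's tracking error between camera and arm motion."""
    if abs(error_deg) <= CAMERA_RANGE_DEG:
        # Small movement: the quick camera servo darts after it alone,
        # which is exactly what makes the motion read as an eye.
        return CAMERA_GAIN * error_deg, 0.0
    # Large movement: the platform/arm follows slowly while the camera
    # covers the remaining error in the meantime.
    arm_move = ARM_GAIN * error_deg
    camera_move = CAMERA_GAIN * (error_deg - arm_move)
    return camera_move, arm_move

print(tracking_step(2.0))    # (1.6, 0.0)  camera-only "saccade"
print(tracking_step(20.0))   # (12.8, 4.0) arm follows the big move
```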
Viewers watching the robot would get an eerie and false sense of consciousness from the robot, because the camera movements matched what we would see people's eyes do.
Someone also put a necktie on the robot which didn't hurt the illusion.