When I started trying to learn a new language, I was learning Spanish in school while also trying to learn German, Mandarin, and French on my own. Honestly, it worked pretty well. I could even keep them separate in my mind easily, which was surprising, since people told me I’d get them mixed up if I learned them at the same time.
Anyway, it was only for a year or so because I eventually lost the hyper fixation on language and stopped learning all of them, but yeah I kinda did the same thing you described.
When I’d get bored with one language I’d move on to another. Sometimes I’d spend a whole day on just one, sometimes I’d switch between all of them in the same day.
My first project in Rust was replicating this paper; I wanted to learn Rust but needed a project to work on because I hate learning from tutorials.
Of course, I had intended to go the OOP route because that’s what I was used to, and this was my first time using Rust… that was a bit of a headache. But I did eventually get it working and could watch the weights change in real time. (It was super slow, of course, but still cool.)
Anyway, I’ve started making a much, much faster version by using a queue to hold the neurons and synapses that need updating, instead of running through all of them every loop.
It’s like lightning fast compared to the old version; I’m very proud of that. However, my code is an absolute mess and is now filled with
`Vec<Arc<Mutex<>>>`
And I can’t implement the inhibition in a lazy way like I did the first time, so that’s not fun…
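For anyone curious, the core trick looks roughly like this. A minimal single-threaded sketch with made-up `Neuron` fields, not my actual code (which, as I said, is buried under the mutexes):

```rust
// Hypothetical, stripped-down neuron; the real project's types differ.
struct Neuron {
    potential: f32,
    threshold: f32,
    outgoing: Vec<(usize, f32)>, // (downstream neuron index, synaptic weight)
}

/// Advance one tick, touching only the neurons that received input last tick
/// instead of sweeping the entire population on every loop.
fn tick(neurons: &mut [Neuron], active: Vec<usize>) -> Vec<usize> {
    let mut next = Vec::new();
    for i in active {
        if neurons[i].potential >= neurons[i].threshold {
            neurons[i].potential = 0.0; // fire and reset
            let targets = neurons[i].outgoing.clone();
            for (t, w) in targets {
                neurons[t].potential += w; // deliver the spike
                next.push(t);              // only these need checking next tick
            }
        }
    }
    next
}
```

The real version needs the `Arc<Mutex<…>>` soup because it’s shared across threads, but the queue idea is the same: quiet neurons cost nothing.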
Let’s fucking goooooooo!!!!!
I am so ready for another Remedy game
I definitely don’t think the human brain could be modeled by a Turing machine.
In 1994, Hava Siegelmann proved that her (1991) computational model, the analog recurrent neural network (ARNN), can perform hypercomputation, using infinite-precision real weights for the synapses.
Since the human brain is largely composed of complex recurrent networks, it stands to reason that the same holds for it.
The human brain is an analog computer and is, as far as I’m aware, an undecidable system: you cannot algorithmically predict the behavior of the network with certainty. Predictable behavior can arise, but it’s probabilistic, not certain.
I also think I see what you’re saying with the thermometer being “conscious” of temperature, but that kind of collapses the definition of conscious to “influenced by,” which makes the word superfluous. Using “conscious” to refer to an ability that requires learning the patterns of different sources of influence seems like a more useful definition.
Also in the crazy unlikely event in which I actually end up creating a sentient thing, I’ll be hesitant to publish any work related to it.
If my theory about how focus/attention work is correct, anything capable of focus must be capable of experiencing pain/irritation/agitation. I’m not fond of the idea of announcing “hey, here’s how to create something that feels pain” to the world, since a lot of people around me don’t even feel empathy for their own kind.
I’ve yet to meet another one but you’re probably right lol
Of course it would, but see, only my ideology gets that!
The problem is Trotskyists and Marxist-Leninists and anarcho-communists and anarcho-syndicalists and …
/s
10yo: “Why doesn’t anyone listen to me?”
12yo: “Ah they just don’t listen to me because I’m a little kid”
14yo: “Ah they just don’t understand because I didn’t give enough evidence”
16yo: “They don’t listen to evidence because they spent their lives in this tiny religious town, the rest of the world is better”
18yo: “…I don’t want to live on this planet anymore”
Oh look it’s a wallpaper of Rache Bartmoss!
Wait wtf? When did they add corn to the community icon? This is getting out of hand…
well-made nuclear warheads
Pure fission bombs (the simplest kind of nuke) are literally just “let’s put enough of this plutonium close enough together that the chain reaction is sustained.” A very simple warhead could have the plutonium spaced just slightly farther apart than necessary for the reaction to happen, held at that distance by a breakable or crumple-able beam. When the shell hits basically anything, the beam buckles, the plutonium moves closer together, critical mass is reached, and adios.
Could this still act as a warhead if you added propulsion (and prayed to the void the plutonium doesn’t break/vibrate free during acceleration)? Yes. Would it almost certainly go off if it was dropped during a juggling act? Also yes.
if you don’t think my framework is useful, could you provide a more useful alternative or explain exactly where it fails? If you can it’d be a great help.
As for “skill issue”: while I think generalized comparisons of brains are possible (in fact, we have some now), I think you might be underestimating the nature of chaotic systems, or you may believe that consciousness will arise with equivalent qualia wherever it exists.
There is nothing saying that our brains process qualia in exactly the same way; quite the opposite. And yet we can reach the same capabilities of thought even with large-scale neurodivergences. The blind can still experience the world without their sense of sight; those with synesthesia can experience and understand reality even if their brain processes multiple stimuli as the same qualia. It is very possible that there are multiple different paths to consciousness, each with unique neurological behaviors that only make sense within their original mind and may have no analog in another.
The more I look into the functions of the brain (btw, I am by no means an expert and this is not my field), the more I realize many of our current models are limited by our desire to classify things discretely. The brain is an absolute mess. That is what makes it so hard to understand, but also what makes it so powerful.
It may not be possible to isolate qualia at all. It may not be possible to isolate certain thoughts or memories from the other circumstances in which they are recalled. There might not be elemental spike trains for a certain sense that are disjoint from other senses. And if that is the case, different individuals may well have different couplings of qualia, making them impossible to compare directly.
The idea that processing areas of the brain may be entangled in different ways across individuals (which, by the way, we do see; place-cell remapping is a simple example) means that even among members of the same species it likely won’t be possible to directly compare raw experiences, because the hardware required to process a specific experience for one individual might not exist in another individual’s mind.
Discrete ideas like communicable knowledge/relationships should (imo) be possible to isolate well enough that you could theoretically implant them into any being capable of understanding abstract thought, but raw experiences (i.e. qualia) most likely will not have this property.
Also, the project isn’t available online and is a mess, because it’s not my field and I have an irrational desire to build everything from scratch; I want to understand exactly how everything is implemented. And hey, it’s a personal hobby project, don’t judge lol
So far I’ve mostly only replicated the research of others. I have tried some experiments with my own ideas, but spiking neural nets are difficult to simulate on normal hardware, and I need a significant number of neurons, so currently I’m working on designing a more efficient implementation than the ones I’ve previously written.
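For anyone wondering why it’s slow: every neuron is basically integrating a little differential equation at every timestep. A leaky integrate-and-fire neuron, the usual entry-level spiking model (not necessarily the exact model from the paper I replicated), updates something like this:

```rust
/// One Euler step of a leaky integrate-and-fire neuron: the membrane
/// potential decays toward rest and jumps when input current arrives.
/// Returns the new potential and whether the neuron spiked.
fn lif_step(v: f32, input: f32, dt: f32) -> (f32, bool) {
    const V_REST: f32 = -65.0;   // resting potential, mV
    const V_THRESH: f32 = -50.0; // firing threshold, mV
    const TAU: f32 = 20.0;       // membrane time constant, ms

    let dv = (-(v - V_REST) + input) / TAU;
    let v_new = v + dv * dt;
    if v_new >= V_THRESH {
        (V_REST, true) // spike, then reset to rest
    } else {
        (v_new, false)
    }
}
```

Doing that for every neuron at every dt is what kills naive implementations on normal hardware, which is why I’m trying to only update neurons that actually receive input.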
After that, my plan is to experiment with my own designs for a spiking artificial hippocampus implementation. If my ideas are sound, I should be able to use similar systems to implement both short- and long-term memory storage.
If that succeeds I’ll be moving onto the main event of focus and attention which I also have some ideas for, but it really requires the other systems to be functional.
I probably won’t get that far but hey it’s at least interesting to think about and it’s honestly fun to watch a neural net learn patterns in real time even if it’s kinda slow.
Edit: removed because I accidentally commented the exact same thing twice since the post button didn’t seem to work the first time lol
I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not cyclically defined.
To be aware of the difference between self and not-self means being able to sense stimuli originating from the self, sense stimuli not originating from the self, and learn relationships between them.
As long as aspects of the self (like current and past thoughts) can be sensed (encoded into a representation the mind can work with directly; in our case, neural spike trains), there are senses which compare those sensations with other or past sensations, and the mind can learn patterns in those encodings (like spiking neural nets can), then it should be possible for conscious awareness to arise. (If you’re curious about the kind of learning that needs to happen, look into Tolman-Eichenbaum machines, though non-spiking ones aren’t really capable of self-learning.)
I hope that’s a clear enough “empirical” explanation for you.
As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia which can arise from the color green. You can say that it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains actually precludes the ability to define what actually is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this,” because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons firing in exactly the same way as in someone else? Absolutely not.
Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They don’t learn at all unless they are being trained by an outside source that gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.
A typical ANN trained this way does not learn from new experiences; furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
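To make that distinction concrete: “learning” for these models is an outer training loop that nudges weights by gradient descent, and it never runs at inference time. A toy one-weight example (obviously nothing like a real LLM, just the shape of the process):

```rust
// Toy "model": y = w * x. Learning happens ONLY in this outer loop;
// plain inference never writes to w.
fn main() {
    let data = [(1.0_f64, 2.0), (2.0, 4.0), (3.0, 6.0)]; // (input, expected output)
    let mut w = 0.0; // the single weight
    let lr = 0.05;   // learning rate

    for _epoch in 0..200 {
        for (x, target) in data {
            let y = w * x;                     // inference: all a deployed model does
            let grad = 2.0 * (y - target) * x; // gradient of squared error w.r.t. w
            w -= lr * grad;                    // the weight update: this IS the learning
        }
    }
    println!("learned w = {w:.3}"); // converges toward 2.0
}
```

A deployed LLM or chess net only ever runs the first line of that inner loop; nothing writes to the weights, so no learning happens.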
The self is that which acts. Did you know LLMs aren’t capable of being aware they took any action? Are you aware chess engines can’t do that either? There is no comparison mechanism between what was, what is, and what made that change. They cannot be self-aware, in the same way that a program hardcoded to kill processes other than itself is unaware: they literally lack any direct sense of their own actions. Once again, not only do you need to be able to sense that information, but the program then needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.
I don’t reject the idea of machines being conscious, in fact I’m literally trying to make a conscious machine just to see if I can (which yeah to most people sounds insane). But I do not think we agree on much else because learning is absolutely essential for any thing to be capable of a conscious action.
Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.
Awareness in my mind deals with the lowest level of abstract thinking. Can you recognize this thing and both compare and contrast it with other things, learning about its relation to other things on a basic level?
You could hardcode a computer to recognize its own process. But it’s not comparing itself to other processes, experiencing similarities and dissimilarities. Furthermore, unless it has some way to change at least the other processes that are not itself, it can’t really learn its own features/abilities.
A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, do you think the cat would think the shoes are part of itself? No. And yet the cat can learn that in certain ways it can act as though the shoes are part of itself, the same way we can recognize that tools are not us but are within our control.
We notice that there is a self, which is unlike our environment in that it does not control the environment directly; then there are the actions of the self, which can influence or be influenced directly by the environment; and finally there are things which we do not control directly at all.
That is the delineation I’m talking about. It’s more the delineation of control than just “this is me and that isn’t” because the term “self” is arbitrary.
We as social beings correlate self with identity, with the way we think we act compared to others, but to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.
Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?” Because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. The reason those systems are subconscious is because that’s the most efficient way to be. Furthermore, those processes are mostly useful only to the self internally, and not so much the rest of reality.
To be aware of self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating a more accurate sense of self-awareness just by having more senses of your automatic bodily processes.
It is equally important, if not more so, to experience more that is not the self rather than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.
Yes most definitely, I’d imagine most animals are conscious.
In fact my definition of sapience means several animals like crows and parrots and rats are capable of sapience.
Personally, I’m more a fan of the binary/discrete idea. I tend to go with the following definitions:
- Animate: capable of responding to stimuli
- Sentient: capable of recognizing experiences and debating the next best action to take
- Conscious: aware of the delineation between self and not self
- Sapient: capable of using abstract thinking and logic to solve problems without relying solely on memory or hardcoded actions (being able to apply knowledge abstractly to different but related problems)
If you could prove that plants have the ability to choose to scream rather than it being a reflexive response, then they would be sentient. Like a tree “screaming” only when other trees are around to hear.
If I cut myself, my body will move away reflexively, and it will scab over the wound. My immune system might “remember” some of the bacteria or viruses that get in and respond accordingly. But I don’t experience any of that as an action under my control; I’m not aware of all the work my body does in the background. I’m not sentient because my body can live on its own and respond to stimuli; I’m sentient because I am aware that stimuli exist and can choose how to react to some of them.
If you could prove that the tree as a whole, or some centralized control system in the tree, could recognize the difference between itself and another plant or some mycorrhizae, and choose how to respond to those encounters, then it would be conscious. But it seems more likely that the sharing of nutrients with others, the networking of the forest, is controlled not by the tree but by the natural reflexive responses built into its genome.
Also, if something is conscious, then it will exhibit individuality. You should be able to identify changes in behavior due to the self-referential systems required for the recognition of self. Plants and fungi grown in different circumstances should respond differently to the same circumstances.
If you taught a conscious fungus to play chess and then put it in a typical environment, you would expect to see it respond very differently than another member of its species who was not cursed with the knowledge of chess.
If a plant is conscious, you should be able to teach it to collaborate in ways that it normally would not, and again, after placing it in a natural environment, you should see it attempt those collaborations while its untrained peers would not.
Damn now I want to do some biology experiments…
And yet your closest match shows you are a different kind of leftist so I must hate you 😔
/s
Fair enough, some of the questions were kind of ambiguous tho
This isn’t my field, but it shouldn’t be horrible to drink a little sip of this, right? It’s just salts and amino acids and sugar, so I’d expect the worst-case scenario is that you majorly throw off your electrolyte balance and give your kidneys and liver a lot of amino acids to get rid of. But that’d probably require drinking a significant amount, yes?
Anyone with more bio knowledge want to correct or confirm this hypothesis?
Personally, I feel like this graphic doesn’t really present the data as the beautiful thing. It’s just text with AI-generated stereotyped characters for each country.
It is a pretty clean looking image though, anyone know what model was used? (Or know which models are good at making clean vector-like graphics like this?)