That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.
Likely a prefrontal cortex, the administrative center of the brain and generally host to human consciousness, as well as a dedicated memory system with learning plasticity.
Humans have systems that mirror LLMs, but LLMs are missing a few key components needed to be precise replicas of human brains, mostly because modeling them is computationally expensive and the goal is different.
Some specific things the brain has that LLMs don't directly account for: different neurochemicals (LLMs favor a single floating-point value per neuron), synaptogenesis, neurogenesis, synapse fire travel duration and myelin, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to roughly mirror the brain's Hebbian learning, but we do not perform precisely the same operations using the same systems.
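To make that last contrast concrete, here's a minimal sketch (my own illustration with made-up numbers, not taken from any actual brain model or real LLM): a Hebbian-style update changes a weight using only local pre- and post-synaptic activity, while a gradient-descent update needs an error signal computed against a target and pushed back to the weight.

```python
# Toy comparison of a Hebbian update vs a gradient-descent update.
# All values are arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                      # learning rate, arbitrary

w = rng.normal(size=4)          # one "neuron" with a vector of input weights
x = rng.normal(size=4)          # presynaptic activity / input features

# Hebbian-style update: "cells that fire together wire together".
# The weight change depends only on local pre- and post-synaptic activity.
post = np.tanh(w @ x)
w_hebbian = w + eta * post * x

# Gradient-descent update: the weight change depends on an error signal
# (here, squared error against a target) propagated back to each weight.
target = 1.0
y = np.tanh(w @ x)
error = y - target
grad = error * (1 - y**2) * x   # d(0.5 * error^2)/dw through the tanh
w_gradient = w - eta * grad

print("Hebbian step:         ", w_hebbian - w)
print("Gradient-descent step:", w_gradient - w)
```

The point is just that the two rules overlap in spirit (correlated activity strengthens connections) but are mechanically different computations.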
In my opinion, having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery; we have performed tons of experiments and gathered a whole lot of information on the topic.
As it stands, LLMs are largely reasonable approximations of the language center of the brain, but little more. It honestly may not take much to get what we consider consciousness humming in a system that includes an LLM as a component.
a prefrontal cortex, the administrative center of the brain and generally host to human consciousness.
That's an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?
If so, what about people who never develop an identifiable prefrontal cortex? I guess we could assume that a sufficient cortex is still there, just not identifiable. But what about people who suffer extensive damage to that part of the brain? Can one lose consciousness without, as it were, losing consciousness (i.e., becoming comatose in some way)?
a dedicated module for consciousness would bridge the gap
What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?
It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We're dealing with shit like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus than we were before.
And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and it would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.
It's an answer to whether one can be sure they are not just a fancy autocomplete.
More directly: we can't be sure we are not some autocomplete program in a fancy computer, but since we're having an experience, we are conscious programs.
When I say "how can you be sure you're not fancy auto-complete", I'm not talking about being an LLM or even simulation hypothesis. I'm saying that the way that LLMs are structured for their neural networks is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is that how do you know that the weights in your own nervous system aren't causing any given stimuli to always produce a specific response based on the most weighted pathways in your own nervous system? That's how auto-complete works. It's just predicting the most statistically probable responses based on the input after being filtered through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same.
And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically, what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes whose inner workings we're only barely starting to understand, just as we're only barely starting to understand our own neurology and consciousness.