An Alarming Number of Gen Z AI Users Think It's Conscious

That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.
Consciousness is an emergent property; self-awareness and singularity are generally its key defining features.
There is no secret sauce in LLMs that would make them any more conscious than Wikipedia.
secret sauce
What would such a secret sauce look like? Like, what is it in humans, for example?
Likely a prefrontal cortex, the administrative center of the brain and generally the host of human consciousness, as well as a dedicated memory system with learning plasticity.
Humans have systems that mirror LLMs, but LLMs are missing a few key components needed to be precise replicas of human brains, mostly because modeling them would be computationally expensive and the goal is different.
Some specific things the brain has that LLMs don't directly account for are different neurochemicals (LLMs favor a single floating-point value per neuron), synaptogenesis, neurogenesis, synapse fire travel duration and myelin, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to somewhat mirror the brain's Hebbian learning, but we do not perform precisely the same operations using the same systems (see the toy sketch after this comment).
In my opinion, having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery; we have performed tons of experiments and received a whole lot of information on the topic.
As it stands, LLMs are largely reasonable approximations of the language center of the brain but little more. It may honestly not take much to get what we consider consciousness humming in a system that includes an LLM as a component.
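To make the gradient-descent vs. Hebbian-learning contrast above concrete, here is a minimal toy sketch. The four-input linear "neuron", the learning rate, and the squared-error loss are all made up for illustration; neither the brain nor any real LLM works at this level of simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)      # presynaptic activity / input features
w = rng.normal(size=4)      # connection weights
lr = 0.1                    # learning rate
target = 1.0                # desired output, used only by the gradient-descent rule

# Hebbian-style update: "neurons that fire together wire together".
# The weight change is proportional to the product of pre- and postsynaptic
# activity; no explicit loss or target is involved.
y = np.dot(w, x)            # postsynaptic activity of a linear "neuron"
w_hebbian = w + lr * y * x

# Gradient-descent update: move the weights downhill on a squared-error loss,
# which requires knowing how wrong the output was.
y = np.dot(w, x)
error = y - target
grad = error * x            # gradient of 0.5 * (w.x - target)^2 with respect to w
w_gradient = w - lr * grad

print("Hebbian update:         ", w_hebbian)
print("Gradient-descent update:", w_gradient)
```

The point is only the shape of the two rules: the Hebbian one is local and needs no target, while gradient descent needs an error signal to push the weights toward, which is why the two are only "somewhat" mirrored rather than equivalent.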
a prefrontal cortex, the administrative center of the brain and generally host to human consciousness.
That's an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?
If so, what about people who never develop an identifiable prefrontal cortex? I guess we could assume that a sufficient cortex is still there, just not identifiable. But what about people who suffer extensive damage to that part of the brain? Can one lose consciousness without, as it were, losing consciousness (i.e., becoming comatose in some way)?
a dedicated module for consciousness would bridge the gap
What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?
Consciousness comes from the soul, and souls are given to us by the gods. That's why AI isn't conscious.
How do you think god comes into the equation? What do you think about split-brain syndrome, in which people demonstrate having multiple consciousnesses? If consciousness is based on a metaphysical property, why can it be altered with chemicals and drugs? What do you think happens during a lobotomy?
I get that evidence-based thinking is generally not compatible with religious postulates, but just throwing up your hands and saying consciousness comes from the gods is an incredibly weak position to hold.
I respect the people who say machines have consciousness, because at least they're consistent. But you're just like me, and won't admit it.
If it were actually AI, sure.
This is an unthinking machine algorithm chewing through mounds of stolen data.
What is thought?
To be fair, so am I.
That is certainly one way to view it. One might say the same about human brains, though.
Are we really going to devil's advocate for the idea that avoiding society and asking a language model for life advice is okay?
No, but thinking about whether it's conscious is an independent thing.
There's no reason to think it's conscious. That's just advertising for [product]. My Product Is Conscious.
Dunning-Kruger effect on full display here, everyone.
Take your pictures.
It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or another. Are you sure that you yourself are not just fancy auto-complete? We're dealing with shit like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus now than we were before.
And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, then the question becomes whether it's even "artificial" to begin with if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.
The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we'd know we had it 😉
What do you mean? I don't follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?
It's an answer to whether one can be sure they're not just a fancy autocomplete.
More directly: we can't be sure we aren't some autocomplete program in a fancy computer, but since we're having an experience, we are conscious programs.
When I say "how can you be sure you're not fancy auto-complete", I'm not talking about being an LLM or even the simulation hypothesis. I'm saying that the way LLMs' neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren't causing any given stimulus to always produce a specific response based on the most heavily weighted pathways in your nervous system? That's how auto-complete works. It's just predicting the most statistically probable responses based on the input after being filtered through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same (there's a toy sketch of this at the end of this comment).
And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes whose inner workings we're only barely starting to understand, just as we're only barely starting to understand our own neurology and consciousness.
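To make the "predicting the most statistically probable response" part concrete, here's a toy sketch. The five-word vocabulary, random weights, and single averaging step are all invented for illustration; a real transformer has billions of learned weights and attention layers, but the overall shape is the same: input, weighted transformation, probability distribution, likeliest continuation.

```python
import numpy as np

# Toy "autocomplete": score every word in a tiny vocabulary and pick the
# statistically most probable continuation. Vocabulary and weights are
# made up for illustration only.
vocab = ["the", "cat", "sat", "mat", "conscious"]

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(len(vocab), 8))         # one vector per word
output_weights = rng.normal(size=(8, len(vocab)))     # hidden state -> score per word

def next_token(prompt_tokens):
    # "Filter the input through the network": here, just average the word vectors.
    hidden = embeddings[[vocab.index(t) for t in prompt_tokens]].mean(axis=0)
    logits = hidden @ output_weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                              # softmax: probability per word
    return vocab[int(np.argmax(probs))], probs        # most probable next word

word, probs = next_token(["the", "cat"])
print("predicted next word:", word)
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```

Swap the text prompt for a stream of sensory data and the question in the comment above stands: is what your nervous system does with its weighted pathways mechanically any different?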