There is no “consciousness.” The false belief in “consciousness” is a product of Kantianism, which was itself heavily inspired by Newtonian physics. Some of the categories have been renamed over the years, but the fundamentals have not changed, and they have become deeply integrated into the Western psyche, shaping how we think about the world, and probably into many other cultures as well.
Modern-day philosophers have simply renamed Kant’s “phenomena” to “consciousness” or “subjective experience,” and his “noumena” to “matter.” Despite the renaming, the categories are still treated identically: “consciousness” is everything we perceive, and “matter” is something invisible, the true physical thing-in-itself beyond our perception that “causes” our perception.
Since all they have done is rename Kant’s categories, they do not actually solve Kant’s mind-body problem; they have merely rediscovered it and renamed it the “hard problem of consciousness.” It is exactly the same problem: there seems to be a “gap” between this “consciousness” and “matter.”
Most modern-day philosophers seem to split into two camps. The first are the “promissory materialists,” who admit there is a real problem here but shrug their shoulders and say science will solve it one day, so we don’t have to worry about it, while giving no account of what a solution could even possibly look like. The second are the mystics, who insist this “consciousness” can’t be reconciled with “matter” because it must be some fundamental force of reality. They talk about things like “consciousness fields” or “cosmic consciousness” or whatever.
However, both are wrong. Newtonian physics is not an accurate representation of reality; we already know this, and so the Kantian mindset it inspired should also be abandoned. When you abandon the Kantian mindset, there is no longer a need for the “phenomena” and “noumena” division, or, in modern lingo, for the “consciousness” and “matter” division. There is just reality.
Imagine you are looking at a candle. The apparent size of the candle depends upon how far away you are from it: the further away you are, the smaller it appears. And since light doesn’t travel at an infinite speed, the further away you are, the further in the past you are seeing the candle. The candle may also appear a bit different under different lighting conditions.
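To put rough numbers on those two effects (this is just standard geometry and the finite speed of light, nothing specific to the argument): a candle of height $h$ viewed from distance $d$ subtends an apparent angle $\theta$ and is seen with a delay $\Delta t$:

$$\theta \approx 2\arctan\!\left(\frac{h}{2d}\right), \qquad \Delta t = \frac{d}{c}$$

A 10 cm candle at 10 m subtends about $0.57^\circ$, and you see it as it was roughly 33 ns in the past.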
A Kantian would say there is a true candle, the “candle-in-itself,” or, in modern lingo, the material candle, that “causes” all these different perceptions. The perceptions themselves are then said to be brain-generated, not part of the candle, not even real at all, but something purely immaterial, part of the phenomena, or, in modern lingo, part of “consciousness.”
If every possible perception of the candle is part of “consciousness,” then the candle-in-itself, the actual material object, must be independent of perception, i.e. it’s invisible. No observation can reveal it because all observations are part of “consciousness.” This is the Kantian worldview: everything we perceive is part of a sort of illusion created within the mind as opposed to the “true” world that is entirely imperceptible. The mind-body problem, or in modern lingo the “hard problem,” then arises as to how an entirely imperceptible (non-phenomenal/non-conscious) world can give rise to what we perceive in a particular configuration.
However, the Kantian worldview is a delusion. In Newtonian physics, if I launch a cannonball from point A to point B, simply observing it at points A and B is enough to fill in the gaps and say where the object was at every point in between, independently of anything else. This Newtonian worldview allows us to conceive of the cannonball as a thing-in-itself: an object with its own inherent properties that can be meaningfully conceived of as existing even in complete isolation, and that always has an independent history of how it ends up where it does.
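A minimal worked version of that “filling in the gaps,” assuming simple projectile motion under gravity alone:

$$x(t) = x_0 + v_{x0}\,t, \qquad y(t) = y_0 + v_{y0}\,t - \tfrac{1}{2}g\,t^2$$

Observing the cannonball’s position at just two times fixes the four unknowns $(x_0, y_0, v_{x0}, v_{y0})$, and with them its position at every moment in between. The cannonball has one definite, observer-independent history.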
As Schrödinger pointed out, this mentality does not carry over to modern physics. If you fire a photon from point A to point B and observe it at those two points, you cannot always meaningfully fill in the gaps of what the photon was doing in between without running into contradictions. As Schrödinger concluded, one has to abandon the notion that particles are independent, autonomous entities with their own existence that can be meaningfully conceived of in complete isolation. They only exist from moment to moment in the context of whatever they are interacting with, not in themselves.
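The textbook two-path illustration (a double slit or interferometer; this is the standard example, not Schrödinger’s own wording) shows why the gaps can’t be filled in. If the photon can reach B along two paths with amplitudes $\psi_1$ and $\psi_2$, the detection probability is

$$P = \left|\psi_1 + \psi_2\right|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^{*}\psi_2\right)$$

The interference term $2\,\mathrm{Re}(\psi_1^{*}\psi_2)$ is incompatible with the story that the photon definitely took one path or definitely took the other, since that story would require the two probabilities to simply add.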
If this is true for particles, it must also be true of everything made up of particles: there is no candle-in-itself either. It’s a high-level abstraction that doesn’t really exist. What we call the “candle” is not an independent, unobservable entity separate from all our different perceptions of it; it is precisely the totality of all the different ways it is and can be perceived, all the different ways it interacts with other objects from those objects’ perspectives.
Kant justified the noumena by arguing that it makes no sense to talk about objects “appearing” (the word “phenomena” means “the appearance of”) without there being something that does the appearing (the noumena). He is correct on this, but the lesson runs the other way. Rather than justifying the noumena, it shows that if we reject the noumena, we must also reject the phenomena (“consciousness”): it makes no sense to treat the different appearances of the candle as some separate “consciousness” realm, or some sort of illusion or whatever, independent of the real material world as it really is.
No, what we perceive directly is material reality as it actually is. Reality is what you are immersed in every day, what surrounds you, what you are experiencing in this very moment. It is not some illusion from which there is a “true” invisible reality beyond it. When you look at the candle, you are seeing the candle as it really is from your own perspective. That is the real candle in the real world. The Kantian distinction between noumena-phenomena (or between “matter” and “consciousness”) should be abandoned. It is just not compatible with the modern physical sciences.
But I know no one will even know what I’m talking about, so writing this is rather pointless. Kantianism is too deeply ingrained in the Western psyche; people cannot even comprehend that it is possible to criticize it, because it underlies how they think about everything. This nonsense debate about “consciousness” will continue forever, because it is an intrinsic problem that arises out of the dualistic structure of Kantian thinking: if you begin from the get-go with the assumption that there is a division between mind and matter, you cannot close that division without contradicting yourself. It seems unrealistic at this point to get people to abandon this dualistic way of thinking, so in ten thousand years people will probably still be arguing over “consciousness.”
The author engages in magical thinking. Our consciousness is a product of the physical processes occurring within our brains; specifically, it’s encoded in the patterns of neuron firings. We know this to be the case because we can observe a direct impact on the conscious process when the brain is stimulated. For example, a few milligrams of a psychedelic drug can profoundly change conscious experience.
Given that the conscious process is a result of an underlying physical process, it follows that these patterns could be expressed on a different substrate. There is absolutely no basis for the notion that an AI could not be conscious. In fact, there’s every reason to believe that it would be if its underlying patterns mirrored those of a biological brain.
The author appears to be focused on the transformer architecture of AI, and in that regard I see nothing wrong with their argument. The way I see it, the important thing here is not that it’s absolutely impossible for an LLM to have anything called consciousness, but that proving it does is not as simple as plugging in some tests we associate on a surface level with “intelligence,” saying “it passed,” and then arguing that that means it’s conscious.
I think it’s critically important to remember with LLMs that they are essentially designed to be good actors, i.e. everything revolves around the idea that they can convincingly carry on a human-like conversation. That is not to be confused with having actual human characteristics. Even if a GPU (and that’s what it would be, if we’re supposing a physical origin, because that’s what LLMs run on) somehow had some form of consciousness tied to when an AI model is running on it:
- It would have no physical characteristics like a human, and so no way to legitimately relate to our experiences.
- It would still have no long-term evolving memory, just a static model that gets loaded sometimes and run for inference (see the sketch after this list).
- If it was capable of experiencing anything physical akin to suffering, it would likely be in the wear and tear of the GPU, but even this seems like a stretch, because a GPU does not have a sensory brain-body connection like a human does, or many animals do. So there is no reason to think it would understand or share in human kinds of suffering, because it has no basis for doing so.
- With all this in mind, it would likely need its own developed language just to begin to try to communicate properly about what its experiences are. A language built by and for humans doesn’t seem sufficient. And that’s not happening if it can’t remember anything from session to session.
- Even if it could develop its own language, humans trying to translate it would probably be something like trying to observe and understand the behavior of ants, and anything it said confidently in plain English, like “I am a conscious AI,” would be all but useless as information, since it’s trained on such material and being able to regurgitate it is part of its purpose as an actor.
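To make the “no long-term evolving memory” point concrete, here is a minimal, self-contained sketch (a toy stand-in, not any real LLM API): the weights are frozen, inference is a pure function of them plus the prompt, and the only “memory” is the transcript the caller chooses to re-send each turn.

```python
# Toy stand-in for an LLM serving loop; not a real inference API.
FROZEN_WEIGHTS = {"greeting": "Hello!"}  # stands in for model parameters; never updated by chatting


def generate(weights: dict, prompt: str) -> str:
    """Toy 'inference': a pure function of frozen weights and the prompt."""
    if "Ada" in prompt:
        return "Nice to meet you, Ada."
    return weights["greeting"] + " I only know what is in this prompt."


def chat_turn(transcript: list[str], user_msg: str) -> str:
    transcript.append(f"User: {user_msg}")
    # The full transcript is re-sent every turn; nothing persists inside the model.
    reply = generate(FROZEN_WEIGHTS, "\n".join(transcript))
    transcript.append(f"Assistant: {reply}")
    return reply


session_a: list[str] = []
print(chat_turn(session_a, "My name is Ada."))    # -> "Nice to meet you, Ada."

session_b: list[str] = []  # a brand-new session: nothing carried over
print(chat_turn(session_b, "What is my name?"))   # the "model" has no idea
```

Real serving stacks differ in detail, but the shape is the same: nothing from one session persists into the next unless the caller explicitly feeds it back in.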
Now, if we were talking about an android-style AI with an artificial brain and mechanical nerve endings, designed to mimic many aspects of human biology as well as the brain, I’d be much more in the camp of “yeah, consciousness is not only possible, but also likely, the closer we get to a strict imitation in every possible facet.” But LLMs have virtually nothing in common with humans. The whole neural network thing is, I guess, imitative of our current understanding of human neurons, but only on a vaguely mechanical level. It’s not as though they are a recreation of the biology, built on a full understanding of the brain. Computers just aren’t built the same, fundamentally, so even an imitation attempted with full information would not be the same.
I think we very much agree here. In the strict context of LLMs, I don’t think they’re conscious either. At best it’s like a Boltzmann brain that briefly springs into existence. I think consciousness requires a sort of recursive quality where the system models itself as part of its own world model, creating a sort of resonance. I’m personally very partial to the argument that Hofstadter makes in I Am a Strange Loop regarding the nature of the phenomenon.
That said, we can already see how LLMs are being combined with things like symbolic logic in neurosymbolic systems, or with reinforcement learning in the case of DeepSeek. It’s highly likely that LLMs will end up being just one piece of the puzzle in future AI systems. It’s an algorithm that does a particular thing well, but it’s not sufficient on its own. We’re also seeing these things being applied to robotics, and I expect that’s where we may see genuinely conscious systems emerge. Robots create a world model of their environment, and they have to model themselves as an actor within that environment. The internal reasoning model may end up producing a form of conscious experience as a result.
I do think that from an ethics perspective, we should err on the side of caution with these things. If we can’t prove that something is conscious one way or the other, but we have a basis to suspect that it may be, then we should probably treat it as such. Sadly, given how we treat other living beings on this planet, I have very little hope that the way we treat AIs will resemble anything remotely ethical.
> specifically it’s encoded in the patterns of neuron firings.

Look, if you could prove this, you would solve a lot of problems in neuroscience and philosophy of mind. Unfortunately this doesn’t seem to be the case, or at least we don’t know enough about what’s going on in our brains to unequivocally state what you’re stating.
The fact that our consciousness can be mapped onto physical states doesn’t mean it can be reduced to it. You can map the movement of the sun with a sundial and the shadow it generates, but there’s no giant ball of ongoing nuclear fusion in any shadow, even though one requires the other.
I think you’re slightly strawmanning Yogthos.
That’s precisely what it means, actually. Consciousness is a direct byproduct of physical activity in the brain; it doesn’t come from some magic dimension. Meanwhile, the analogy you’ve made rests on a huge assumption: that high-level patterns are inherently dependent on the underlying complexity of the substrate. There is no evidence to support this notion. For example, while our computers don’t work the same way our brains do, it is a fact that silicon chips are physical things made of complex materials, subject to quantum effects, and so on. Yet none of that underlying complexity is relevant to the software running on those chips. How do we know this? Because we can make a virtual machine that implements the patterns expressed on the chip without modelling all of the chip’s physical workings. Similarly, there is zero basis to believe that the high-level patterns within the brain that we perceive as consciousness are inherently tied to the physical substrate of neurons and their internal complexity.
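A toy illustration of that virtual-machine argument (my own sketch, not a claim about any particular chip): the same high-level pattern, here evaluating (2 + 3) * 4, realized on two different “substrates,” once directly in the host language and once on a tiny stack machine. Nothing about the underlying hardware appears at this level of description.

```python
from __future__ import annotations


def direct() -> int:
    # Substrate 1: the host language's own arithmetic.
    return (2 + 3) * 4


# Substrate 2: the identical computation as a program for a toy stack-based VM.
PROGRAM = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]


def run_vm(program: list[tuple[str, int | None]]) -> int:
    stack: list[int] = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)  # arg is always an int for "push"
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()


# Same result either way: the pattern is indifferent to how it is realized.
assert direct() == run_vm(PROGRAM) == 20
```

The sketch only illustrates substrate independence of a pattern; whether consciousness is such a pattern is of course the point in dispute.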
Furthermore, from an ethical and moral point of view, we would absolutely have to give the AI that claims to be conscious the benefit of the doubt, unless we could prove that it was not conscious.