Michael Graziano has an article at The Atlantic explaining why consciousness is not mysterious. It’s a fairly short read (about 3 minutes). I recommend anyone interested in this stuff read it in full. (I tweeted a link to it last night, but then decided it warranted discussion here.)
The TL;DR is that the hard problem of consciousness is like the 17th-century hard problem of white light. No color, particularly white, exists except in our brains. White light is a mishmash of light of every visible wavelength, which our brains simply translate into what we perceive as white. Our perception of consciousness is much the same:
This is why we can’t explain how the brain produces consciousness. It’s like explaining how white light gets purified of all colors. The answer is, it doesn’t. Let me be as clear as possible: Consciousness doesn’t happen. It’s a mistaken construct. The computer concludes that it has qualia because that serves as a useful, if simplified, self-model. What we can do as scientists is to explain how the brain constructs information, how it models the world in quirky ways, how it models itself, and how it uses those models to good advantage.
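To make the white-light analogy concrete, here's a minimal sketch in Python. It uses RGB display primaries as a stand-in for the physical mixture of wavelengths (an illustrative simplification, not actual spectra): nothing in the mix is itself white, yet the full combination is what our visual system reports as "white."

```python
# Toy illustration of the white-light point: a display never emits a
# "white" wavelength. It mixes three colored primaries at full intensity,
# and our visual system reports the combination as white.
red   = (255, 0,   0)
green = (0,   255, 0)
blue  = (0,   0,   255)

# Additive mixing: sum each channel, clamped to the display maximum.
white = tuple(min(r + g + b, 255) for r, g, b in zip(red, green, blue))

print(white)  # (255, 255, 255) -- the construct we perceive as "white"
```

Asking how the mixture "gets purified" into white is a confused question; the whiteness was only ever in the model doing the perceiving.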
I pretty much agree with everything Graziano says in this article, although I've learned that dismissing the hard problem often leads to pointless debates about eliminative reductionism. Instead, I admit that the hard problem is real for those who are troubled by it. But like the hard problem of white light, it will never have a solution.
Graziano mentions that there is a strong sentiment that consciousness must be a thing: an energy field, an exotic state of matter, something other than information. This sentiment arises from the same place as subjective experience. It's a model our brains construct, and it's that model that gives us that strong feeling. (Of course, the strong feeling is itself a model.) When some philosophers and scientists say that “consciousness is an illusion”, what they usually mean is that this idea of consciousness as a separate thing is illusory, not internal experience itself.
Why is this a valid conclusion? Well, look at the neuroscience and you won’t find any observations that require energy fields or new states of matter. What you’ll see are neurons signaling to each other across electrical and chemical synapses, supported by a superstructure of glial cells. You’ll see nerve impulses coming in from the peripheral nervous system, a lot of processing in the neural networks of the brain, and output from this system in the form of nerve impulses going to the motor neurons connected to the muscles. You’ll see a profoundly complex information-processing network: a computational system.
You won’t find any evidence of something else, of an additional energy or separate state of matter, of anything like a ghost in the machine. Could something like that exist and just not yet be detected? Sure. But that can be said of any concept we’d like to be true. To rationally consider it plausible, we need some objective data that requires, or at least makes probable, its existence. And there is none. (At least none that passes scientific scrutiny.)
There’s only the feeling from our internal model. We already know that model can be wrong about a lot of other things (like white light). The idea that it can be wrong about its own substance and makeup isn’t a particularly large logical step.
Graziano finishes with a mention of machine consciousness. I think machine consciousness is definitely possible, and I’m sure someone will eventually build one in a laboratory, but I wonder how useful it would be beyond serving as a proof of concept. I see no particular requirement that my self-driving car, or just about any autonomous system, have anything like the idiosyncrasies of human consciousness. It might be a benefit for human-interface systems, although even there I tend to think it would add pointless complexity.
Unless I’m missing something? Am I, or Graziano, missing objective evidence of consciousness being more than information processing? Are there reasons I’m overlooking to consider our intuitions about consciousness more reliable than our intuitions about colors or other things? Would there be benefits to conscious machines that I’m not seeing?