The other day, when discussing Mark Solms’ book, I noted that he is working to create an artificial consciousness, but that he emphasizes he isn’t aiming for intelligence, just the conscious part, as though consciousness and intelligence are unrelated. This seems to fit with his affect-centered theory of consciousness, and it matches a lot of people’s intuition.
But I’ve often wondered why this intuition is so prevalent. I suspect it comes from the impressions we have about our own experience. Subjective phenomenal experience seems like something that just happens to us. We don’t appear to have to work for it. On the other hand, doing things that are typically considered intelligent, such as working with mathematical equations, handling complex navigation, or dealing with fraught social situations, requires effort, notably conscious effort, at least until we learn them well enough for them to “be natural”.
From this, it seems easy to reach the conclusion that these are very different things. It doesn’t help that the definitions for both concepts are controversial. I’ve talked about the definitional issues for consciousness many times. Intelligence faces similar issues, although its definitions don’t seem to vary across nearly as wide a range as the consciousness ones. It’s worth noting that some of the definitions in its Wikipedia article overlap with particular definitions of consciousness. But intelligence is usually regarded as something that can exist to highly varying degrees, including far below human level intelligence.
The problem is that the impression of separate phenomena is based on introspection. And as I’ve discussed before, introspection, while adaptive and effective enough for many day-to-day purposes, isn’t a reliable source of information about the architecture of the mind. There is extensive psychological evidence showing that our knowledge of our own mind is limited.
We don’t have access to the vast amount of unconscious and preconscious processing that happens in the brain. For example, the firing patterns in the early visual cortex are topographically mapped from the retina. The retina has a small central area of high visual acuity (resolution), the fovea, but that acuity falls off dramatically toward the periphery of the retina, as does the number of color receptors. And each retina has a blind spot, with no receptors at all, where the optic nerve exits. On top of that, the eyes are constantly making reflexive movements.
But we don’t perceive the world through a constantly shifting acuity tunnel with color only at the center. Our visual system does an extensive amount of work to produce the experience of a stable colorful field of vision. And that’s before we get into detecting things in motion as well as object categorization and discrimination. When we recognize something like a red apple, it seems like something that happens effortlessly to us. In reality, our nervous system has extensive layers of functionality, functionality hidden from introspective awareness. A lot of processing has to take place for us to recognize that apple.
This is also true for the initial quick evaluations the nervous system makes about the results of all that perceptual processing, the evaluations we usually call “affects” or “emotional feelings.” All of this leads to the impression that sentience simply happens to us, rather than being something our brain puts in a lot of work to produce.
Artificial neural networks that recognize images or make evaluations are considered artificial intelligence, which implies that the natural versions are also a form of intelligence. In other words, conscious experience is built on intelligence. Most of it is unconscious intelligence, but intelligence just the same.
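Purely as an illustration, here’s a minimal sketch in Python of the kind of layered processing such networks do. The weights are random and the “apple / pear / plum” labels are hypothetical stand-ins, not a trained or published model; the only point is that the single recognition at the end conceals several intermediate stages of processing, much as our own recognition of a red apple does.

```python
# Minimal sketch of layered processing in a tiny feedforward network.
# Weights are random and the category labels are made up; this is an
# analogy for hidden processing stages, not a working image classifier.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One hidden layer: a weighted sum followed by a ReLU nonlinearity."""
    return np.maximum(0.0, weights @ x + bias)

# Pretend this vector is a heavily preprocessed retinal image patch.
image = rng.random(64)

# Three layers of arbitrary sizes, standing in for the many processing
# stages between the retina and recognition.
w1, b1 = rng.normal(size=(32, 64)), rng.normal(size=32)
w2, b2 = rng.normal(size=(16, 32)), rng.normal(size=16)
w3, b3 = rng.normal(size=(3, 16)), rng.normal(size=3)

h1 = layer(image, w1, b1)          # first hidden stage
h2 = layer(h1, w2, b2)             # second hidden stage
scores = w3 @ h2 + b3              # raw scores for three hypothetical categories

labels = ["apple", "pear", "plum"]  # hypothetical categories
print("recognized:", labels[int(np.argmax(scores))])
```

All that the “user” of this system sees is the final label; the intermediate activations, like our own early visual processing, never show up in the output.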
That doesn’t mean that consciousness and intelligence are equivalent. There are plenty of systems that meet many definitions of intelligence but show no signs of meeting typical definitions of consciousness. Intelligent technological systems often involve encyclopedic information rather than the world and self models more likely to trigger our intuition of a conscious being. And plants and simple organisms like slime molds often exhibit intelligent behavior that few regard as conscious.
But it does imply that consciousness is a type of intelligence, one built on lower levels of intelligence and enabling more complex forms of it. As most commonly understood, it seems to be a form of intelligence involving the use of predictive models of the environment, the self, and the relationship between the two.
This is a conclusion many seem loath to accept. Insisting that consciousness is something separate and apart from functional intelligence inevitably makes it more mysterious, more difficult to explain. It’s notable that in his 1995 paper in which he coined “the hard problem of consciousness”, David Chalmers explicitly noted the functional areas of intelligent processing, labeling them “the easy problems”, but then declared that they weren’t what he was talking about, in essence excluding them as possible answers to the problem he was identifying.
The result is a phenomenon many seem to think science can’t solve. I think this is one of the factors that leads to outlooks like panpsychism, biopsychism, or property dualism, as well as to appeals to theories involving quantum physics, electrical fields, or other highly speculative notions. But if consciousness is just a type of intelligence, then studying its mechanisms is possible, and everything we need should be in the mainstream cognitive sciences, including cognitive neuroscience.
What do you think? Are there reasons I’m overlooking to regard consciousness and intelligence as separate phenomena? If so, what distinguishes them from each other? What properties does consciousness possess that intelligence lacks, or vice-versa?