An interesting paper came up in my feeds this weekend: Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. The authors put forth a definition of consciousness, and then criteria to test for it, although they emphasize that these can’t be “hard” criteria, just indicators. None of them individually definitively establishes consciousness. Nor does any one absent indicator rule it out. But cumulatively their presence or absence can make consciousness more likely or unlikely.
Admitting that defining consciousness is fraught with issues, they focus on key features:
- Qualitative Richness: Conscious content is multimodal, involving multiple senses.
- Situatedness: The contents of consciousness are related to the system’s physical circumstances. In other words, we aren’t talking about encyclopedic type knowledge, but knowledge of the immediate surroundings and situation.
- Intentionality: Conscious experience is about something, which unavoidably involves interpretation and categorization of sensory inputs.
- Integration: The information from the multiple senses is integrated into a unified experience.
- Dynamics and stability: Despite things like head and eye movements, the perception of objects is stabilized in the short term. We don’t perceive the world as a shifting, moving mess. Yet we can detect actual dynamics in the environment. The machinery involved in this is prone to generating sensory illusions.
In discussing the biological function of consciousness, the authors focus on the need of an organism to make complex decisions involving many variables, decisions that can’t be adequately handled by reflex or habitual impulses. They don’t equate consciousness with this complex decision making, but with the “multimodal, situational survey of one’s environment and body” that supports it.
This point seems crucial, because the authors at one point assert that the frontal lobes are not critical for consciousness. Of course, many others assert the opposite. A big factor is whether frontal lesions impair consciousness. There seems to be widespread disagreement in the field about this, but at least some of it may hinge on the exact definition of consciousness under consideration.
The authors then identify key indicators:
- Goal-directed behavior and model-based learning: Crucially, the goal must be formulated and envisioned by the system. People like sex because it leads to reproduction, but reproduction is a “goal” of natural selection, not necessarily of the individuals involved, who often take measures to enjoy sex while frustrating the evolutionary “purpose”. On the other hand, formulating a novel strategy to woo a mate would qualify.
- Brain anatomy and physiology: In mammals, conscious experience is associated with thalamo-cortical systems, or in other vertebrates with their functional analogs, such as the nidopallium in birds. But this criterion largely breaks down with simpler vertebrates, not to mention invertebrates or artificial intelligence.
- Psychometrics and metacognitive judgments: The ability of a system to detect and discriminate objects is measurable, as is, if present, the organism’s ability to assess its own knowledge.
- Episodic Memory: Autobiographical memory of events experienced at particular places and times.
- Illusion and multistable perception: Susceptibility to sensory illusions (such as visual illusions) due to intentionality, the building of perceptual models.
- Visuospatial behavior: Having a stable situational survey despite body movements.
I’m not a fan of relying too much on 2, specific anatomy, at least other than in cases of assessing whether someone is still conscious after brain injuries. As I noted in the post on plant consciousness, I think focusing on capabilities keeps us grounded but still open-minded.
I don’t recall the authors making this connection, but it’s worth noting that the same neural machinery is involved in both 1, goal planning, and 4, episodic memory. We don’t retrieve memories from a recording, but imagine, simulate, reproduce past events, which is why memory can be so unreliable, but also why we can fit a lifetime of memories in our brain.
I was initially skeptical of the illusion criterion in 5, but on reflection it makes sense. Experiencing a visual or other sensory illusion means you are accessing a representation, even if not a correct one, so a system showing signs of that experience does indicate intentionality, the aboutness of experience.
The authors spend some space assessing IIT (Integrated Information Theory) in relation to “the problem of panpsychism”. They view panpsychism as cheapening the concept of consciousness to the point where the word loses its usefulness, and see IIT as “underconstrained” in a manner that leads to it. (I saw a comment the other day that IIT gets cited as much in neuroscience papers as other theories like GWT, but at least in my own personal survey, most of the citations of IIT seem to be criticisms.)
Finally, the authors look at modern machine learning neural networks and conclude that they currently show no signs of consciousness. They note that machines may have alternate strategies for accomplishing the same thing as consciousness, which raises the question of how malleable we want the word “consciousness” to be.
There’s a lot here that resonates with the work surveyed by Feinberg and Mallatt, which I’ve reported on before, although these indicators seem a bit less concrete than F&M’s. They might better be viewed as guidelines for the development of more specific experimental criteria.
Of course, if you don’t buy their definition of consciousness, then you may not buy the resulting indicators. But this is always the problem with scientific studies of ambiguous concepts.
So the question is, do you buy their description? Or the resulting indicators?