I recently completed Anil Seth’s new book, Being You: A New Science of Consciousness. Seth starts out discussing David Chalmers’ hard problem of consciousness, as well as views like physicalism, idealism, panpsychism, and functionalism. Seth is a physicalist, but is suspicious of functionalism.
Seth makes a distinction between the hard problem, which he characterizes as being about why experience exists in the world at all, and something he calls the “real problem”, which he describes as explaining why a conscious experience is the way it is, why it has the phenomenological properties it has, in terms of physical mechanisms in the brain and body. Seth asserts that the real problem is distinct both from the hard problem and from what Chalmers calls the “easy” problems, such as the ability to discriminate, categorize, and react to sensory stimuli, reportability, attention, and so on.
When considering how consciousness might be measured, Seth notes that it could be like temperature or it could be like life. Temperature is a phenomenon that emerges from particle physics, one that can be described with a single equation and measured with a single value. Many theories of consciousness, such as IIT (Integrated Information Theory), seem to envision consciousness as being like temperature. On the other hand, biological life is a complex phenomenon, not subject to being meaningfully described by a single equation or measurement. Seth’s money is on consciousness being more like life than temperature.
Seth’s own view falls along the lines of predictive coding theories of consciousness. These theories see the brain as a Bayesian processing system, one that is constantly making predictions, receiving error correction from the world, and adjusting. In this view, perception happens from the inside out rather than outside in. The system is constantly making predictions about what is there, receiving sensory information, and adjusting.
It’s important to understand that these predictions are generally related to what the system is doing or planning to do. So the predictions, the inferences, should be viewed as active inferences rather than passive ones. This view has a lot of resonance with Karl Friston’s free energy principle, which Seth explores a bit in the book.
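The prediction–error–adjustment loop Seth describes can be sketched in a few lines of code. To be clear, this is my own toy illustration, not anything from the book: the "brain" here maintains a single estimate of a hidden quantity in the world, receives noisy sensory samples, and nudges its prediction by a fraction of the prediction error. The learning rate stands in, very crudely, for the precision weighting a fuller Bayesian treatment would use; the function name and parameters are my inventions.

```python
import random

def predictive_coding(true_value, n_steps=200, learning_rate=0.1,
                      noise=0.5, seed=42):
    """Toy predictive-coding loop: track a hidden quantity from noisy input."""
    rng = random.Random(seed)
    prediction = 0.0  # the system's initial prior about the world
    for _ in range(n_steps):
        # Noisy sensory evidence arriving from the world
        sensation = true_value + rng.gauss(0, noise)
        # The prediction error: mismatch between expectation and input
        error = sensation - prediction
        # Adjust the internal model a small step toward the evidence
        prediction += learning_rate * error
    return prediction

# The estimate settles close to the hidden value despite the noise
print(predictive_coding(true_value=3.0))
```

Even this stripped-down version shows the key inversion: the system never passively receives the world; it starts from a guess and lets errors do the updating.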
Another important aspect of this view is that it doesn’t just pertain to predictions about the outside world, but also to predictions about the self. In other words, the self is just another perception, a prediction, or a set of predictions, ones involving the state of the body, the perspective of the system, and the perception of volition (free will). We primarily perceive ourselves in order to control ourselves. This extends even to emotions, which Seth sees as control-oriented perceptions (predictions) that regulate the body’s essential variables.
All of this comprises what Seth calls the “beast machine” theory of consciousness, which gets at the evolutionary purpose of brains: making movement decisions. (He described this theory in a TED talk a few years ago.)
Seth finishes the book with discussions of free will and artificial intelligence. On free will, he doesn’t accept any “spooky” kind of libertarian free will, but much of his discussion focuses on our perception of free will. In the discussion of AI, Seth’s skepticism about functionalism resurfaces. He admits he has no evidence for it, but seems to feel that something about biology, perhaps going down to the cellular or molecular levels, will prevent AI from being conscious.
This is a good book. Seth is an excellent and engaging writer and keeps his discussions accessible for the lay reader. And there’s a lot more in it than my brief summary here discusses. I fully recommend it for anyone looking for an introduction to predictive coding theories.
I do have some issues with it, however. I find Seth’s skepticism of functionalism pretty puzzling, since most of what he describes in the book seems thoroughly functionalist in nature.
I also have trouble seeing the distinction he makes between the hard problem and his real problem. The real problem seems like details of the hard problem to me. Given what I’ve read from Chalmers, I suspect he’d see Seth’s characterization of the hard problem as too abstract. Much of what Chalmers discusses seems focused on the same issues as Seth, the relation between the phenomenal and the physical.
Seth also seems loath to admit that any study of the phenomenological has to go through behavior, such as a subject reporting their experience, or through the overall functionality. In that sense, the distinction he makes between studying the phenomenology and studying the functionality seems forced.
I do like Seth’s discussion of the real problem though. Getting into the details can, I think, serve as a bridging mechanism between Chalmers’ hard and easy problems. But I’m a functionalist who sees any sharp categorical distinction between the hard and easy problems as artificial anyway.
Interestingly, Seth has a brief discussion about the likelihood that early modern thinkers, such as Rene Descartes, had to espouse certain ideas they may not have believed in, conceivably including substance dualism. Such ideas may have offered some immunization from persecution by the church, and given them cover to explore other intellectual ideas they cared about. He notes that other thinkers, such as Julien Offray de La Mettrie, who didn’t play this game, got into trouble. I thought this was an interesting observation coming from someone who leads a center on consciousness research, a role that almost certainly requires keeping in mind what might offend funding sources.
All that said, I think predictive coding theories have a lot going for them. More broadly than just consciousness, they get at what brains are actually for. And they seem to work at all levels of brain evolution, from explaining why it was adaptive for worm-like creatures to develop the first very limited capabilities beyond stimulus-response mechanisms, to a fish figuring out whether that thing in the distance is food or a predator, to me figuring out what that strange-looking snack is at a party.
As I’ve noted before, I don’t see theories like predictive coding as alternatives to ones like global workspace or higher order thought theories, but all of them as potential supplements to each other. There’s a tendency among theorists (which I don’t detect from Seth) to regard their theory as the one truth. But like biological life, I think it’s likely the truth will involve a large collection of theories.
What do you think of predictive coding theories? On the right track? Or hopelessly going in the wrong direction?