Someone called my attention to a new paper by Michael Graziano: A conceptual framework for consciousness.
I’ve highlighted Graziano’s approach and theory many times over the years. I think his Attention Schema Theory provides important insights into how top down attention works. But it’s his overall approach that I find the most value in. He’s far from the first person to work on consciousness from a non-mystical perspective. (He cites Dennett, Gazzaniga, and others as precedent.) But I think he manages to bridge the divide more convincingly than many others.
Graziano starts off by establishing two general principles:
- In order to have a belief, and talk about that belief, it’s necessary for that belief to exist as information, as a model in the brain. But, crucially, it is not necessary for that model to be accurate.
- The brain’s models are never accurate. They may be effective in providing adaptive capabilities, but evolution never had a reason to select for accuracy in and of itself. The models often provide crude but useful caricatures of what is being modeled.
Applying these principles leads to the following framework for understanding consciousness:
1. The brain contains some physically measurable process: “process A”.
2. The brain constructs a model to represent process A.
3. The model is not accurate. It’s a simplification, missing a lot of granular detail: an effective but caricatured representation of the reality.
4. When the model is accessed by higher cognitive functions, its inaccuracies and simplifications cause us to think we have some non-physical property, an intangible “experienceness”.
It’s step 4 that leads us to conclude there must be a hard problem of consciousness. When we compare step 4 with step 1, the problem looks hopelessly intractable. But that’s because we’re overlooking steps 2 and 3.
In other words, introspection is unreliable. The job of science isn’t to explain what introspection tells us, at least not uncritically, but to investigate why it tells us those things.
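The four steps might be easier to see in a toy sketch. This is my own construction, not Graziano’s formalism: the “process” is just a list of numbers, the “model” is a lossy summary with a simplified label, and the “report” is generated entirely from the model.

```python
# Toy illustration of the four-step framework: a measurable process,
# a simplified model of it, and an introspective report that inherits
# the model's inaccuracies. (Illustrative names only.)

# Step 1: "process A" -- here, just fine-grained signal values.
process_a = [0.12, 0.87, 0.45, 0.93, 0.31, 0.78]

# Steps 2 and 3: the brain builds a model of process A, but the model
# is a caricature -- it keeps only a coarse summary and attaches a
# simplified, non-physical-sounding label.
def build_model(process):
    return {
        "summary": round(sum(process) / len(process), 1),  # detail is lost
        "label": "vivid experience",  # simplification, not a physical fact
    }

# Step 4: higher cognition accesses only the model, never process A
# itself, so the report describes the caricature, not the process.
def introspective_report(model):
    return f"I am having a {model['label']} of intensity {model['summary']}"

model = build_model(process_a)
print(introspective_report(model))
```

The report contains no trace of the six underlying values. Comparing the report (step 4) directly with process A (step 1) looks intractable, unless you remember that lossy modeling happened in between.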
Graziano notes that this is illusionism, but like me, he finds the term “illusion” misleading. It typically refers to the inaccuracy of the introspective model. But the model and what’s being modeled are both real, even if the model implies things that aren’t true about its subject.
This brings us to Attention Schema Theory (AST), which has two main components.
- Selective attention. This is the messy competitive process that results in some contents being selectively enhanced and broadcast throughout the brain while others are suppressed. It’s modeled by the various GWTs (global workspace theories). GWTs take the content coalitions that momentarily win this competition to be the contents of consciousness, and it’s fair to say that is consciousness at a certain level.
- The attention schema. This is a model, or schema, of attention. It enables:
- Endogenous or “top down” control of attention by executive systems
- Modeling the attention of others
- Claims / beliefs about consciousness
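The two components above can be sketched in toy form. Again, this is my own illustrative construction under loose assumptions, not anything from Graziano’s papers: selective attention is reduced to a winner-take-all competition over signal strengths, and the schema is a crude model that tracks only what won and can issue top-down bias.

```python
# Toy sketch of the two AST components: selective attention as a
# winner-take-all competition, plus an attention schema -- a simplified
# model of that competition, usable for endogenous (top-down) control.

def selective_attention(signals, bias=None):
    """signals: dict of content -> bottom-up strength.
    The strongest signal 'wins' and is broadcast; a top-down bias
    from the schema can tilt the competition."""
    boosted = dict(signals)
    if bias in boosted:
        boosted[bias] += 1.0  # endogenous boost requested by the schema
    return max(boosted, key=boosted.get)

class AttentionSchema:
    """A caricature of attention: it models only WHAT won,
    not the messy competition that produced the winner."""
    def __init__(self):
        self.modeled_focus = None

    def observe(self, winner):
        self.modeled_focus = winner  # a single label, detail discarded

    def control(self, goal):
        # Top-down control: request a shift if focus isn't on the goal.
        return goal if self.modeled_focus != goal else None

signals = {"loud noise": 0.9, "quiet voice": 0.4}
schema = AttentionSchema()
winner = selective_attention(signals)        # bottom-up: "loud noise" wins
schema.observe(winner)
bias = schema.control("quiet voice")         # executive goal
winner = selective_attention(signals, bias)  # top-down: "quiet voice" wins
```

The same `modeled_focus` slot could, in principle, be pointed at another agent’s behavior (modeling the attention of others) or queried to generate claims about “what I’m aware of” — the other two roles in the list above.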
In terms of evolution, Graziano notes that attention is very ancient, with its competition and lateral inhibition mechanisms probably going back to the earliest nervous systems 600 million years ago. More complex overt attention probably arose with the earliest vertebrates.
But the more controlled and covert form of attention enabled by attention schemas only seems prevalent in mammals, birds, and reptiles. It’s also possible it evolved independently in cephalopods, though the evidence isn’t clear.
Graziano sees this theory as explaining reportable consciousness and many of our deepest convictions about it. It also may provide important insights into what it would take to incorporate consciousness in AI systems. He notes that the future of consciousness research will be less about philosophy and more about technology.
But he admits that there are many things the theory doesn’t cover, such as emotions, memory, or how we make decisions. (Although for emotions, see Keith Frankish’s proposed “response schema” as a possible add-on.)
I think this final point is important. When I first learned about the AST, I was enthusiastic that it might be the answer. But I’ve since become convinced there won’t be any one answer. AST is essentially an add-on to the global workspace, which is designed to be a framework theory. I think many additional add-ons will be necessary to give us a full accounting of what we typically mean by “consciousness”.
Which isn’t to say that AST isn’t a crucial piece of the puzzle. And as noted above, I find Graziano’s approach to be the most important takeaway. It’s the kind of approach that I think led David Chalmers to realize that, in addition to the hard problem he named many years ago, there’s also the meta-problem of consciousness: the problem of why we think there’s a hard problem. Graziano beat him to this idea by several years, but he didn’t have Chalmers’ knack for catchy names.
What do you think of the Attention Schema Theory? Or Graziano’s overall approach? Are there reasons to trust introspection that he or I overlook?