Graziano’s non-mystical approach to consciousness

Someone called my attention to a new paper by Michael Graziano: “A conceptual framework for consciousness.”

I’ve highlighted Graziano’s approach and theory many times over the years. I think his Attention Schema Theory provides important insights into how top-down attention works. But it’s his overall approach that I find the most value in. He’s far from the first person to work on consciousness from a non-mystical perspective. (He cites Dennett, Gazzaniga, and others as precedent.) But I think he manages to bridge the divide more convincingly than many others.

Graziano starts off by establishing two general principles:

  1. In order to have a belief, and talk about that belief, it’s necessary for that belief to exist as information, as a model in the brain. But, crucially, it is not necessary for that model to be accurate.
  2. The brain’s models are never accurate. They may be effective in providing adaptive capabilities, but evolution never had a reason to select for accuracy in and of itself. The models often provide crude but useful caricatures of what is being modeled.

Applying these principles leads to the following framework for understanding consciousness:

  1. The brain contains some physically measurable process: “process A”.
  2. The brain constructs a model to represent process A.
  3. The model is not accurate. It’s a simplification, missing a lot of granular details. It’s an effective but caricaturized representation of the reality.
  4. When the model is accessed by higher cognitive functions, as a result of its inaccuracies and simplifications, it causes us to think we have some non-physical property, an intangible “experienceness”.

It’s point 4 that leads us to conclude there must be a hard problem of consciousness. When we compare 4 with 1, the problem looks hopelessly intractable. But that’s because we’re overlooking 2 and 3.

In other words, introspection is unreliable. It’s not the job of science to explain what introspection tells us, at least not uncritically, but to investigate why it tells us those things.
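
To make the framework concrete, here’s a toy sketch of how I picture the four steps. To be clear, this is entirely my own illustration; the function names and numbers are mine, not Graziano’s, and a real brain does nothing this tidy.

```python
import random

def process_a():
    """Step 1: a physically measurable process, faked here as 1,000
    fluctuating signal strengths competing for enhancement."""
    return [random.random() for _ in range(1000)]

def build_schema(state):
    """Steps 2 and 3: the brain's model of process A. It keeps a crude
    summary (what's winning and roughly how strongly) and discards all
    the granular mechanistic detail."""
    winner = max(range(len(state)), key=state.__getitem__)
    grip = "strong" if state[winner] > 0.9 else "loose"
    return {"focus": winner, "grip": grip}  # no neurons, no competition

def higher_cognition(schema):
    """Step 4: cognition reads only the schema. Because the schema omits
    the physical detail, the report describes a seemingly non-physical
    'experienceness' rather than a mechanism."""
    return (f"I'm vividly aware of item {schema['focus']} "
            f"with a {schema['grip']} grip, and no mechanism is apparent.")

print(higher_cognition(build_schema(process_a())))
```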

Graziano notes that this is illusionism, but like me, he finds the term “illusion” misleading. It typically refers to the inaccuracy of the introspective model. But the model and what’s being modeled are both real, even if the model implies things that aren’t true about its subject.

Which brings us to the Attention Schema Theory (AST) itself, which has the following components.

  1. Selective attention. This is the messy competitive process that results in some contents being selectively enhanced and broadcast throughout the brain while others are suppressed. It’s modeled by the various GWTs (global workspace theories). GWTs take the content coalitions which momentarily win this competition to be the contents of consciousness, and it’s fair to say this is consciousness at a certain level.
  2. The attention schema. This is a model, or schema, of attention. It enables:
    • Endogenous or “top down” control of attention by executive systems
    • Modeling the attention of others
    • Claims / beliefs about consciousness
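
On the first of those bullets, a control loop is a handy way to see why top-down control plausibly requires a model. Here’s a minimal sketch, again entirely my own toy illustration with made-up numbers: the executive never sees the attention process itself, only the schema’s simplified estimate of it, yet that’s enough for closed-loop control.

```python
import random

attention = 0.0   # "process A": where attention actually points
TARGET = 5.0      # where the executive wants it to point

def schema_estimate(actual):
    """The attention schema: a simplified, noisy model of process A."""
    return round(actual + random.uniform(-0.2, 0.2), 1)

for _ in range(20):
    estimate = schema_estimate(attention)  # read the model, not reality
    error = TARGET - estimate              # compare the model with the goal
    attention += 0.3 * error               # nudge the real process

print(f"Attention settled near {attention:.2f} using only the schema.")
```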

In terms of evolution, Graziano notes that attention is very ancient, with its competition and lateral inhibition mechanisms probably going back to the earliest nervous systems 600 million years ago. More complex overt attention probably arose with the earliest vertebrates.

But the more controlled and covert form of attention enabled by attention schemas only seems prevalent in mammals, birds, and reptiles. It’s also possible it evolved independently in cephalopods, but the evidence isn’t clear.

Graziano sees this theory as explaining reportable consciousness and many of our deepest convictions about it. It also may provide important insights into what it would take to incorporate consciousness in AI systems. He notes that the future of consciousness research will be less about philosophy and more about technology.

But he admits that there are many things the theory doesn’t cover, such as emotions, memory, or how we make decisions. (Although for emotions, see Keith Frankish’s proposed “response schema” as a possible add-on.)

I think this final point is important. When I first learned about the AST, I was enthusiastic that it might be the answer. But I’ve since become convinced there won’t be any one answer. AST is essentially an add-on to the global workspace, which is designed to be a framework theory. I think many additional add-ons will be necessary to give us a full accounting of what we typically mean by “consciousness”.

Which isn’t to say that AST isn’t a crucial piece of the puzzle. And as noted above, I find Graziano’s approach to be the most important takeaway. It’s the kind of approach that I think led David Chalmers to realize that in addition to the hard problem he coined many years ago, there’s also the meta-problem of consciousness, the problem of why we think there’s a hard problem. Graziano beat him to this idea by several years, but he didn’t have Chalmers’ knack for catchy names.

What do you think of the Attention Schema Theory? Or Graziano’s overall approach? Are there reasons to trust introspection that he or I overlook?

55 thoughts on “Graziano’s non-mystical approach to consciousness”

  1. I very much disagree with this approach.

    Beliefs are social, particularly if being able to talk about them is part of what we mean by “belief”. And accuracy is social. We judge accuracy by comparing with the views of others.

    Consciousness is between the individual and his environment. It is not social. Consciousness is prior to any possibility of social engagement. Connecting belief and accuracy or inaccuracy to consciousness seems like a mistake.


    1. You might be operating with a narrower conception of “belief” than the sense in which Graziano or I are using it, which just means a conception held by the mind about something, whether something in the environment or about ourselves.

      In that sense, you made a statement about consciousness being between the individual and their environment. Isn’t that a belief about consciousness? Where is it instantiated? Where did it come from? And how do we ascertain its accuracy?


      1. I will readily admit that I’m uncertain as to what Graziano means by “belief”. Lots of people talk about beliefs, but it is never clear what they mean. From my perspective, beliefs are rather dubious entities.

        I was reacting more to the comment about models and about inaccurate information. I doubt that the brain has models, although it might seem to us that it has.

        As I see it, perception requires very accurate information. Our 3D vision depends on subtle differences between information from the two eyes. And it would not work with inaccurate information. And perceptual experience is a significant part of what people see as “the hard problem”.


        1. On the word “belief”, I have to admit that I boiled down what Graziano said to that word. And my use of it wasn’t made with any of the issues you seem to load it with. Here’s the relevant quote from the paper.

          Principle 1.

          Information that comes out of a brain must have been in that brain.

          To elaborate: Nobody can think, believe, or insist on any proposition, unless that proposition is represented by information in the brain. Moreover, that information must be in the right form and place to affect the brain systems responsible for thinking, believing, and claiming. The principle is, in a sense, a computational conservation of information.

          On models, how do you know your way to your kitchen?

          I think there’s extensive evidence that our vision (and other senses) isn’t nearly as accurate and comprehensive as you think it is. Evolution makes use of a lot of shortcut heuristics, which is why there are so many visual illusions out there.


          1. I am inclined to disagree with “Principle 1”.

            On knowing the way to the kitchen — it seems to me that has more to do with mapping than with modeling. And maps are made on the basis of pragmatics (what works) rather than on the basis of accuracy.

            On vision: I have heard that self-driving automobiles slow down when there is a snowman on the side of the road. They slow down because of the possibility that the snowman might run out into the road. Our vision is a lot better than that.


          2. It seems like a map is a type of model. So we could say that there’s a map of attention instead of a model of it, or of anything else out in the world. Some neuroscientists use the term “image map”.

            I think knowing that a snowman won’t jump out is more about a world model (map) than vision per se. Admittedly, the distinction isn’t a sharp one in the brain, but identification of a snowman and what it means seems to happen far from the visual cortex.


  2. [begin-rant]
    I think Graziano’s approach is correct, but I have some issues with the AST and with his paper.

    First, I think he needs to establish exactly what he means by “information”. As in, he should be able to explain what he means without using the word “information”. What does it mean that a belief exists as information in the brain?

    Second, I think he falls prey to the same desire to say something striking/astounding/shocking as do Frankish (“It’s an illusion”), Seth (“it’s hallucination”), and Hoffman (“it has nothing to do with reality”) when he says “The brain’s models are never accurate.” Of course they are sometimes accurate. If I see an apple, and there really is an apple, the model is accurate. What he means to say, and notice that in other parts of the paper he actually does say, is that the model is never fully accurate, i.e. completely accurate. The model does not depict every detail of the object. Fine. And some models really are inaccurate (color is not a feature painted on the surface of things). But the statement without the modifier is just jarring (and inaccurate).

    *
    [end-rant]
    [more in a separate response]


    1. Yeah, there’s a difference between accuracy, precision, and comprehensiveness. A model can be accurate without perfect precision or complete comprehensiveness. Which is a good thing, because (quantum no-cloning theorem, anyone?) getting all three at once seems to be physically impossible.


  3. FWIW, I like the framework because it fits (mostly) my own understanding.
    1. the measurable processes A are the outputs of unitrackers, pattern recognition units.
    2. these outputs can be represented in a semantic pointer architecture (a convolutional neural network) whose activation pattern “points” to specific unitrackers.
    3. The inaccuracy of the model refers to the inaccuracy of the specific unitrackers, not to their representations in the semantic pointer architecture (a difference from the AST).
    4. the semantic pointer architecture represents those unitrackers which have achieved Dennett’s “fame in the brain”, and these pointers are accessible to other systems such as the language system and the memory system. Unitrackers in the PFC respond to and have some control over which unitrackers get represented in the SPA (semantic pointer architecture).

    So in this understanding the Attention Schema is essentially the semantic pointer architecture. This would change some of the suggestions in the paper, but in an explicable way.
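
    A toy sketch of what I mean, with all the names, numbers, and the threshold invented purely for illustration:

```python
# Unitrackers emit scalar outputs; a "semantic pointer" layer keeps only
# compressed codes for whichever unitrackers won fame-in-the-brain.

unitracker_outputs = {   # the measurable processes A: recognizer activations
    "apple": 0.92, "red": 0.88, "kitchen": 0.35, "hunger": 0.75,
}

def semantic_pointers(outputs, fame_threshold=0.8):
    """Represent only the famous unitrackers, as pointers (names), not as
    the recognition machinery itself -- so any inaccuracy lives in the
    unitrackers, not in this layer."""
    return {name for name, activation in outputs.items()
            if activation >= fame_threshold}

# Other systems (language, memory) consume the pointers, not the detail.
print(sorted(semantic_pointers(unitracker_outputs)))  # ['apple', 'red']
```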

    *
    [really-end-rant]


    1. I actually don’t think I’ve ever seen Graziano weigh in on the philosophy of information. I suspect it’s a rabbit hole he’s not particularly interested in going down. What are some issues you think he misses by not doing that? Or how would his meaning change by the various conceptions of “information” out there?

      On the “accuracy” language, as you note, he clarifies a lot of the nuances in the paper. I think we owe him some interpretational charity for the point he’s trying to make: that the models aren’t accurate in a scientific sense. They’re accurate enough for the adaptive functionality they evolved to fulfill, but not accurate enough to provide us with good information on the architecture of the mind. And while a model of an apple is accurate enough for our purposes, there’s a lot about an apple it doesn’t tell you, and which most of the time you don’t care about.

      (You’ll probably say we should give the illusionists the same interpretational charity. I agree, and I think Graziano would as well since he used to be okay with the label. But like most illusionists, he likely found himself spending too much time clarifying what the term means. I think Susan Blackmore has a good handle on making those types of clarifications, but most illusionists seem to struggle with them.)

      I’m mostly onboard with the semantic pointer discussion, although I see that as happening at a lower level than what Graziano is discussing. In that sense, I see his theory and the semantic pointer one as completely compatible. (Although as usual, I wonder if we’re using that phrase in the same manner. For me it just means references in one region to concepts in another region.) All that said, I’m sure the details of the theory will have to be adjusted over time as new evidence comes in.


      1. I guess my issue with information is that certain aspects of consciousness, namely “about-ness”, subjectivity, and intentionality, can’t be explained without invoking an understanding of information processing. And phrases like “information existing as a model in the brain”, “information flowing” from here to there, etc., are not conducive to that understanding.

        *


        1. Consider this: what if you didn’t care about explaining subjectivity, and all you cared about was explaining the causal processes and states involved, from sensory stimuli to us talking about our subjectivity and intentionality? Do the same issues arise?


          1. Kinda not sure what you’re asking. I’m inclined to say the same issues arise except for any issues about subjectivity (because a priori we don’t care about that).


          2. So if we’re trying to understand why a robot does what it does, do the issues arise? If not, then as we add more sophistication to the behavior, at what point do the issues arise?


          3. Again, not sure what you’re asking. I’m trying to get a handle on “the issues”. The issues, and the consciousness, of the robot are the same, depending on the type of information processing. More sophisticated processes could lead to more sophisticated consciousness, or not.


  4. “When the model is accessed by higher cognitive functions, as a result of its inaccuracies and simplifications, it causes us to think we have some non-physical property, an intangible ‘experienceness’.”

    Since thoughts come and go, and beliefs can change, would that imply that, according to this theory, one could at least in principle become unconscious or stop having any experience by ceasing to have this thought/belief of having this intangible “experience”? That is, just by a thought or a change in belief system, could I switch between being a sentient and a non-sentient, zombie-like being?


    1. I hadn’t thought of that objection. I have a different one: the explanation of people’s belief in non-physical experiences is usually that they make other mistakes. For example, the Mary’s Room thought experiment – which is just a philosopher’s crystallization of common thoughts – confuses ontological claims with conceptual ones. Or more simply, some people fail to see the difference between “I am absent any awareness of any physical cause of my thoughts” versus “I am aware of the absence of any physical causes of my thoughts”.

      Also, physicalists and non-mystical thinkers about consciousness have simplified models of world and self too, being just as human as the next person. How come we don’t believe in these non-physical properties?


    2. The attention schema’s creation and access actually happen below the level of consciousness. It just affects our internal perception of how our mind works. You can’t simply elect to stop doing it. Although in the paper, Graziano does look at brain injured patients who seem to lose the ability. The effect is that their consciousness is damaged. I wouldn’t describe it as extinguished (they still have exogenous attention), but it is missing important aspects.


    3. It seems to me that “accessing the model with higher cognitive functions” is invoking exactly what consciousness is about and why it seems intangible. His explanation invokes and requires exactly what he is trying to explain. His model doesn’t actually explain anything, but it might be descriptive, although how well it does at that would need some empirical investigation. Does the model make any useful predictions that only it can make?


    1. Doesn’t this show again that it is almost impossible to talk about consciousness without referring to consciousness itself?

      Accessing models, executive systems, attention, higher cognitive functions – aren’t all of these simply new terms (somewhat pseudoscientific in fact) for what we might simply call “thinking”? In fact, they comprise a model based on the very same introspection which is claimed to be flawed.


      1. I have exactly the opposite conclusion. I think Graziano demonstrates how to relate consciousness to non-conscious processes in the brain. As we discussed before, that unavoidably requires linkage language so people understand that this is about consciousness. But if I tried to describe the game of Tetris to you only in terms of the technicalities, without any reference to user-level concepts, you’d have a hard time realizing what I was talking about. No metaphysical dilemma involved.


        1. Does he think that non-conscious processes are the reason why consciousness seems intangible? Is there any evidence for that?

          I don’t recall frequently modeling my own brain processes. I recall modeling various aspects of the world, but not my own brain processes. Which ones do you model? If this happens unconsciously, then how would I know the model was inaccurate without bringing it into consciousness? What do I compare the model with to know it is inaccurate? How does any of this happen without the conscious activity itself?

          Accessing the model with higher cognitive functions already involves exactly the “experienceness” he is trying to explain.

          I don’t see this as anything more than a one-trick pony designed to explain why consciousness seems non-physical, but it’s much too cute an explanation. I think the intangibleness of consciousness comes not from discrepancies and simplifications in an internal brain model but from the direct contrast between thought and the world.

          In our daily life most of us have little problem in how we conceive mind and matter. Mind is that which goes on inside us, other people, and probably animals. Minds seem real but unsubstantial. We can imagine things in our mind which have never happened or ever will happen. With our mind we feel a degree of control over ourselves and the things in the world we can affect, but the control is contingent on our actions outside it. Matter is that thing out there that consists of you and me in our bodies, and everything else down to rocks and stones. Matter seems substantial and solid. It has the stubbornness of being what it is. We may alter it, but it always behaves in predictable ways. Melted ice becomes water. It doesn’t become gasoline. It is firm and hard. Even the wind (think about tornados) can be firm and hard.
          This commonsensical view of mind and matter completely eludes some philosophers and a surprising number of scientists.


          1. You answered a lot of your own initial questions. The model happens below the level of consciousness. It surfaces in our ability to know what we’re attending to and control what we focus on. Control requires a model to compare with.

            And no, we can’t see around it. It is the very knowing of what happens in our attention. Without it, we can’t perceive it. And like many visual illusions, we can’t try really hard and see through or around it, because it’s our only (internal) way of getting information about it.

            So how do we know it’s inaccurate? Evidence. Scientific study of the brain. I didn’t go into it in the post, but in the paper Graziano discusses the evidence, including what happens when brain injury damages a person’s ability to monitor and control their attention. He goes into a lot more detail in his books.

            Much of the rest of your comment is an intuitive description of what the theory attempts to explain. But apparently with a refusal to consider that there might be an underlying explanation.


          2. Underlying explanation isn’t needed. I can imagine in consciousness all sorts of things – impossible things, future things, past things – but none of them come into existence by consciousness alone. As long as they exist only in consciousness, they are intangible from a perspective of the real physical 3+1 dimensional world. In other words, the intangibility is obvious and doesn’t require explanation.

            The rest of the argument is just simply that the brain causes the intangible things that we think.

            If that’s it, then the view is worthless.


  5. I find functionalism to be the shortest path to absurdity of all the materialistic models of consciousness out there.

    “When the model is accessed by higher cognitive functions, as a result of its inaccuracies and simplifications, it causes us to think we have some non-physical property, an intangible ‘experienceness’.”

    Talk about a lame and woefully short-sighted evaluation: that explanation sounds more like an answer a sixth grader would provide on a pop quiz hoping to win the favor of his teacher than a well thought out assessment. But then again, Michael Graziano works in adult day care at Princeton; so go figure…

    To repeat a previous comment: a commonsensical view of mind and matter completely eludes many philosophers and a surprising number of scientists.


      1. Functionalism creates its own long dark tunnel of contradiction by denying that consciousness exists. “Oh, excuse me” said the functionalist, “I’m not denying that consciousness exists, I’m just saying that consciousness is not what we think it is… consciousness is what the function of computationalism looks like through the dissociative boundary of error.”

        It’s the antithesis of the same dark tunnel of contradiction created by the rationale of idealists. “Oh, excuse me” said the idealist, “I’m not denying that matter exists, I’m just saying that matter is not what we think it is…. matter is what consciousness looks like through the dissociative boundary of error.”

        Go figure🧐


        1. I think that the problem I was having grasping Graziano is that I thought what he was trying to explain was so much more than what he does explain.

          I thought he was trying to climb a mountain when he just climbed a molehill.

          The mountain is explaining why and how the physical brain needs to generate subjective experience.

          The molehill is explaining why our subjective experience seems somewhat mysterious and not able to be explained by a physical brain.


  6. I am totally on board with Graziano’s view that the key is to understand the nitty-gritty of what information content and structures need to be present to support our experience and the contents of consciousness, in order to strip out the mystery and get us to a manageable reverse-engineering problem.

    While I’m also happy with AST – Attention Schema Theory – (or semantic pointers to unitrackers as James frames it), for me that is only addressing (literally addressing, in computing terms) the ‘read’ side of accessing the content of our mind. There is also a ‘write’ side that is missing in his account. I am of the view that just as attention is crucial to select what data will influence what we are aware of, what I am currently calling ‘routing’ is necessary to send a pattern to a selected location, the equivalent of ‘write’ in computer memory. Where ‘attention’ would have originated for selecting sensor data, ‘routing’ would have originated in selecting what part of the body to move. Each is now co-opted to support reading and modifying selected content of mind as the top loop of consciousness, and this lets us compute, in a Turing sense.

    The reason I want to include both ‘read’ and ‘write’, or input and output, in an account of consciousness is that it is all about control of the mind and body, and control has to embrace both the input and output.


    1. I don’t know if you caught my reference to another post that covered Keith Frankish’s idea of a response schema, which might be getting at what you’re talking about. It was a short response paper, so it doesn’t rise to any kind of developed theory, but I thought it was pretty interesting.

      The response schema


      Joseph LeDoux also talks about various schemas that may exist in the prefrontal cortex, like a fear schema, which somewhat resonates with this idea.

      Although from the perspective of the higher level systems, these models of potential responses can be seen as just more reading. So it may not be getting at what you’re looking for. But if we get into the details of motor output, we start getting into the motor cortex, the basal ganglia, and then down into the midbrain regions.

      Or do you just mean getting information to the various subsystems? If so, then I think that’s largely handled with the global workspace level attention dynamics. The fame-in-the-brain dynamics gets it to action systems as well as ones doing more analysis of the content. Of course, the global workspace is a broad description that papers over a lot of complexity. But the cortex being as heavily interconnected as it is, with recurrent connections between most regions, seems to provide the infrastructure for that type of distribution.

      Speaking of control of the body, what did you think of Graziano’s discussion of the body schema? He really just uses it as a ladder to the attention schema, but it resonates with Damasio’s proto-self, which he sees playing a role in homeostasis. From a control theory perspective, not to mention the phantom limb issues Graziano mentions, that type of model has to exist, and there seems like plenty of evidence for it.


      1. Thanks for the pointers; I went back to re-read the section about the body schema more carefully, and it does talk about motor control, although it doesn’t go far down that road. I then followed the link to your blog on Frankish’s response schema, which seemed promising, although the mixing in of feelings seemed to me an additional factor rather than a core feature of the response schema.

        For me there are 3 different things here:
        – attention, whether to sensory data or to information within mind. This is addressing and selecting things on the ‘input’ side of control. Going along with it are the patterns to be detected in the attended data, each detected pattern generating a different output which can be used for multiple purposes.
        – routing, which sends the generated patterns where they are needed, whether to a motor system or to a process within mind. This is addressing and selecting things on the ‘output’ side of control, just like writing to an address in a conventional computer.
        – feelings, bottoming out at valence, which determine what good looks like for the organism, enabling a single coordinated set of attention and actions to be selected from multiple possibilities.

        To give an example: I am hungry (feeling with negative valence). I look around and see an apple and an orange. I prefer the apple (expected higher future valence), so I pay attention to the apple, which enables me to track its location relative to me and guides my eyeballs accordingly. I could reach for it with my right or left hand, but it is closer to my left hand, so I route the apple-tracking information to my left hand and trigger the ‘reach for’ action set.

        A possibility with the ‘routing’ mentioned above is that it could be boiled down to attention. That is to say, my left hand control system pays attention to the apple tracking data and my right hand ignores it. However, this doesn’t get around the need to task only one motor system to lead the ‘reach’ action, so ‘routing’ seems better to capture that selection process.
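
        To put the read/write split in toy code (everything here is invented for illustration, nothing more):

```python
def attend(percepts, preference):
    """Input side ('read'): select which percept to track, by expected
    valence."""
    return max(percepts, key=lambda p: preference[p["kind"]])

def route(pattern, effectors):
    """Output side ('write'): send the tracking pattern to exactly one
    motor system, like writing to one address in memory -- here, the
    nearest hand."""
    target = min(effectors, key=lambda e: abs(e["pos"] - pattern["pos"]))
    target["task"] = ("reach_for", pattern["kind"])
    return target["name"]

percepts = [{"kind": "apple", "pos": -0.3}, {"kind": "orange", "pos": 0.6}]
preference = {"apple": 0.9, "orange": 0.6}  # hunger sets the valences
effectors = [{"name": "left_hand", "pos": -0.5, "task": None},
             {"name": "right_hand", "pos": 0.5, "task": None}]

tracked = attend(percepts, preference)
print(f"Routing {tracked['kind']} tracking data to {route(tracked, effectors)}")
```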


        1. For getting at what you’re looking for, your best bet might be to delve into a straight neuroscience textbook, or one on cognitive neuroscience. These tend to be outrageously expensive, although you can usually find used copies of older editions for a reasonable price. I reviewed a couple of them a while back.

          Sources of information on neuroscience

          Another paper worth checking out is Bjorn Merker’s paper, “Consciousness without a cerebral cortex”. I think his definition of consciousness is too low level for most people’s understanding of it, but that definition may be getting at a piece of what you’re looking for: integration for action. He points out that the final integration for action takes place in the midbrain region.

          Is the brainstem conscious?

          But the action in this region is all reflexive. The thing is, there are connections from this region going up into the forebrain. Most of the connections coming down are inhibitory, indicating that the main role of the forebrain relative to this region is to decide which actions to allow or inhibit.

          There’s a second level of this in the basal ganglia, which is where learned habits are thought to be instantiated. Again, a lot of connections with the frontal lobe cortex, indicating that this final layer can allow or inhibit actions at either the basal ganglia or midbrain layer.

          So at the lowest level, we have the final integration for action. Above that is the sub-cortical forebrain, the integration for habitual responses. And above that in the frontal lobes, the integration for planning. At least that’s how I currently understand it.
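
          In toy form, that layered allow/inhibit picture looks something like this (my framing, not anything from Merker or Graziano, and all the names are invented):

```python
def midbrain(stimulus):
    """Lowest layer: reflexive integration for action."""
    return "orient_and_approach" if stimulus == "movement" else None

def basal_ganglia(action, suppressed_habits):
    """Middle layer: learned habits can veto or pass the reflex."""
    return None if action in suppressed_habits else action

def frontal_lobes(action, plan_inhibits):
    """Top layer: planning vetoes whatever conflicts with current goals."""
    return None if action in plan_inhibits else action

reflex = midbrain("movement")
habitual = basal_ganglia(reflex, suppressed_habits=set())
final = frontal_lobes(habitual, plan_inhibits={"orient_and_approach"})
print(final or "reflex inhibited by the planning layer")
```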

          Another book worth checking out that’s pretty good is Elkhonon Goldberg’s book on the frontal lobes: https://www.amazon.com/New-Executive-Brain-Frontal-Complex/dp/0195329406/

          The communication between the frontal lobes and the rest of the brain, I think, is likely best described by the global workspace / selective attention dynamics. Crucially, the frontal lobes are part of that dynamic. They can help push a particular coalition into the workspace.

          Hope some of this helps.


  7. Perhaps consciousness boils down to this question: what does it take for a system to be fully aware that something is the case, that there is something it feels like for that to be the case?

    Simplistically, computer data could include a single bit that is 0 when that thing is not the case and 1 when it is, and other programmed functions could take that bit into account… but that seems nowhere near enough to claim consciousness.

    I think the core difference here is that the significance of that bit, its meaning and purposeful use in a software system, is held in the mind of the programmer. The brain (or an artificial neural network) has no programmer and has to add the meaning in another way.

    That is, by making that bit addressable for reading or writing in terms of its contextual meaning, in a way that addresses questions like: if that bit is set to 1, what other things are likely to be the case? How am I likely to feel, good or bad? What actions could I take? How might those actions change how I feel or what I can sense? What do I need to attend to in order for those actions to be successful? This spectrum of consequences is the ‘feels like’ of consciousness.

    Attention Schema Theory seems to be the right sort of thinking, but too narrowly focused on the input or sensory side of the problem. We also need the schema or model to link the addressing of the bit to feelings (affect, valence), actions that can be taken to change outcomes, and the probability of other things also ‘being the case’ or ‘becoming the case in the future’.

    This broader schema must enable a single coordinated action and attention set for the organism as a whole, one capable of taking into account the setting of the bit. That’s necessary to achieve unity of consciousness and purposeful action by the entity. Furthermore, it must be able to set and read bits that are about its own choices, acting as a single thing. This imposes a hierarchical nature on the schema, and on the way it translates into coordinated attention and action sets intended to maximise valence at the whole-organism level.
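
    A toy contrast between a bare bit and a bit embedded in contextual meaning, purely to illustrate what I’m gesturing at (the structure is invented):

```python
bare_bit = 1  # means nothing to the system; meaning lives in the programmer

# The same bit, made addressable through its web of consequences:
contextual_bit = {
    "state": 1,                          # the thing is the case
    "implies": ["food is nearby"],       # what else is likely the case
    "valence": +0.7,                     # how that is likely to feel
    "actions": ["approach", "grasp"],    # what the organism could do
    "attend_to": ["location", "reach"],  # what those actions require
}

def feels_like(bit):
    """The 'feels like' is this spectrum of consequences, not the bit."""
    return (f"valence {bit['valence']:+.1f}, expecting {bit['implies']}, "
            f"affording {bit['actions']}")

print(feels_like(contextual_bit))
```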


    1. My answer to this kind of question is usually to refer to functional hierarchies that include reflexes, predictive models of the body and environment, memory, deliberation (action and sensory simulations), and introspection. Or we can talk in terms of dimensions of functionality, evaluative richness, unity, memory, temporality, etc.

      What we call consciousness depends on a vast array and hierarchy of functionality, most of which is not conscious, but which is necessary for it to happen. It’s why I think theories of consciousness have to be built on top of cognitive neuroscience ones.


    2. Well said. The pathological tendency of philosophy to over-emphasize perception and under-emphasize action is depressingly familiar, but it’s really discouraging to see some scientists going down the same path.


  8. BTW, have you ever looked at Susan Blackmore’s take on Frankish, which might also apply to Graziano?

    “Frankish’s illusionism aims to replace the hard problem with the illusion problem; to explain why phenomenal consciousness seems to exist and why the illusion is so powerful. My aim, though broadly illusionist, is to explain why many other false assumptions, or delusions, are so powerful. One reason is a simple mistake in introspection. Asking, ‘Am I conscious now?’ or ‘What is consciousness?’ makes us briefly conscious in a new way. The delusion is to conclude that consciousness is always like this instead of asking, ‘What is it like when I am not asking what is it like?’ Neuroscience and disciplined introspection give the same answer: there are multiple parallel processes with no clear distinction between conscious and unconscious ones. Consciousness is an attribution we make, not a property of only some special events or processes. Notions of the stream, contents, continuity and function of consciousness are all misguided as is the search for the NCCs”.

    https://www.susanblackmore.uk/articles/delusions-of-consciousness/

    I would add that most of the stuff about models and schemas is likely wrong too except as extremely high-level description with no explanatory power.


    1. I actually had read that paper, but it’s been a while and it was good to refresh. Blackmore does have some differences in emphasis from Frankish’s focus on phenomenal consciousness. (It’s worth noting that Graziano himself had a paper in that same volume which also had some issues with Frankish’s view.) Her take is a weaker form of illusionism, albeit very radical in its own way, but it fits well with Graziano’s views.

      Interestingly, she criticizes global workspace theory, but endorses Dennett’s fame-in-the-brain view, which Dennett himself describes as a variant of the global workspace. Now that I think about it, it was her description of GWT in the 2005 edition of her book, “Consciousness: A Very Short Introduction”, that turned me off it for a long time. But in the 2017 edition (published after this paper) she acknowledges that there are interpretations of GWT more in line with her views.

      Anyway, I think her views resonate well with Graziano’s. This snippet is worth noting, particularly for its discussion of models:

      Returning to my example, as I asked the first question a self-model was briefly constructed of me climbing, counting and looking at the ground but not including looking at my watch. If we could look inside the brain in sufficient detail I guess we would see some of the hill-climbing processes linked to the body schema, and to self-modelling and questioning processes while the watch-looking and many other processes were going on separately. When I asked the second question, lots more processes, including the watch-looking, were combined to make an even more complex whole. When I stopped asking about consciousness and got on with climbing the hill the temporary coherence dissolved and normality resumed. The multiple parallel processes just carried on, none linked to a model of self as observer; none either in or out of consciousness; none either conscious or unconscious.

      I think the only issue Graziano would have with this is that the models she says only come into being on demand, he’d probably argue are actually always there as one of those parallel processes, available when needed.

      If you don’t think there are models and schemata, what do you think is there instead?


      1. Frankish’s views in particular seem more and more absurd the more I think about them.

        We administer an anesthetic to Frankish then ask him his views on consciousness. For some reason, we get no response. He doesn’t even move. The anesthetic wears off and Frankish recovers. Maybe he is slightly groggy. We ask him his views on consciousness.

        “It’s an illusion”, he replies.

        What could that possibly mean? It must be referring to a definition of consciousness (or an assumption or belief about it) that is far beyond the mental state of alertness, awareness of environment, and ability to act and move as a physical entity. Surely it isn’t a coincidence that these abilities seem to be accompanied by subjective experience. But if that subjective experience is an illusion then how does it make any difference in the external abilities? If it doesn’t make any difference, then why does it exist?

        I think it is not consciousness per se that Frankish is confused about but something more like meta-consciousness, which is the same thing Blackmore is talking about. The problem I see is that there really is something it is like when I am not asking what it is like. Consciousness isn’t an illusion, because it would make no difference if it were an illusion. We might have wrong or incorrect beliefs about it, but those beliefs are in the meta realm. In fact, as soon as we attempt to make any determination at all about its nature (illusion or not), we are immediately forced to invoke some euphemism (“accessing the model”) for a conscious process itself.

        Regarding models, schemata, even the global workspace, I think those are actually not much different from naïve ideas about consciousness. Although they may claim empirical support, I think they derive mostly from introspection. I’m not sure there has been much advance since William James. There is a churn of terminology, and information-based terms seem in vogue today. I don’t know what will replace these approaches, but I think we are talking about dynamical systems, which will require a different approach.


        1. I think you’re ignoring the nuances in Frankish’s view, which is trivially easy to do with illusionists. He doesn’t think phenomenal consciousness exists. But he does think access consciousness and wakefulness exist. I have my differences with this view, but they’re mostly terminological.

          I do think we’ve made a lot of progress. Most of it isn’t controversial if we remove the c-word from the discussion, but focus on perception, discrimination, memory, affects, attention, reportability, etc. To the extent someone’s view of consciousness includes those things, it’s progress on consciousness.

          But for someone who only counts progress as explaining consciousness as some kind of objective property, there’s been no progress. I’m not expecting that to change.


          1. Yeah, right. Consciousness is an illusion but (wait) some kinds of consciousness exist; otherwise, how could anybody understand what I mean when I say “Consciousness is an illusion”. What I mean it’s not my kind of consciousness that is an illusion. It’s the other kind over there. It’s the kind that hasn’t studied enough science to know that the world is physical and mental things don’t exist (except that kind of consciousness that knows mental things don’t exist).

            Yeah, I get the nuances.


  9. Mike,

    When you say introspection is unreliable, what do you mean by introspection?

    I’m curious because this notion is often used as an argument to dismiss the notion that a meditator, for instance, can be considered able to reliably report on accessing a non-dual, fundamental state of awareness. And I want to leave my mystical tendencies on the shelf for a moment, and try to work with a tangible example.

    It seems to me that it’s reasonable to note that practiced meditators can repeatedly achieve states marked by certain brain wave conditions and/or heartbeat rhythms or other physiological markers that I believe have been reasonably correlated to a “meditative” state. I don’t even know which waves are in the literature but that’s not super important here.

    My question would be this: if we leave aside the interpretations of those states, is it reasonable to conclude that persons can become able to sense/experience these physiological conditions and identify consciously when they are / are not occurring?

    I ask because I think it is likely persons can learn to become reliably aware of a great many subtle states that likely have physiological correlates—some we know about and some we likely do not yet—and, further, that persons can likely distinguish between ideas and thoughts that come into consciousness while in such states versus ideas and thoughts generated by thinking processes that occur in other states.

    Would this be considered plausible or unreliable and implausible?

    As an example, creative states invoked when solving a problem, doing fiction writing, trying to “come up with” (interesting term, that, as in coming up to higher conscious levels) a more efficient computer code for a certain problem, etc., all feel to me as if they can be distinguished from the type of thinking that happens when assembling IKEA furniture, for instance, or doing long division by hand, or following a recipe to prepare a meal.

    Do you think these proposed differentiations of “types” of conscious processes could be considered reliable and hallmarks of truly different brain states or physiological states? And do you think all people would have the same level of ability to make such distinctions accurately?

    Michael


    1. Michael,
      Whenever I use a term without qualification or clarification, I generally mean it in its most common usage. In this case, by “introspection”, I mean the examination of one’s own mental or emotional state: self-reflection, examining our thoughts and feelings.

      In your case, I think I’d ask what you mean by “meditator” and “fundamental state of awareness”.

      But to your specific questions, I don’t know. When I say introspection is unreliable, I mean it’s unreliable as a source of information on the architecture of the mind. There’s just too much it doesn’t have access to. I think there’s also plenty of psychological research showing that it has its limits on understanding our own motivations.

      Which isn’t to say that it’s useless. When used in its evolved roles, it gets the job done. It usually works pretty well, for instance, when assessing our confidence in how well we know something.

      But for the specific meditative scenarios you’re discussing, I’m not sure. One writer to maybe check out on this is Susan Blackmore. https://www.susanblackmore.uk/
      She has a long history of engaging with meditative practices. She criticizes HOT theories for their inability to account for the states you’re describing, and touts meditation for helping to realize how illusory many of the properties we assign to consciousness are.

      Sorry. Wish I had a more definitive answer.


  10. I agree with your take on AST as an add-on to GWTs, so just a couple of brief, more general observations…

    (a) The nature of “belief” has been much discussed by philosophers, and equating it with having a model is, at best, debatable.

    (b) The notion of consciousness as a kind-of self-model is hardly new. It was certainly around in AI circles in the 1970s (sorry, don’t have any references to hand). The difficulty is that any kind of self-model must include much of what is in fact non-conscious – e.g. proprioception. Also, I used to know a biologist with an interest in philosophy, who delighted in pointing out that even bacteria have to have some sense (in some sense! :-)) of the limit of their “bodies”.


    1. Thanks Mike!

      (a) What would you say it is instead?

      (b) I’d say that the construction of the model draws on a lot of information, including a lot that never makes it into consciousness. And of course that construction isn’t itself conscious, only the results, and even then only if they happen to be accessed.

      Hey, I featured a book a few posts back that said bacteria have minds (for a particularly loose sense of “mind”), so I know what your biologist friend means. 😉


      1. Yes, I saw the bit re bacteria having “minds”. To me that’s a misuse of a perfectly good term.

        I have no doubt that we have a kind-of model of ourselves, but I cannot see how the conscious aspect of it can be disentangled from it without being able to define consciousness by some other means in the first place. For me, proprioception is a particularly interesting edge case. It must be a part of the overall model, but I only really become aware of it as such, on rare occasions when it fails.

        What is a belief if not a model? That’s a difficult question. Generally, I side with dispositionalists: to believe something is to be disposed to aver, by thought, word or action, a particular state of affairs. I think that’s a fairly orthodox philosophical position these days. Unfortunately, it would seem that most dispositionalists view this in representational terms, and I don’t hold with representationalism. Specifically for models, I feel that proceeding to the notion of a belief as a “something” (e.g. a model) is a case of unwarranted reification.

        One striking feature of beliefs, well known to pollsters (both reputable and less reputable ones), is that what one avers as one’s belief very much depends on framing (to use Kahneman’s term) – on how one asks the question and/or on preceding questions. One can, of course, square this with a notion of a belief as a model, but then one has to pile up so many modal considerations that the proverbial man on the Clapham Omnibus would look bewildered and retort that when he says he believes something he does not believe any of *that*.

        The way I envisage my own beliefs is in terms of a continuous landscape, which represents the totality of my dispositions and in which I can easily move in some directions, can be prodded by framing into others and find some directions (almost) inaccessible. (NB: it is an analogy, not a model, so is allowed to have serious flaws. 🙂 ) I certainly cannot point to a particular patch of that landscape and say that it corresponds to a particular belief. The landscape channels the way my mind tends to move — it is not a representation of anything.


          I don’t have an issue with the dispositionalist stance, as far as it goes. And I’ve long been uneasy with the word “representation” due to its inclining us to view it as some compact thing somewhere in the brain, when in reality it’s likely to be a network of conclusions spread throughout functional regions. (For the same reason, I’m not wild about the word a lot of neurobiologists prefer, “image” or “image map”, which has even stronger connotations along those lines.)

          I also don’t have an issue with the landscape you describe, but it strikes me as a lower level description of something we will eventually be able to find higher level descriptions of, where functional words like “model” or “schema” may well be appropriate, with the caveat that the models are really diffuse networks of converging neural firing patterns, which can combine, overlap, or even subtract in various ways depending on how the system is currently being stimulated.

          Put another way, a model is a constellation of conclusions (or dispositions, predictions, or inferences if you prefer). It makes sense to call it a model due to the associated causal effects the constellation generates.


            1. While “representation” certainly does suggest some localisation, I also have another issue with it. While I am glad that philosophy has moved away from seeing everything through the lens of “the linguistic turn”, I would hate to lose sight of LW’s central lesson: language (and conceptualisations in general) is not about representations. The whole “mirror of nature” concept is deeply flawed – language is a tool which we adapt to suit *our* purposes in the world. It is shaped both by the world and by our concerns/needs/desires/wishes, and its “representations” are diverse, mutable and at least partly contingent, reflecting shifts in our culture(s) and our knowledge.

            By the same token, I am not at all convinced that a higher-level descriptive language of the dispositional landscape is feasible, except as a set of heuristics and more-or-less vague generalisations. That’s the lesson I take from Davidson, supported by the failure of the original “symbol twiddling” wave of AI and by the difficulty of grasping what exactly it is that deep neural network successors are doing, despite the fact that we have a complete grasp and control of their detailed configurations.

            So, no real disagreement — I am just less optimistic than you seem to be. 🙂


  11. Your post has reminded me of a topic that actually interests me but that I have not considered for a long time. Quick answers are not what I am about, so it takes me a long time to come up with my answer, but I hope that it will be read anyway.

    I’ll start with a few offhand remarks about the basics of AST as reflected in your post.

    “In order to have a belief, and talk about that belief, it’s necessary for that belief to exist as information, as a model in the brain.” What is that supposed to mean? Having a belief goes along with certain brain processes, that much is common ground, as is the fact that information is stored and processed in the brain. But what is it that renders it a belief?

    “The brain’s models are never accurate.” That’s just as obvious as it is trivial. If we knew everything exactly, we would have far fewer problems. Only the pope claims to be infallible.

    “The brain constructs a model to represent process A.” But this is nothing but another process, A’. Consequently, the brain would have to construct another model representing A’, and so forth.

    “When the model is accessed by higher cognitive functions […] it causes us to think we have some non-physical property, an intangible “experienceness”. Where do the higher cognitive functions originate from, and what are they about? Aren’t these also processes in the sense of the above-mentioned definition?
    In his paper, Graziano writes: “[…] as a result of the simplified and imperfect information in that model reaching higher cognition, people believe, think, and claim to have a physically incoherent property.”

    But if we think that we have experience, this is already an irreducible experience. It is quibbling to draw a distinction between being conscious and believing ourselves to be conscious. Having experience necessarily goes with believing that I have this experience. And if I believe that I have experience, this already presupposes having consciousness. Experience is not explained by making it a belief. The irreducible fundamental, then, is that we believe we have experience.

    Graziano claims that his attentional schema “can explain why we believe, think, and claim to have a subjective experience associated with selected objects that changes from moment to moment.”

    However, I consider that his theory cannot solve these issues. What bothers me most about Graziano is his hubris when he claims to have found the definitive truth and calls on his fellow scientists “to choose between an explanation that is fundamentally magical and an explanation that is mechanistic and logical.”


    1. On not being about quick answers, no worries. It often takes me a lot of time to formulate them as well. Typically when I do respond quickly, it’s only because it’s on a topic I’ve already given a lot of thought to.

      A lot of people have focused on my use of the word “belief”. I really boiled down multiple terms that Graziano uses (thoughts, beliefs, claims, etc). If I’d known people would fixate on it so much, I would have chosen a different term, or just repeated the list Graziano uses. But as to what a belief is, I’d say it’s a prediction, or more typically, a set of related predictions.

      On it being obvious that the brain’s models are never accurate, Graziano has a note somewhere in the paper that these points are often considered trivially true, but people seem prone to forget them in their deliberations about consciousness. Put another way, they might be trivially true, but the implications aren’t obvious.

      I wonder why you think the brain needs to construct a model of process A’ (the model of A). If A’ gives it what it needs to control A, it’s not clear why an additional model, A’’, might be adaptive. It seems like we’d only need A’’ if A’ itself needed to be controlled.

      Of course, the fact that we’re now talking about A’ means we have a version of A’’ in our heads. But our brains didn’t construct this A’’ directly from A’ (assuming A’ exists). We have it because Graziano described the theory to us. And he has it because of the theoretical and empirical work he’s been doing. A’’ might still turn out to be wrong, although we are aware of our attention and have some ability to control it, so it seems hard to imagine it’s completely wrong.

      I don’t think Graziano is claiming we don’t have experiences. Just that our introspective knowledge and judgment of those experiences isn’t accurate, because it’s done with those brain models that are never accurate. A consequence of that trivially true observation. And those inaccuracies give us an impression of something irreconcilable with physics.

      Graziano makes the statement on choosing mechanistic explanations over magical ones in relation to his general approach, not his theory in particular. As I noted in the post, I agree with that approach. But for those who think any causal explanation is impossible, it probably is going to seem hubristic. Yet the explanation, and ones like it, exist.


  12. You wrote: “Just that our introspective knowledge and judgment of those experiences isn’t accurate, because it’s done with those brain models that are never accurate. A consequence of that trivially true observation. And those inaccuracies give us an impression of something irreconcilable with physics.”
    However, claiming that something cannot be derived from the physical laws we use to describe the tangible world is not the same as claiming it is irreconcilable with them.

    In his paper, Graziano writes “[…] as a result of the simplified and imperfect information in that model reaching higher cognition, people believe, think, and claim to have a physically incoherent property. The property they claim to have is an intangible experienceness, the hard problem, the feeling of consciousness.”

    However, it is not evident why this follows from the principles he postulates. Further assumptions are required, such as those made in the AST. According to Graziano, these explain why we believe, think, and claim to have a subjective experience.

    But there is no reason why an attention schema would result in any kind of consciousness. And there is no need for an attention schema to explain why we believe, think, and claim to have a subjective experience. Rather, as David Rosenthal points out, “our consciousness is simply the way our mental life subjectively appears to us. The stream of consciousness is a stream of appearances of psychological states that we seem to be in. And those appearances are themselves real; it is not illusory that we seem subjectively to be in various psychological states.”


    1. Graziano does note in the paper that while his approach could be considered a form of illusionism, he’s not onboard with that terminology. I don’t think he would contest Rosenthal’s description. But he would note that the appearances Rosenthal mentions, while effective in our day to day lives, can be misleading in ways that affect our judgments about them. These effects are not an error on our part, nor are they anything we can see around. They are just part of the reality.

      I do think Graziano at times oversells AST as if it’s the solution, although he seemed careful at the end of the paper to admit it wasn’t a full accounting.

