The magic step and the crucial fork

Those of you who’ve known me for a while may remember the long fascination I’ve had with Michael Graziano’s attention schema theory of consciousness.  I covered it early in this blog’s history and have returned to it multiple times over the years.  I still think the theory has a lot going for it, particularly as part of an overall framework of higher order theories.  But as I’ve learned more over the years, it’s more Graziano’s approach I’ve come to value than his specific theory.

Back in 2013, in his book, Consciousness and the Social Brain, he pointed out that it’s pretty common for theories of consciousness to explain things up to a certain point, then have a magic step.  For example, integrated information theory posits that structural integration is consciousness, the various recurrent theories posit that the recurrence itself is consciousness, and quantum theories often assert that consciousness is in the wave function collapse.  Why are these things in particular conscious?  It’s usually left unsaid, something that’s supposed to simply be accepted.

Christof Koch, in his book, Consciousness: Confessions of a Romantic Reductionist, relates that once when presenting a theory about layer 5 neurons in the visual cortex firing rhythmically possibly being related to consciousness, he was asked by the neurologist Volker Henn how his theory was really any different from Descartes’ locating the soul in the pineal gland.  Koch’s language and concepts were more modern, Henn argued, but exactly how consciousness arose from that activity was still just as mysterious as how it was supposed to have arisen from the pineal gland.

Koch said he responded to Henn with a promissory note, an IOU, that eventually science would get to the full causal explanation.  However, Koch goes on to describe that he eventually concluded it was hopeless, that subjectivity was too radically different to actually emerge from physical systems.  It led him to panpsychism and integrated information theory (IIT).  (Although in his more recent book, he seems to have backed off of panpsychism, now seeing IIT as an alternative to, rather than an elaboration of, panpsychism.)

Koch’s conclusion was in many ways similar to David Chalmers’ conclusion, that consciousness is irreducible and fundamental, making property dualism inevitable, and leading Chalmers to coin the famous “hard problem” of consciousness.  These conclusions also caused Chalmers to flirt with panpsychism.

Graziano, in acknowledging the magic step that exists in most consciousness theories, argued that such theories were incomplete.  A successful theory, he argued, needed to avoid such a step.  But is this possible?  Arguably every theory of consciousness has these promissory notes, these IOUs.  The question might be how small can we make them.

Graziano’s approach was to ask, what exactly are we trying to explain?  How do we know that’s what needs to be explained?  We can say “consciousness”, but what does that mean?  How do we know we’re conscious?  Someone could reply that the only way we could even ask that question is as a conscious entity, but that’s begging the question.  What exactly are we talking about here?

It’s commonly understood that our senses can be fooled.  We’ve all seen the visual illusions that, as hard as we try, we can’t see through.  Our lower level visual circuitry simply won’t allow it.  And the possibility that we might be a brain in a vat somewhere, or be living in a simulation, is often taken seriously by a lot of people.

What people have a much harder time accepting is the idea that our inner senses might have the same limitations.  Our sense of what happens in our own mind feels direct and privileged in a manner that outer senses don’t.  In many ways, what these inner senses are telling us seem like the most primal thing we can ever know.  But if these senses aren’t accurate, much like the visual illusions, these are not things we can see through, no matter how hard we try.

Cover of 'Rethinking Consciousness' by Michael Graziano

In his new book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, Graziano discusses an interesting example.  Lord Horatio Nelson, the great British admiral, lost an arm in combat.  Like many amputees, he suffered from phantom limb syndrome, painful sensations from the nonexistent limb.  He famously claimed that he had proved the existence of an afterlife, since if his arm could have a ghost, then so could the rest of him.

Phantom limb syndrome appears to arise from a contradiction between the brain’s body schema, its model of the body, and its actual body.  Strangely enough, as V. S. Ramachandran discussed in his book, The Tell-Tale Brain, the reverse can also happen after a stroke or other brain injury.  A patient’s body schema can become damaged so that it no longer includes a limb that’s physically still there.  They no longer feel the limb is really theirs anymore.  For some, the feeling is so strong that they seek to have the limb amputated.

Importantly, in both cases, the person is unable to see past the issue.  The body schema is simply too powerful, too primal, and operates at a pre-conscious level.  It can be doubted intellectually, but not intuitively, not at a primal level.

If the body schema exerts that kind of power, imagine what power a schema that tells us about our own mental life must exert.

So for Graziano, the question isn’t how to explain what our intuitive understanding of consciousness tells us about.  Instead, what needs to be explained is why we have that intuitive understanding.  In many ways, Graziano described what Chalmers would later call the “meta-problem of consciousness”, not the hard problem, but the problem of why we think there is a hard problem.  (If Graziano had Chalmers’ talent for naming philosophical concepts, we might have started talking about the meta-problem in 2013.)

Of course, Graziano’s answer is that we have a model of the messy and emergent process of attention, a schema, a higher order representation of it at the highest global workspace level, which we use to control it in top-down fashion.  But while the model is effective in providing that feedback and control, it doesn’t provide accurate information for actually understanding the mind.  Indeed, its simplified model of attention, portraying it as an ethereal fluid or energy that can be concentrated in or around the head, but not necessarily of it, is actively misleading.  There’s a reason why we are all intuitive dualists.

At this point we reach a crucial juncture, a fork in the road.  You will either conclude that Graziano’s contention (and similar ones from other cognitive scientists) is an attempt to pull a fast one, a cheat, a dodge from confronting the real problem, or that it’s plausible.  If you can’t accept it, then consciousness likely remains an intractable mystery for you, and concepts like IIT, panpsychism, quantum consciousness, and a host of other exotic solutions may appear necessary.

But if you can accept that introspection is unreliable, then a host of grounded neuroscience theories, such as global workspace and higher order thought, including the attention schema, become plausible.  Consciousness looks scientifically tractable, in a manner that could someday result in conscious machines, and maybe even mind uploading.

I long ago took the fork that accepts the limits of introspection, and the views I’ve expressed on this blog reflect it.  But I’ve been reminded in recent conversations that this is a fork many of you haven’t taken.  It leads to very different underlying assumptions, something we should be cognizant of in our discussions.

So which fork have you taken?  And why do you think it’s the correct choice?  Or do you think there even is a real choice here?

60 thoughts on “The magic step and the crucial fork”

  1. “The body schema is simply too powerful…”

    “If the body schema exerts that kind of power, imagine what power a schema that tells us about our own mental life must exert.”

    Power is a word that we all throw around with reckless abandon and without any regard to its meaning or its origin. For me, power is not only the fork in the road, power is the road. Before I can understand anything about causation, let alone consciousness, I need to know if power is an objective state of the world or a subjective state of mind. If power is a subjective state of mind, then the entire universe becomes irrelevant and ultimately collapses into the paradigm of solipsism. In contrast, if power is an objective state of the world…. well, what next?

    Peace


  2. Mike,
    Let me add a few comments to clarify the ambiguous meaning of subjective so your viewing audience can understand what the word subjective means, exactly. Mind is not a subjective experience, mind is an objective experience, one that is radically indeterminate. Mind uses rationality, the only tool in its toolbox to make sense of that indeterminate-ness. It does this by building intellectual constructions. Most of those constructions are handed down through one’s immediate culture. Or, as one matures, one can choose from a variety of constructions intrinsic to cultures worldwide, with which, one can build one’s own personally tailored world view.

    Rationality is a discrete binary system just like the immune system. The immune system is not mind, even though it is a form of consciousness. As a system, the mind possesses a greater intensity of power and therefore a higher degree of self determination built into that system. As a result of this dynamic, mind has the capacity to be prejudiced, bigoted and biased, whereas the immune system does not.

    As with the system of mind, when confronted with something new, the immune system contrasts that unknown against its own catalogue of what is known. Those knowns are objective knowns which correspond to an objective reality. If the unknown does not match anything within the catalogue of knowns, the immune system immediately begins creating an architecture of diverse antibodies until a match is found with which to fight off the unknown pathogen.

    Mind works exactly the same way with one fundamental distinction, mind possesses a greater degree of self determination. When mind is confronted with something new, the first thing mind does is contrast that unknown against a catalogue of perceived knowns. Unfortunately, those knowns are not objective knowns, they are subordinative knowns, which simply means that those perceived knowns are subordinate to the “power” of interpretation, i.e., a degree of self determination which the mind possesses, hence the word subjective. As a species, homo sapiens are completely incapable of being objective even when confronted with overwhelming, objective, evidence. That in a nutshell is James’ subjective gap. It’s called prejudice, bigotry and bias, the infamous subjective gap.

    Peace


    1. Lee,
      On “power”, my use of that word wasn’t meant to be particularly precise or rigorous. It just seemed the right way to describe how much these models dominate our self awareness (either at the body or mental level). Indeed, these models arguably are that awareness.

      But more broadly, I think the word “power” means different things in different contexts. It can be subjective in some cases, like what I just described, or it can refer to something like power coming from an electrical circuit, which is obviously much more objective. I don’t think there is any one true definition for that word (or any word). Language is utterly relative to its culture. It’s one of the reasons scientists generally prefer mathematics.

      I agree that humans struggle to be objective. Indeed, even when we think we’re succeeding at it, we’re frequently just fooling ourselves. We can overcome our individual blind spots by checking our observations and conclusions with others, and our cultural blind spots by involving people outside of our culture or niche, but overcoming species level blind spots is more difficult. A lot of scientific methodology is aimed at compensating for these biases.


      1. “I don’t think there is any one true definition for that word (or any word).”

        Mike, I didn’t ask for a definition, I asked whether power is an objective state of the world or a subjective state of mind.

        “It can be subjective in some cases, like what I just described, or it can refer to something like power coming from an electrical circuit, which is obviously much more objective.”

        What makes power coming from an electrical circuit “more” objective than any other use of the term, other than an arbitrary distinction created by the science of physics? This is not a difficult question: Power is either an objective state of the world or it’s a subjective state of mind. If power is a subjective state of mind, then power is whatever we say it is, just like everything else is whatever we say it is. Agree or disagree, that position is irrevocable solipsism.

        Peace


  3. Is your argument that, because our personal experience of consciousness can suffer illusions that, therefore, consciousness must be scientifically tractable?

    I really don’t think that conclusion follows from that premise. The clauses don’t even seem particularly related.

    And why would our ability to use our consciousness to do science not then also be unreliable or prone to illusion? If I can’t trust the most fundamental thing I know, why would I trust science?


    1. Consider that there is the reality, and then there is our perception of that reality, a model of it. If the model is known to not always be accurate, should we take what it’s telling us, in the absence of any corroboration, as something to be explained? Or should we focus on why it’s telling us that?

      An older example Graziano used to use was that of a patient who believed there was a squirrel living in his head. The doctor wanted to figure out why the patient believed there was a squirrel in his head. But the patient insisted that they really needed to figure out how the squirrel got there. Which approach was likely to be productive?

      Our ability to use our consciousness, just like our ability to use our outer senses, actually is unreliable and prone to illusion. We can use introspective data, just as we can use sensory data, but neither can be accepted uncritically. Both need corroboration.

      Ultimately, I think we trust science, or any other knowledge, because it increases the accuracy of our predictions of future conscious experiences. Is there any other metric by which we can judge knowledge that ultimately isn’t an elaboration of that standard?


      1. “Ultimately, I think we trust science, or any other knowledge, because it increases the accuracy of our predictions of future conscious experiences.”

        What you are referring to here needs to be clear and succinct. Whether you realize it or not Mike, you are referring to synthetic a priori judgments. And in full agreement with your assessment, there is no other metric by which we can judge knowledge that ultimately isn’t an elaboration of that standard. That’s a really cool, accurate and definitive conclusion. But that conclusion itself begs for an explanation. How are synthetic a priori judgments possible? It is a “matter of life and death” to science, metaphysics and to human reason that the grounds of this kind of knowledge be explained.

        Peace


        1. I have one final anecdote to my previous post: It is a “matter of life and death” to science, metaphysics and to human reason that the grounds of this kind of knowledge be explained.

          One does not have to explain synthetic a priori judgements if one is content playing in the sandbox of discourse with all of the other adolescents.

          Peace


        2. Lee,
          I don’t think synthetic a priori judgments can exist in isolation. They are built using premises from a posteriori knowledge, and tested by how well they predict future experience.

          To make sure I didn’t mangle that with incompetent use of philosophical terminology, I’ll restate it that theory is built on observation (conscious experience) and tested by how well it predicts future observations. All observation is theory laden, and all theory begins and ends in observation. They form an epistemic loop.


          1. “I don’t think a synthetic a priori judgments can exist in isolation. They are built using premises from a posteriori knowledge, and tested by how well it predicts future experience.”

            Absolutely. Nevertheless, there is one caveat missing from your assessment. Synthetic a priori judgments express conditions on the possibility of experience, possibilities which are often outside the scope of a posteriori knowledge, such as the ontological distinction of power.

            Do you care to address my original comments on power?

            Peace


          2. Lee,
            On power, I would but I have to admit that I’m not sure what you’re looking for. On the question about what makes electrical power objective, I’d say it’s the fact that many observers agree on what happens with it, and that that consensus is part of a model that consistently makes accurate predictions.

            I should mention a view that you might strongly disagree with. As far as I can see, all we ever experience is the subjective, that is, reality modeled in the manner our own mind models it. We can only form theories about the objective. When those theories consistently make accurate predictions, we say it’s an objective fact.


          3. “On power, I would but I have to admit that I’m not sure what you’re looking for.”

            It’s not a trick question, you know exactly what I’m looking for. Here it is again: Is power a subject, or is power an object? No offense Mike, deflection might be an effective tactic to distract from an obvious point being asserted, but deflection is not a winning strategy in the art of discourse.

            “As far as I can see, all we ever experience is the subjective, that is, reality modeled in the manner our own mind models it. We can only form theories about the objective. When those theories consistently make accurate predictions, we say it’s an objective fact.”

            You’re right, I do disagree and here’s why: The only thing we ever experience is the objective, to believe otherwise reduces our own experience to the paradigm of solipsism. Contrary to what you might choose to believe Mike, solipsism is exactly what your model asserts.

            One final anecdote: I do not necessarily believe that homo sapiens are too dumb to figure out causation, because causation never fails. Causation is flagrantly obvious to any mind that refuses to surrender to prejudice, bigotry or bias. In contrast, I do believe that homo sapiens are too self absorbed and too arrogant to admit to themselves that reality is anything other than what we as solipsistic self-models say it is, for no other reason than the notion of an objective reality is beyond the reach of our own control.

            Our primary experience is all about power and control, it always has been and it always will be. Unless or until one is willing to address the genetic defect in the underlying form of reasoning and rationality nothing will change, because nothing can change.

            Have fun my friends in the sandbox of discourse, for my research is now finished.

            Peace


        3. Actually, it’s not true that all theory ends in observation. Some, string theory comes to mind, make predictions that can’t be observed. Which cripples our ability to assess those theories, and makes whether they’re science controversial.


      2. I entirely agree on the efficacy of science and, in particular, rational thought! But I fear it throws the baby (of irreducible, irrefutable phenomenal experience) out with the bathwater (of deceived perceptions and thoughts).

        You can’t fool something that isn’t there to be fooled.


      3. re squirrels in the head…

        You know the old Woody Allen joke about the man who goes to his doctor for advice, because his brother believes he’s a chicken. The doctor, of course, suggests the brother should see a psychiatrist.

        “Oh, no,” the man protests, “we can’t do that. We need the eggs!”


        1. What needs to be there for something to be fooled? If a hacker manages to fool a security system into letting them in, is that just a metaphorical use of “fooled”? Or is anything capable of making a decision sufficient?

          Woody Allen’s logic is, as always, impeccable. 🙂


    2. Wyrd,

      This is Hoffman’s view that our consciousness is a reality hack and unreliable for understanding the world so maybe even less reliable for understanding itself or what underlies itself. Or you could take Strawson’s view that consciousness is actually all we do know and the rest of the world is the mystery. I think they are both right in their way.

      Chalmers’ hard problem is a manufactured philosophical problem that is unsolvable scientifically because science operates on common experience. You can’t study the subjective using common experience, so I’m surprised somebody would try to develop a “scientific” theory for the subjective.

      I don’t see how this exactly has to do with the “limits of introspection.” The usual discussion we have here is about the science of consciousness. The science of consciousness must be more than introspection, or I guess we can forget about ever having an objective understanding of it.


      1. FWIW, for me, Chalmers’ “hard problem” is just a statement to the effect that we don’t (yet) have the physics to describe how phenomenal experience arises from an information processing system.

        What makes it “hard” is that we have no real idea what those physics might be.


  4. If you come to a fork in the road, don’t take it. It is an imaginary fork. We need neither to discredit subjective experience nor to accept the first interpretation of it that pops into our heads. More later when I have access to a real keyboard.


    1. “exactly how consciousness arose from that activity was still just as mysterious as how it was supposed to have arisen from the pineal gland”

      This, no doubt, is the “magic”. Only there’s nothing magical about it! It says that something is mysterious, meaning we can’t understand it, meaning we can’t reason our way from *other* premises to conclusions like “being subject to that brain activity would feel like THIS (call up a subjective-experiential memory here).” It says that there’s a *conceptual* gap between objective and subjective concepts.

      Which is perfectly compatible with the absence of an ontological/metaphysical gap. In fact, at least some versions of materialism *predict* the existence and stubbornness of the conceptual gap. A few don’t, like traditional Functionalism. So much the worse for Functionalism.

      Neither our perceptions of the objective nor subjective world should be discarded when it is not necessary to do so – and here it is not necessary to do so. But we need to distinguish between perceptions and interpretations/stories (even if these do lie on a continuum). If an interpretation runs into trouble, try another! For example: instead of saying “there is no physical cause of my thoughts” say “I have *no immediate access* to any cause of my thoughts”.


      1. Paul, you’ve discussed your type B materialism before, the idea that phenomenal consciousness can’t be reduced to physical processes, that they can only be shown to correlate with certain processes. I’m a type A materialist, I think we can develop an a priori link between the physical and the phenomenal.

        But doing so means realizing that phenomenality is a construction, not something to necessarily be accepted at face value. It’s a bit like asking that the light sabres in Star Wars be explained. There’s no real explanation if the light sabres are assumed to be real laser swords that can deflect off each other. But there are explanations if all we seek to explain is the appearance, their phenomenality.

        “instead of saying “there is no physical cause of my thoughts” say “I have *no immediate access* to any cause of my thoughts”.”

        I agree. It could also be worded that we have no access to the causes and construction of our phenomenal experiences. But this seems like just another way of saying the same thing to me.


        1. Type B materialism (plus some neuropsychology) implies that there should be a persistent conceptual gap. Type A predicts that it should go away. So far, score one for my team.


        2. Can you unpack “phenomenality is a construction”? In psychological terms, construction is a type of phenomenality, but here phenomenality itself is being created.


          1. I wasn’t aware of the psychological term, and the wikipedia on it isn’t very clear, so any resemblance would be pure coincidence.

            By saying it’s a construction, I only meant that it’s prepared, assembled by lower level machinery in the brain, and what is prepared may be effective in helping us make adaptive decisions, but not in understanding how the mind works.


  5. Isn’t this Graziano the same who thinks we will be able to upload minds?

    I hope whatever my mind gets uploaded to is based on something better than a model that “doesn’t provide accurate information for actually understanding the mind”.


      1. So we could have a model of the mind that is almost perfect but we wouldn’t be able to understand it ourselves?

        If all he is saying is that sometimes we can fool ourselves in regard to our own mind, then I’m not seeing anything new in the argument or anything to draw any broader conclusions from.


  6. I don’t know if I’ve taken a path – can you help me determine whether I have? My current leaning is that there is some property of the material in brains (whether at the atomic, molecular, or cellular level) which is necessary for phenomenal experience (1st person) but which is not detectable in the same way from outside the system (3rd person). I don’t want to affirm that this property is separate from the material and properties we observe from the outside, and I don’t want to affirm that it is accumulative, such that the smallest bit of material has it in some sense, and I do want to affirm that the outside perspective could feasibly develop sufficient correlations to be able to describe what the inside perspective is like, or even manipulate other material to have the same phenomenal experience. Have I taken a fork or am I still stuck at the junction?


    1. This is why I’ve been leaning toward some variation of the electromagnetic theories.

      https://broadspeculations.com/2019/12/01/em-fields-and-consciousness/

      Here’s another article:

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3139922/

      The abstract:

      Local field potentials and the underlying endogenous electric fields (EFs) are traditionally considered to be epiphenomena of structured neuronal network activity. Recently, however, externally applied EFs have been shown to modulate pharmacologically evoked network activity in rodent hippocampus. In contrast, very little is known about the role of endogenous EFs during physiological activity states in neocortex. Here we used the neocortical slow oscillation in vitro as a model system to show that weak sinusoidal and naturalistic EFs enhance and entrain physiological neocortical network activity with an amplitude threshold within the range of in vivo endogenous field strengths. Modulation of network activity by positive and negative feedback fields based on the network activity in real-time provide direct evidence for a feedback loop between neuronal activity and endogenous EF. This significant susceptibility of active networks to EFs that only cause small changes in membrane potential in individual neurons suggests that endogenous EFs could guide neocortical network activity.


      1. I saw your posts and comments on that. It’s interesting. But even if true, it would still be the case that the outside view of the EM fields does not directly expose the phenomenal properties of the inside view, would it not? To be clear, I don’t think it’s surprising that this would be the case under any theory, but it does create a barrier that forces us to at some point just say either (a) the internal property is so highly correlated with the external description that we should just accept that they’re completely identical, or (b) the lack of 100% identity between the internal and external descriptions still requires more explanation.


        1. “…does not directly expose the phenomenal properties of the inside view..”

          Ultimately there will always be the subjective gap of Chalmers’ hard problem. No scientific theory can bridge that gap. That is a philosophical issue, not a scientific one. It’s very much like the question of what an electron is like internally. We can measure an electron’s external properties, but we can’t know what it is like internally. What is it really? Consciousness is the same, except we do have a subjective view of our own consciousness.


    2. Travis,
      Your description seems aspirational, to achieve a physical explanation for phenomenal experience. But the fact that you describe it as an aspiration makes me wonder whether you see obstacles to achieving it. If you do, what would you say those obstacles are? Do they come from your first person assessment of phenomenal experience?

      If so, does science need to explain the obstacles? Or does it just need to explain why you think they’re there? If the former, then I think you’re intuitively privileging introspection. If you’d accept the latter, then I think you’re open to the implications of introspection not being reliable.


      1. The primary obstacle is my inability to conceive how a description could fully capture subjective experience in objective terms. I’ve even had the thought that maybe this barrier is nothing more than a limitation of our cognition. So I think I would be content if science just explained why I think the obstacle is present. But it’s going to have to be a damn good explanation with plenty of evidence – and I’m a bit skeptical that this will happen. So I feel like maybe I’m standing at the junction and leaning toward the path you’ve taken, but am unwilling to commit to it.


        1. On explaining why we think the obstacle is present, that’s largely what Graziano’s attention schema theory is all about. Is there conclusive evidence for it yet? There is evidence, but we’re far from a full accounting. Every theory of consciousness has to issue promissory notes in terms of explanation, but I perceive the attention schema, and similar types of theories, to have the smallest IOUs right now. Only time will tell.


      2. Mike,
        A few things I wanted to share after having stewed on this a bit more:
        1. I was confused as to why you referred to my description as “aspirational”, but upon re-reading it I realize that came from my use of the word “want”. To clarify, that was not intended to reflect aspiration, but rather trepidation about committing to any claims. So “I do want to affirm that…” is probably more accurately phrased as “I suspect that ….”
        2. I’m finding myself increasingly agreeing with the notion that answering the meta-problem is tractable and would be sufficient. Though I was previously aware of the concept, I honestly hadn’t thought much about the implications, so thanks for putting it in terms that lead us to compare and contrast that path with the alternatives. A few considerations which have contributed to this perspective are:
        a) I’m not seeing why a distrust of introspection is as important as your post suggests. It seems that our introspection about the intractability of explaining the subjective in objective terms would actually just be confirmed if we were to arrive at an explanation for that intractability. Am I misunderstanding what you meant here?
        b) Gödel set an example of what it could mean to demonstrate that some solutions are conceivable but still not available to us (and maybe this is one way to interpret Hofstadter’s position on consciousness?). So there’s precedent for something akin to a rigorous solution to Chalmers’s meta-problem.
        c) Solving the meta-problem really is an end-game. Once explanation X shows why we aren’t able to explain Y, there’s no point in trying to explain Y unless you see problems with explanation X. That’s why (most) mathematicians are no longer pursuing completeness. Chalmers seems to think there would still be a hard problem, but at that point wouldn’t it be the impossible problem?

        So I think I’m a bit further down that 2nd path, but I really don’t like the idea of putting too much credence into any one conclusion at this stage of the game, so I suspect I’ll be hanging out near the junction for a while.

        1. Travis,
          Thanks for sharing those thoughts. It sounds to me like you have a good handle on the issues.

          On a), it comes down to what we’re trying to explain. If introspection tells us that we have a glow or aura around our brain that is channeled by attention and provides non-physical qualia, then no physical explanation will work. A magic step will always be necessary.

          On the other hand, if we focus on why we think we have that glow or aura, essentially why we think there is a ghost in the machine, solving that issue seems to admit to purely physical information processing explanations. (I realize my verbiage here, using words like “glow”, “aura”, or “ghost” may be seen as a caricature of the actual views, but I’m just using it to quickly get the idea across.)

          On c) to be fair, solving the meta-problem isn’t guaranteed to be the end of the game. I personally think the most straightforward solutions to it do end it, but until one of those solutions racks up enough empirical success, we can’t rule out that solving it might lead to something actually more like the traditional hard problem and we’re stuck with some panpsychism like situation. Again, very much not where my money is.

          I can understand your skepticism, although I personally have very little doubt about introspection’s unreliability. If evidence arose showing that the more problematic intuitions it produces actually are pointing to some truth, I’d be prepared to change my views, but only with that evidence.

          1. I don’t doubt that introspection can be unreliable. But I also don’t doubt the existence of the introspective experience itself. So my observations were primarily in reference to potential Gödel-like explanations that don’t do away with the experience but rather show us why we cannot explain it using the “formal system” of objective descriptions.

          2. I don’t doubt the introspective experience either. While I often agree with the illusionists, I also agree with the argument that if experience is an illusion, then the illusion is the experience. But I do doubt what the experience is telling us, particularly in cases where it’s our only source of information.

            I’m not familiar with Gödel-like explanations. I have heard of his incompleteness theorems, and the many arguments based on them (and did a post a while back on why they don’t rule out AGI), but I’m not sure if that’s what you’re referring to.

          3. People were trying to identify a complete set of axioms and theorems to address the entirety of mathematics (and logic) and instead of doing that, Gödel showed that it can’t be done. So the thought is that something similar could happen with consciousness. Instead of figuring out how to explain the subjective in objective terms (language, math, etc…) somebody could come along and provide a clear explanation of why that’s impossible (as suggested by my intuitions). Chalmers has shined a spotlight on the difficulty, but nobody has yet been acclaimed as having done anything like what Gödel did and that kind of resolution seems feasible to me.

            I agree that the incompleteness theorems don’t rule out AGI, but I’m open to the possibility that they are relevant to the question of consciousness in some way, perhaps as a component of the kind of explanation noted above.

          4. Maybe so. The mysterians could ultimately be proven right. But it seems to me that once Gödel is ruled out for AGI, there needs to be something extra about consciousness in particular that would make it vulnerable. The issue is that the theorems involve a system’s ability to make precise proofs about itself, but if we involve other external systems and the possibility of probabilistic knowledge, then the limitations seem to be compensated for.

            But it might be that we’ll never have an intuitive first person understanding of how we can have first person experiences. We may eventually have an objective account of it, but it’ll never feel right. I think that’s entirely plausible. I wouldn’t see it as a failure though, since we have to leave our intuitions at the door for many scientific theories.

          5. To say that incompleteness does not prevent AGI is not to say that it is not relevant. My take is that this is generally what Hofstadter suggests – that consciousness requires a self-referential system and the subjective/objective boundary is tautological. That does not preclude us from building systems which achieve this.

            To be clear, I’m neither advocating nor rejecting this position. My primary intent was to use incompleteness as an exemplar of the kind of solution to the meta-problem which would confirm introspection about the inability to bridge the gap without simultaneously requiring something that isn’t empirically accessible. Incompleteness does not imply some new domain; it just says that there are limits to what we can do.

          6. Ah, ok, I think I see what you’re saying now. You’re using Gödel specifically related to introspection. I do think incompleteness theorems could put limits on the system’s ability to know itself.

            My feeling is that the system is even more limited than that. I don’t think its internal access even gets it to the point where the incompleteness theorems might be a limitation. But I certainly agree that if it did, those theorems would be an issue.

          7. As noted, my original intent was not to apply incompleteness in any specific way, but rather to just use it as an exemplar – and your interpretation is one way that it could be applied.

          8. If you can’t explain it in terms of the physical, then you are pretty close to the idealist position that mind/consciousness is more fundamental than the physical.

  7. Hi Mike!
    I’m afraid I may be taking the other road…I think that what you’re calling “inner” sense is largely reliable, primarily because I see it as foundational for all types of knowledge. (Wyrd made a similar point above).

    One thing I think we need to be careful of is making a judgement about all of introspection based on the unreliability of the senses—after all, this is where Descartes begins, and you know where he ends up. Even there, our senses aren’t unreliable all the time, they couldn’t be. I think the same must be true of introspection…or at least we might be forced to take the most general aspects of it as reliable to avoid undermining all possibility of knowledge, even scientific knowledge.

    1. Hi Tina,
      I certainly wouldn’t argue that our inner senses are unreliable in all things. Obviously similar to the outer senses, they wouldn’t be very adaptive if they were. They’re reliable enough to effectively allow us to survive, to get us through day to day activities.

      But there’s a lot of psychological research showing we often don’t have as much insight into the workings of our minds as we think we do. We do know that introspection can be wrong. It evolved to tell us some things about the state of the mind, not to tell us how the mind itself works.

      No source of knowledge is infallible. All knowledge is ultimately probabilistic, beliefs with varying levels of certitude. Ultimately our only real measure is whether a belief increases or decreases the accuracy of our predictions of future experiences.

      When introspection provides information that can be corroborated with other sources, we should accept it. (And a lot of scientific research does.) But when it’s the only source of information, we should be cautious.

  8. I took the same fork as you, and I did it intuitively without even knowing that I’d made a choice. It seems obvious to me – just as obvious as knowing that God doesn’t exist. I find it really hard to imagine the world in any other way.

    1. That’s pretty much the same for me. Science long ago required me to discount so many intuitions that discounting the introspective ones felt, well, intuitive. When I first read about consciousness, I felt like I must be missing something. For me, it remains a fascinating problem, but not the intractable one the Philip Goffs of the world find it to be.

  9. I was very excited when I first read about Dr. Graziano’s Attention Schema proposal. It was the first theory of consciousness I had come across (and I had read about many!) that actually offered some kind of explanation of how the subjective feeling of awareness comes about. However, I think it falls short of being a full explanation in a number of ways: first, because self-awareness is not the same as consciousness, and second, because the theory doesn’t seem to tell us much about how the schema is built and how it is used. I agree that self-awareness is the basic requirement for my consciousness, but I think attention, memory, emotion and feeling are all part of my consciousness as well. I think my brain builds schemas of all brain processes, not just attention, and that these form part of the overall schema that describes my whole self, which is what makes “me”.
    I have been developing a set of proposals over the last 8 years that I have documented on a new website hierarchicalbrain.com

    1. I agree about the promise of Graziano’s theory, and that it can’t be the full answer. Are you familiar with Higher Order Thought theories? They include a lot of the schemas you’re pondering. (Although many HOTs might have commitments you’re not onboard with.) Graziano, a few years ago, talked about a “standard model” of consciousness that’s been developing over time, at least among functional theories. I think he was a little too keen to put his own theory as the center, but I agree with the overall idea.

      A standard model of consciousness?
