The difficulty of subjective experience

As I indicated in the Chalmers post last week, phenomenal consciousness has been on my mind lately.  In the last few days, a couple of my fellow bloggers, Wyrd Smythe and James Cross, have joined in with their own posts.  We’ve had a lot of interesting discussions.  But it always comes back to the core issue.

Why or how do physical systems produce conscious experience, phenomenality, subjectivity?  Why is it “like something” to be certain types of systems?

On this blog, when writing about the mind, I tend to focus on the scientific investigation of the brain.  I continue to believe that the best insights into the questions above come from exploring how the brain processes sensory information and produces behavior.

For example, we got into a discussion on the Chalmers post about color perception and its relation to the different types of retinal cones, processing in the retinal layers, and processing in the brain.  These dynamics don’t yet provide a full accounting of the experience of color, but they do provide insights.

But while these kinds of explorations do narrow the explanatory gap, and I think they’ll continue to narrow it, they’ll probably always fail to close it completely.  The reason is that the subjective never completely reduces to the objective, nor the objective to the subjective.

In the case of the subjective, we can only ever reduce phenomenal properties so far, say down to an individual quale or quality, like redness.  These qualities are like a bit in computer software.  A bit, a binary digit, is pretty much the minimal concept within the software realm, being either 1 or 0, true or false.  No matter what the software does, it can’t reduce a bit any further.

On the other hand, looking from the outside, a bit maps to a transistor (or equivalent hardware) which absolutely can be reduced further.  Software can have a model of how a transistor works, but it can’t access the details of the transistors it is currently running on, at least not with standard hardware.
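
To make the analogy concrete, here’s a minimal sketch (purely illustrative, with made-up values, not anything about a real machine’s internals): from inside the software, a bit is an opaque primitive, while the transistor it ultimately rests on can only be modeled, never directly inspected.

```python
# Toy illustration of the abstraction boundary described above.
# From software's point of view, a bit is the smallest unit there is:
state = 0b1011
lowest_bit = state & 1      # either 0 or 1; nothing finer is visible from here

# Software can *model* the kind of circuit a bit rests on...
def nand(a: int, b: int) -> int:
    """A truth-table model of a NAND gate."""
    return 0 if (a and b) else 1

# ...but it has no way to read the voltages of the particular transistors
# it is currently running on (at least not on standard hardware).
print(lowest_bit, nand(1, 1))   # prints: 1 0
```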

Similarly, once we’ve reduced experience to fundamental phenomenal properties, we can’t reduce any further, at least not subjectively, no matter how hard we introspect.  But looking from the outside in, these phenomenal properties, these qualia, can be mapped to neural correlates, which can then be further reduced.

Of course, most phenomenal primitives are much more complicated than a bit, and will involve far more complex mechanisms.  Some philosophers argue that we’ll never know why those complex correlates map to that particular subjective quality.  But I think that gives up too quickly.  The fact is that different experiences will have many overlapping neural correlates.  The intersections and divergences can be analyzed for whatever functional commonalities they show or fail to show among phenomenal primitives, enabling us to get ever tighter correlations.

Something like this has already been happening for a long time, with neurologists mapping changes in patient capabilities and reports of experience to injuries in the brain, injuries eventually uncovered in postmortem examinations.  Long before brain scans came along, neurologists were learning about the functions of brain regions in this manner.  Increasingly higher resolutions of brain scans will continue to narrow these correlations.

But as people have pointed out to me, all we’ll then have are correlations.  We won’t have causation.  Which is true.  Causation will always have to be inferred.  However, that is not unique to this situation.  In empirical observation, correlation is all we ever get.  As David Hume pointed out, we never observe causation.  Ever.  All we ever observe is correlation.  Causation is always a theory, always something we infer from observed correlations.

Nonetheless, the explanations we’re able to derive from these correlations will never feel viscerally right to many people.  The problem is that our intuitive model of the mental is very different from our model of the brain.  And unlike the rest of the body, we don’t have sensory neurons in the brain itself to help bridge the gap.  So the intuitive gap will remain.

Similarly, in the case of the objective, we can never look at a system from the outside and access its subjective experience.  The objective cannot be reduced to the subjective.  As Thomas Nagel pointed out, we can never know what it’s like to be a bat.  We can learn as much as we want about its nervous system, and infer what its experience might be like, but it will always be from the perspective of an outsider.

So the subjective / objective gap can’t be closed completely.  But it can be clarified and bridged enough to form scientific theories.

But, some will say, none of this answers the “why”?  Why does all this neural processing come with experience?  Why doesn’t it just happen “in the dark”?  Why aren’t we all philosophical zombies, beings physically identical to conscious ones, but with no experience?

I think the best way to answer this is to ask what it would actually mean to be such a zombie.  Obviously when we say “in the dark” we don’t mean that it would be blind.  But what do we mean exactly?

Such a system would still need to receive sensory information and integrate it into exteroceptive and interoceptive models.  That activity would still have to trigger basic primal reactions.  The models and reactions would have to be incorporated into action scenario simulations.  And to mimic discussions of conscious experience, such a system would need some level of access to its models, reactions, and simulations.

In other words, if it didn’t have experience, it would need something very similar to it.  We might call it pseudo-experience.  But the question will be, what is the difference between experience and pseudo-experience?

The contents of experience appear to have physical causes and appear crucial to many capabilities.  That makes experience functional and adaptive in evolutionary terms.  In the end, I think that’s the main why.  We have experience because it’s adaptive.

But the intuitive gap will remain.  Although like the intuitive gap between Earth and the planets, between humans and animals, or between life and chemistry, I think it will diminish as science makes steady progress in spite of it.

Unless of course, I’m missing something?

111 thoughts on “The difficulty of subjective experience”

  1. “We have experience because it’s adaptive.”

    Which explains why it’s valuable (which it clearly is), but not how it exists in the first place.

    “In other words, if it didn’t have experience, it would need something very similar to it.”

    Chalmers’ point is that all those things are functional. Computers already do many of them (without, we presume, phenomenal experience).

    Take it seriously or unseriously, the question remains. [shrug]


    1. “Chalmers’ point is that all those things are functional.”

      But what does that mean? He never describes exactly what non-functional thing he’s talking about. Just some hazy indefinable something that can never be explained functionally. But without clearly defining it, it’s impossible to even discuss.

      “Computers already do many of them (without, we presume, phenomenal experience)”

      I think we agree that computers still have a long way to go. For them to get, say, to the navigational intelligence of a field mouse, will that require a level of experience similar to the mouse’s? (Granting that machine experience would be qualitatively very different from a mouse’s.)

      “Take it seriously or unseriously, the question remains.”

      I am trying to take it seriously. I wouldn’t have posted on it if I weren’t.


      1. “But without clearly defining it, it’s impossible to even discuss.”

        Chalmers tries to define exactly what he means. I tend to agree with his points, so naturally I feel he made his case (hence my post on his papers). You disagree with some of his key premises, so equally naturally you don’t.

        As we agreed recently, you and I don’t have much new territory here. Be nice if we both live long enough to see some big advance that shifts the scale, eh?

        “For them to get, say, to the navigational intelligence of a field mouse, will that require a level of experience similar to the mouse’s?”

        I LOL because last night I watched the last episode of Black Mirror — the Miley Cyrus one with the father and his robot mouse hunter. (Which, I know, makes no sense if you haven’t seen it.)

        I know what you’re saying, that a system complex enough to fully implement a field mouse would necessarily, on some level, be a field mouse in any regard that matters.

        One thing I’ve said from the beginning: we will build machines (or fail to build machines), and the matter of computationalism will be decided. But it’s possible we’ll go on debating phenomenal consciousness for another two-thousand years.

        “I am trying to take it seriously. I wouldn’t have posted on it if I weren’t.”

        Sorry, it wasn’t meant as a pointed point. Just a(n attempt at a cute) reference to James Cross’ post. 🙂


        1. “Chalmers tries to define exactly what he means.”

          I’ll grant that he tries to convey his meaning. But from my perspective, he discusses a lot of functionality, then says that clearly there is something more than functionality here. He’s referring to an intuition for that supra-functional aspect of experience. But as far as I can tell, he never describes it, only alludes to it.

          I’m looking for an explicit statement. If you know where in any of the papers that happens, I’d be grateful if you could quote it or direct me to it. I wasn’t able to find it.

          “Be nice if we both live long enough to see some big advance that shifts the scale, eh?”

          I think we’ll continue to see AI make progress, but human level intelligence is probably past our horizon. We might see something like mouse level intelligence in our lifetimes.

          “But it’s possible we’ll go on debating phenomenal consciousness for another two-thousand years.”

          There might be AIs and uploaded entities debating whether natural humans were conscious in the way they are.
          https://www.smbc-comics.com/comic/2013-01-25

          “Just a(n attempt at a cute) reference to James Cross’ post.”

          James did call it the “unserious” problem, and I guess I did agree with that. But I really do take it seriously, if only because so many people are troubled by it.


          1. “He’s referring to an intuition for that supra-functional aspect of experience. But as far as I can tell, he never describes it, only alludes to it.”

            I’m not quite sure what you’re asking. An intuition? He doesn’t use the word at all in the first paper. In the second, “intuit” appears 22 times; I didn’t look closely at all of them. Does this approach what you’re asking:

            In any case, I think the basic intuitive divide in the field is not that between “materialists” and “skeptics”, but that between those who think there is a phenomenon that needs explaining and those who think there is not: that is, between type-A materialists and the rest.

            “James did call it the ‘unserious’ problem, and I guess I did agree with that. But I really do take it seriously, if only because so many people are troubled by it.”

            I really didn’t mean anything by it. If you look at what I said, I was referring to James (“unseriously”) and you (“seriously”).

            All I’m saying is that in either outlook, there is still the question of “what the heck is going on?”


          2. “but that between those who think there is a phenomenon that needs explaining and those who think there is not”

            I do remember that passage. But my question is, what is the phenomenon that requires explaining? Just saying the word “experience” doesn’t cut it, because that includes a lot of functionality. So what is this non-functional component that he takes to be obvious? He somewhat admitted the issue in this snippet.

            Dennett might respond that I, equally, do not give arguments for the position that something more than functions needs to be explained. And there would be some justice here: while I do argue at length for my conclusions, all these arguments take the existence of consciousness for granted, where the relevant concept of consciousness is explicitly distinguished from functional concepts such as discrimination, integration, reaction, and report.

            I can’t find anywhere where he “explicitly distinguished from functional concepts”. As far as I can tell, it’s simply asserted.

            So, I’ll ask you or anyone else who cares to answer. What about experience is non-functional?


          3. How about the breath-taking awe I felt when I looked through a telescope and photons from the Sun that had bounced off Saturn entered my eye.

            Or the astonishing feeling of watching, and hearing, a shuttle launch. I just saw it in an IMAX theatre once. It made me weep and I couldn’t speak for five minutes.

            Or the joy I feel reading GEB and going through Hofstadter’s little puzzles and explorations. It’s been making me LOL with delight. I get similar joy from exploring math in general.

            Or that delightful chill Terry Pratchett’s Discworld stories create upon reading the final paragraph and watching him, once again, seriously stick the landing. His stories are nearly perfect narrative arcs.

            Or the thrill of a really good thunderstorm. Or a really good book or movie.

            Or how my favorite candy tastes.

            Or what the desire to design and build something just for fun feels like. (Or how fun climbing a tree is. Or dancing. Or singing.)

            These are the kinds of things Chalmers means.


          4. Thanks for taking a shot at it. I fear my answer is going to make you groan from its philistine nature.

            First, I can see where you’re coming from in designating these as non-functional. In utilitarian terms for your life (getting food, surviving, finding mates), they might not be efficacious. (Although some prospective mates and friends might be attracted by your passionate responses. It might increase your social standing. So who knows.)

            But I think regarding this as non-functional in terms of the nervous system is a category error, a mismatch between layers of abstraction. In terms of nervous system function, all of these, as far as I can tell, are 100% functional. Due to the radical difference between modern environments and the ecological niche we evolved in, the affects may not always be adaptive.

            But misfiring affects are still affects. We still have situations where sensory patterns trigger associations that in turn trigger various affects, all of which are integrated for potential decision making. Nothing non-physical or exotic is needed to explain them. It is an interesting exercise to try to map our reactions in the modern environment to adaptive ones we might once have held, but again, no expansion of physics, or necessarily even neurology, is necessary for it.

            Unless I’m still missing something?


          5. “First, I can see where you’re coming from in designating these as non-functional.”

            Yes, I think it’s good to highlight that we’re talking two different kinds of “functional” here.

            The sense of a behavior being functional — an evolutionary adaptation of some kind. Elsewhere you noted, for instance, how dogs playing has a function of practice for them. (I agree, but in my experience there is also a component of joy, which I think might be a component you’re missing.)

            There is also the sense of “there is some function that accomplishes X” — this is the one that seems to matter in the context of phenomenal experience. This is the question I most took you to ask and which I tried to answer.

            But I did also try to name things that seemed to have low evolutionary value. In particular, the sense of awe or intellectual delight. (The awe is fairly intellectual, too, when you take a close look at it.)

            “But I think regarding this as non-functional in terms of the nervous system is a category error, a mismatch between layers of abstraction. In terms of nervous system function, all of these, as far as I can tell, are 100% functional.”

            (I don’t really understand the first sentence.) What functions do you imagine accomplish intellectual delight?

            “But misfiring affects are still affects. We still have situations where sensory patterns trigger associations that in turn trigger various affects, all of which are integrated for potential decision making.”

            Delight comes after any decisions, though, doesn’t it? It’s a result affect.

            In a way, doesn’t talking about “affects” kind of admit to the phenomenal properties of consciousness? (You avoid this trap, but Chalmers mentions in his book that materialists often use a phrase like “mind arises from the brain’s operation” which kind of admits to phenomenal properties.)

            Mike, you’re a determined Type-A materialist, so you’re going to slot any argument I can make into that worldview. Given that I’ve not been able to convince you that the brain is “special” (that is, unique and highly potent in the natural world) nor that what brains do (i.e. mind) is an objective property of reality, I don’t see any argument I could ever make that would sway you away from what you believe is true.

            Chalmers is arguing, kind of off the idea that zombies (p- or b-type) are a coherent idea, that the phenomenal content of thought is “extra.” Since it doesn’t appear required to explain behavior (functionality) — a point you’re making strongly here — then there needs to be some extra account for why there is something it is like to be us.

            The bottom line is that materialism does not yet account for it, although there are many theories from denying it to finding it in particles. We seem to agree (because you take the problem and discussion seriously) there is something to be discussed.

            As an aside about animal behavior: There is (AFAIK) some question as to why cetaceans breach. I think part of it must be for joy. Imagine being a water creature with an awareness of a different realm on the other side of an easy to penetrate barrier. How cool would it be if you could get up some speed and leap into that strange environment for just a moment. I think they do it for fun.


          6. “This is the question I most took you to ask and which I tried to answer.”

            Well, I was looking for non-functional in the sense that physics, or at least known physics, can’t deliver it, and functional in the sense that it can. That’s why I focused on functionality at the nervous system level. We can narrowly define “functional” to exclude things that physics can provide, but then we’re losing sight of the original reason for this discussion.

            “What functions do you imagine accomplish intellectual delight?”

            Intellectual delight strikes me as a reward coming from the neural reward systems.
            https://en.wikipedia.org/wiki/Reward_system
            Delight in learning something new or coming to understand something seems very adaptive to me, a specific instance of the primal emotion circuit Jaak Panksepp labeled “SEEKING”. But even if it sometimes fires in non-adaptive ways, as I noted above, that doesn’t mean it isn’t functional in the sense of being a physical mechanism.

            “Delight comes after any decisions, though, doesn’t it? It’s a result affect.”

            It does, but it also influences future decisions. What gets rewarded gets repeated.
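
            To put that in rough, toy terms (a sketch of my own, not a claim about the actual neural machinery): the reward arrives after the decision, but it updates a value estimate that the next decision consults.

            ```python
            # Toy sketch: a reward arriving *after* a decision still shapes future
            # decisions, because it nudges the value estimate consulted next time.
            values = {"explore_puzzle": 0.0, "skip_puzzle": 0.0}
            learning_rate = 0.1   # illustrative number only

            def register_reward(action: str, reward: float) -> None:
                values[action] += learning_rate * (reward - values[action])

            def next_choice() -> str:
                return max(values, key=values.get)   # what gets rewarded gets repeated

            register_reward("explore_puzzle", 1.0)   # the delight signal
            print(next_choice())                     # -> explore_puzzle
            ```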

            “In a way, doesn’t talking about “affects” kind of admit to the phenomenal properties of consciousness?”

            An affect definitely is a phenomenal property. That admission doesn’t mean that phenomenal properties aren’t physical mechanisms. We also know that affects come from the limbic system and brainstem underneath. If someone’s amygdalae get destroyed, they lose a substantial portion of their sense of fear. With sufficient damage to their ventromedial prefrontal cortex, they lose enough affective functioning (i.e. sentience) that their ability to make decisions is generally destroyed. Affects are deeply rooted in the physics of the brain.

            “You avoid this trap, but Chalmers mentions in his book that materialists often use a phrase like “mind arises from the brain’s operation” which kind of admits to phenomenal properties.”

            Which is why I don’t like that kind of language. It’s inherently dualistic. We all use it to some degree, and I’m sure if you look through my history you’ll find cases where I do. But in my case, it’s metaphorical, in the same sense as talking about evolutionary developments as “innovative”.

            “Mike, you’re a determined Type-A materialist, so you’re going to slot any argument I can make into that worldview.”

            Wyrd, I’m a skeptic, who applies the same rule you’ve given to other people recently: extraordinary claims require extraordinary evidence. Saying that there is exotic physics or supra-physical things going on in the brain is an extraordinary claim. There are far more grounded explanations for feeling awe when looking at Saturn or why whales breach. We shouldn’t go for the exotic stuff until the more mundane explanations have been thoroughly eliminated.

            “We seem to agree (because you take the problem and discussion seriously) there is something to be discussed.”

            For me, the issue is worth discussing because so many intelligent people are troubled by it. But in my personal case it’s more the meta-problem than the hard problem that’s the issue.


          7. “Intellectual delight strikes me as a reward coming from the neural reward systems.”

            Yes, well, as I said, you’ll find a way to slot it into your worldview. 😀

            “Wyrd, I’m a skeptic, who applies the same rule you’ve given to other people recently: extraordinary claims require extraordinary evidence. Saying that there is exotic physics or supra-physical things going on in the brain is an extraordinary claim.”

            On the one hand: The idea that phenomenal experience comes from principles we haven’t yet discovered and therefore don’t understand.

            On the other hand: The idea that phenomenal experience comes, somehow, from physical principles we already know pretty well and which give no basis whatsoever for phenomenal experience.

            As I’ve said before: I think your skepticism is defensive. I don’t think anything will ever penetrate it. You don’t seem to apply it much to your own position.


          8. James,
            Just realized this was in reply to one of my comments. Do you mean teleonomic meaning is the non-functional thing Chalmers is referring to? If so, this seems close to the platonic interpretation of his views I mentioned at the end of last week’s post.


  2. An aspect that is sometimes not considered enough is that it is quite odd that a bunch of physical stuff is able to act as though it was a single entity. While it might get by at a basic level with some straightforward algorithms transforming sensory inputs into muscular outputs, it gets a big boost in capability if alongside the sensory data it can also work with a representation of itself acting in the world with continuity over time; rather like seeing itself and its capabilities in a mirror.

    The difficulty of getting inside its subjective experience is real, but here’s a possible way. Supposing I was to generate a display board showing everything the entity was able to be conscious of – what it senses, did before, could do next, feels now, might feel next and so on. Now stand yourself in front of that display and push buttons to modify what happens next to optimise how you will feel, rather like playing a computer game. I think this could give you quite a strong intuition for what it’s like to be the entity (be it Nagel’s bat or something else).


    1. Such a dashboard would be an interesting experience. And I think you’re right, it might give us a strong intuition that we understood its experience. But it wouldn’t give us its actual experience.

      For example, suppose it told us the entity was a bird experiencing magnetoreception and building models based on it. Or like Nagel’s bat, echolocation. Or perhaps a type of emotion humans never feel, such as a lion’s urge to kill all the cubs of the previous alpha male. How could we map those to our own experience?

      Likewise, consider the case of an intelligent bomb clearing robot with a strong affect to find buried bombs, even if acting on it resulted in its destruction. We might understand its compulsion on the dashboard. But could we ever actually grasp what it’s experiencing?


      1. Hey James, see if you can dig out my comment today at your site. For some reason I have problems on sites initially until approved or whatever. WordPress doesn’t seem to like me. I suppose it knows more about me than I’d like it to…


  3. “But it always comes back to the core issue. Why or how do physical systems produce conscious experience, phenomenality, subjectivity?”

    Even though this does seem to be where conversations tend to fail, to me that’s just a sign of us not having a good explanation for something that people are quite interested in. Does this make it “core”? Well for general interest I suppose it does. But the following is a question that I personally consider a great deal more interesting. This is the “What?” of consciousness. Without such an answer, psychology (or the anchor to our mental and behavioral sciences) remains extremely soft. Develop an effective answer for the “What?” of consciousness, and I suspect that science will find the “Why?” and “How?” questions at least far more sensible. It will surely be more difficult to search for anything which has more dimly defined “What?” answers.

    I’ve developed a “What?” answer for consciousness that I consider pretty good. And from this I believe that I’m able to go a bit beyond Mike’s “it’s adaptive” explanation for the “Why?” as well. (I believe Chalmers considers this to be a second hard problem of consciousness, not that I do.)

    I consider the brain to effectively function as a massive non-conscious computer which outputs a relatively tiny conscious computer. One of the three varieties of input to the tiny computer, is subjective experience.

    So then why is it that this second form of computer was needed to augment the first? What could it do that a neuron based computer alone could not? The only reason that consciousness exists, I suspect, is because it provides natural autonomy by means of an agent. Without existence mattering to an entity, an agent should not exist. I consider this stuff (or qualia, sentience, value, what it’s likeness, and so on), to be the most amazing stuff in the universe. Apparently in more “open” environments, evolution is unable to otherwise program for productive enough contingency function. Thus I believe it’s impossible for philosophical zombies to function as we do.


      1. “Eric, in what sense does the first computer output the second computer?”

        Great question James! Ultimately this computer exists in the sense that I experience existence, and I presume you do. And fortunately this gets into the “What?” of consciousness, or the very thing which I believe I have a pretty good answer for.

        I was going to say outright “No” to your simulation suggestion, as in “The Matrix”, but then upon reflection, I couldn’t. Indeed, all this phenomenology that I experience is real to me, though physics tells me that it’s ultimately just a show. The Matrix is simply a second order simulation, or even third order given that it’s (1) a movie based account of (2) people living simulated lives, whose (3) input senses have been subverted.

        Regardless of all that, imagine a causal place very much like our universe, though here nothing feels good or bad to anything. Then “life” evolves, though there is still nothing it is like to be anything. Life then evolves central organism processors, and yet still no teleological function exists. Well I’m saying that something is able to happen in these processors which creates something other than it, that feels good/bad. I consider “this most amazing stuff”, essentially the fuel which drives the simulation of “real” existence. And what’s the evolutionary use of it? Here the “agent”, rather than standard pre-programmed information, decides what to do. Far greater autonomy should thus become realized.

        Beyond this “value” input I consider there to be “senses” and “memory”. The conscious processor interprets them (which I call “thought”) and scenarios are constructed to figure out what will make it feel best each moment. The only conscious output I know of, is “muscle operation”.


    1. Eric,
      If by “the what”, you mean the definition of consciousness being investigated, then I agree. I’ve made that point many times. But which definition of consciousness in particular does your model shed light on? And how?

      “I consider this stuff (or qualia, sentience, value, what it’s likeness, and so on), to be the most amazing stuff in the universe.”

      You seem to take this “most amazing stuff” as a starting point. Aren’t you interested in how it comes about? Does your model shed any light on it? If so, how?


      1. Mike,

        “But which definition of consciousness in particular does your model shed light on? And how?”

        Well my own dual computers model of course! And it does so by describing the model that I’ve developed and consider generally useful.

        “You seem to take this “most amazing stuff” as a starting point. Aren’t you interested in how it comes about? Does your model shed any light on it? If so, how?”

        Well I’m not exactly uninterested in the “How?” of it. It’s more that I’m not actually optimistic that such an answer will be achieved, or at least not any time soon. But let’s say that an experimentally verified answer were somehow achieved tomorrow. How much biochemistry and neurology would I need in order to grasp it? I have far too much respect for such matters to imagine that this answer would make sense to me. Could any human today make sense of such an answer? Perhaps, though I wouldn’t bet on it.


        1. Eric,
          What I think you should be thinking about is how your theory adds epistemic value. What does it offer that Global Workspace, Higher Order, Attention Schema, or other theories don’t? Each addresses its own specific definition of consciousness, just as yours does, although these others are deeply enmeshed in cognitive neuroscience, which yours isn’t. So, what are you offering to spark the revolution you want to begin?

          “Could any human today make sense of such an answer? Perhaps, though I wouldn’t bet on it.”

          That seems rather defeatist for someone offering theories of consciousness. You seem to be assuming that because you haven’t done the work to understand the nervous system, that no one else is capable of doing so.


      2. Mike,
        I believe that my theory creates epistemic value, by describing the nature of human function (among other beings). I’m not aware that Global Workspace, Higher Order, Attention Schema, or anything else provide answers which are at all similar to my dual computers model.

        For example, by chance yesterday someone gave me a “like” in a situation where I used my theory to account for a circumstance that Paul Torek proposed a couple weeks ago.
        https://selfawarepatterns.com/2019/06/16/what-is-it-about-phenomenal-consciousness-thats-so-mysterious/#comment-30423
        I doubt that any of these other models get into such architectural detail regarding our function, let alone provide my specific answers. Regardless, that is where my ideas are to be assessed, or the “What?” rather than “How?” of our function.

        You imply that my psychological models need to be grounded in neuroscience. I consider this backwards however. Psychology “supervenes” upon neuroscience, which is to say that neuroscience is the more basic stuff which constitutes psychology. Essentially psychology is an end result which neuroscience must account for rather than the converse.

        Notice that we don’t learn about neuroscience by means of better understandings of chemistry. Neuroscience supervenes upon chemistry. Instead we learn about neuroscience by observing the brain, though chemistry does make it up. Similarly we don’t use neuroscience to learn about psychology, though psychology is made up of brain function.

        This gets into my “psychology as architecture” versus “neuroscience as engineering” discussion. The architect of grand buildings needn’t understand anything about engineering such structures, and perhaps even shouldn’t. Such understandings are thought to compromise architectural vision. Instead this person’s job is to design what’s needed exclusively. A specialized engineer then tries to figure out how to make such designs work. So architecture supervenes upon engineering.

        Similarly the psychologist must observe our behavior in order to understand us at this higher level. Then with valid models the neuroscientist can try to develop brain based explanations for such function.

        (Lisa Feldman Barrett’s theory of constructed emotion is a sorry example of an engineer attempting to do architecture. She decided that we don’t feel emotions unless we are taught to, so for example newborn babies have no emotions. And why did she decide this? Because she couldn’t find neural correlates to emotion.)

        If you suspect that my psychological models aren’t very good, then challenge them on those grounds. My ignorance of neuroscience is, if anything, beneficial for my own particular role in this business.


        1. Eric,
          “You imply that my psychological models need to be grounded in neuroscience.”

          It’s completely up to you whether it needs to do that. If your theory is only in the realm of psychology, then that’s fine. Pure psychological theories can have a lot of value, for the domains they address. I get the impression that your model of the mind is really a launching point for a moral philosophy (or a philosophy addressing matters normally addressed by moral philosophy). In that domain, it might have insights. (Although that’s a whole other topic.)

          But you do keep bringing it up in discussions about neuroscience, or on posts like this one on the intersection between the phenomenal and neuroscience. If you don’t want the neuroscience tie in questions, then you might consider which topics your model is relevant for.

          “Psychology “supervenes” upon neuroscience”

          Can’t say I’m a fan of the word “supervene”. It expresses something in reverse of the actual causal flow. Psychological phenomena are neural phenomena from a high level (at least in the mainstream scientific account). Yes, that means that psychology supervenes on the neurology, but it doesn’t determine what we’ll find there. The intersection is what the entire field of cognitive neuroscience is all about. But it’s a field that requires knowledge of both psychology and neuroscience (and biology, and chemistry, and…well you get the point).

          “A specialized engineer then tries to figure out how to make such designs work.”

          I think you may be conflating a construction practice with scientific investigation. In scientific investigation, we don’t rigidly start from the top and only explore the lower levels in terms of that top level. We explore all levels and attempt to reconcile them. Someone only looking at the top level can’t dictate what will be learned about the lower level.

          “My ignorance of neuroscience is, if anything, beneficial for my own particular role in this business.”

          For fundamental theories of the mind, I disagree, but this is an old topic between us.


      3. “If you don’t want the neuroscience tie in questions, then you might consider which topics your model is relevant for.”

        Mike,
        I do consider my model relevant to this particular post. I’ve been quite clear that it doesn’t address the “How?” whatsoever, but it does get into the second issue, or the “Why?” Regardless the point of my initial comment was more general to each. Without an effective “What?” answer, it would seem that the “How?” and “Why?” should be quite arbitrary matters. You agreed about that. From then on I’ve been fielding your questions and concerns regarding my project. Your inquiry has helped me illustrate some of my ideas’ subtleties. (And with an assist from JamesofSeattle!) But when might we get beyond lecture level understandings, and into practical level understandings? That’s what I desire most. For example you said:

        “I get the impression that your model of the mind is really a launching point for a moral philosophy (or a philosophy addressing matters normally addressed by moral philosophy).”

        (By the way I’m pleased that you didn’t bill it as straight “morality”, since I consider my ideas “amoral”.) My project is substantially more ambitious however. For one thing it’s about improving science through generally accepted principles of metaphysics, epistemology, and axiology. Then it’s also about psychology. Then finally it’s about my dual computers brain architecture. Regardless of whether or not my model wins out in the end, it seems to me that effective brain architecture must come from above neuroscience.

        (Of course a neuroscientist might develop effective brain architecture, though I believe that such a model would need to be psychological in nature. Otherwise I don’t think it would be effective to call it “brain architecture”.)

        I hear you regarding this “supervenes upon” phrase. This came up quite a bit back at Massimo’s, though I’d groan since I don’t think I always grasped what was being said. And in truth the only time I’ve ever used this phrase myself was to you just above. Apparently you taxed my little mind so hard, that this tool somehow emerged for me. Of course evolution builds things “bottom up”, though the only way that we can figure things out is “top down”.


        1. Eric,
          “But when might we get beyond lecture level understandings, and into practical level understandings?”

          Well, one step might be to more explicitly connect your ideas to the topic of the post. What additional information does it provide for why people have subjective experience? Specifically, what information does it provide that we don’t already get from the conventional scientific understanding? For example, in neurobiological circles it’s already accepted that consciousness gives an organism the ability to respond to novel situations, and that the valence punishment / reward systems feed into that. How can your model help us move forward on this?

          “(By the way I’m pleased that you didn’t bill it as straight “morality”, since I consider my ideas “amoral”.)”

          I’ve slipped up there before and remember your objections to it. 🙂


      4. Mike,
        What explicit information does my model provide for the “Why?” of subjective experience’s evolution? Hmm…. Well this is a secondary model which fits in quite nicely with my other models. Let’s see what I can come up with for you about that…

        Non-conscious life possesses no agent, no teleology, nothing that figures anything out. Therefore in order for such life to better survive, it essentially needs to be “programmed” to effectively deal with relevant circumstances — just as a robot can be programmed to function in all sorts of ways. (And of course here I merely mean that natural selection tends to promote the survival of the stuff which is programmed more effectively.)

        I consider genetic material to be reality’s first “computer”, and here is where natural selection should have emerged. The second would be the central organism processor. This machine should have evolved programming to deal with more and more situations under the dynamics of natural selection, and yet still without agency. Under my dual computers model an agent is outputted by this “brain”, and by means of creating the most amazing stuff in the universe — value itself. So why does it do so?

        I suspect that as environments became more “open”, the less effective contingency responses could be programmed for. (Here by “open” I mean the opposite of a quite limited domain, such as the game of chess.) If more open environments naturally demand exponentially more extensive programming, then there should have been a point where circumstances weren’t dealt with all that effectively by means of non-conscious computation.

        This might have been the end — a natural road block. A topping out. Apparently not however. Evolution found a way to get around the need for exponentially greater non-conscious computation, or something which couldn’t forever be matched in a progressively more open world. This seems to have occurred by outputting value, or agency, or something with purpose. Thus this other entity was charged with teleologically figuring things out under those problematic open environments.

        Regarding your question of evidence, this is displayed by the computers that we build. Where do they function well? In more closed environments such as the game of chess. Where do they function poorly? In more open environments that they thus couldn’t be programmed to sufficiently address.

        This is a non-neuroscience based “Why?” explanation that I’ll stand up against any neuroscience based account, such as the one proposed by Todd Feinberg and Jon Mallatt. As I recall they propose that consciousness was needed for distance senses. What they seem to have overlooked is that even our idiot robots have distance senses. So my model remains consistent with standard evidence, while theirs does not.

        Unfortunately I’m still just lecturing here however. You’ll need to give my models a try yourself in order to gain a practical level grasp of them. To do so you could try to predict the sorts of things that I’d say about a given matter, and then check to see what I actually have said or would say about it. That approach should help refine your understandings in ways that lectures simply do not. It’s like how a physics student must refine mere lecture level understandings by attempting actual physics questions.


        1. Eric,
          It doesn’t sound like you understand Feinberg and Mallatt’s point about distance senses. (Possibly because my summarized retelling doesn’t do it justice.) The reason they focus on distance senses is because such senses imply complex image maps, models of the environment, inner worlds. And their criteria for affect consciousness: global operant learning, value based trade offs, and others, are slanted toward novel situations. In other words, to meet exactly what you’re talking about, dealing with open environments.

          So they’d agree with you about why consciousness arose, and could point to a rich history of evolutionary theories along those lines. They’d also add a lot of analysis of the anatomical brain structures, at least in vertebrates, that are involved in generating what you call “value”.

          My point is that there is a rich literature dealing with this stuff, and much of it has what you’re discussing, but also ties it in with known psychological and neurobiological models. I think you’re handicapping yourself by not engaging with it. Granted, doing so takes a lot of work and a lot of expensive books, so it’s not for everyone, but it seems hard to offer new insights if you’re not thoroughly familiar with the ones that are already there.


      5. Mike,
        In my last comment I took your suggestion to heart and explained why it is, according to my own brain architecture model, that consciousness evolved. Once again, it’s essentially that more open environments demand exponentially increased computational complexity that evolution couldn’t forever match, but got around by “subcontracting out” certain things to an agent. The first computer effectively creates value for something other than it to experience, and it’s that special second computer’s job to figure out what to do under various more “open” circumstances. Whatever computations that the first computer does, I suspect that the “virtual second” which it creates (thanks James) does less than a thousandth of one percent as many. (That could be my own “Moore’s Law” type of rule.)

        Then I got a bit spicy by matching up my answer to what I perceived to be F&M’s answer, or “distance senses”. You told me that this was not actually their answer, and even said that they’d agree with mine! Thanks! Furthermore you went on to imply that my project would fail if it weren’t tied to “known psychological and neurobiological models”. Thus I’ve gone through some of our related discussions over the years, as well as yesterday read “Consciousness Demystified”. Could it be that F&M would indeed endorse my own “Why?” answer, and perhaps even my project in general? Well um….

        Now that I’ve read their book it seems clear to me that their “engineering” could be reinterpreted to support my own project. The issue however is that the structure which they’ve actually built, was done through standard “single computer” architecture. Essentially they did the leg work, though without my dual computers model for guidance.

        For example here’s one stab that they made at the “Why?” question:

        “But life does not have a single cause, nor can life be simply reducible to its parts. In a similar way, sensory consciousness is ultimately a natural feature first of life, then of reflexes, and finally of complex brains. In this regard, why consciousness exists is no more mysterious than why life or embryonic development exists, but with the added feature of subjectivity.”

        Beyond their theme that “life” is special (which doesn’t sit well with either of us), here their “Why?” had an apparent answer of “Because.”

        If their project were redone to my own specifications, would F&M endorse it? That is possible. We’re all self interested products of our circumstances, and so with a compelling enough case they’d surely consider my project to also be in their own best interests. I doubt that I have the muscle alone to present a case that they’d find compelling enough however. Of course I don’t need their endorsement or permission to reinterpret their information to my own uses, though that would always be my goal. In the meantime however I’ll continue trying to help people like yourself gain a working level grasp of my ideas. Without this particular attribute, my project is definitely hopeless.

        And thanks for that “Why?” endorsement, as well as your continued insights. I feel like this has been a hopeful turn of events. For good or ill, hope is what drives my passions.


        1. Eric,
          Good to hear you read Consciousness Demystified, although even though it’s relatively short, running through it in a day seems awfully fast. It’s got a lot of heady concepts in it.

          I can’t resist widening the quote you provide, because I think it provides important context.

          First, from the standpoint of neurobiology, wondering why the neural processes that create consciousness don’t go on in the dark without consciousness is a bit like asking Ernst Mayr why the biological processes that create life don’t go on without the integrated system feature of life.14 Or like asking why the millions of multiplying embryonic cells that develop into a human body don’t do so without development. These questions seem tautological at best and absurd at worst. Life is a naturally created and integrated system feature of atoms, molecules, tissues and organs, and so on. But life does not have a single cause, nor can life be simply reducible to its parts. In a similar way, sensory consciousness is ultimately a natural feature first of life, then of reflexes, and finally of complex brains. In this regard, why consciousness exists is no more mysterious than why life or embryonic development exists, but with the added feature of subjectivity. And we have explained the subjectivity part as stemming from embodiment and the ontological irreducibilities

          Feinberg, Todd E. Consciousness Demystified (The MIT Press). The MIT Press. Kindle Edition.

          Note that the “ontological irreducibilities” are the subjective / objective divide I discuss in the post.

          I’d forgotten about the point early in the paragraph, that asking why the components of the system don’t function without providing the behavior of the whole system is similar to asking why 2+1 must equal 3.

          “Essentially they did the leg work, though without my dual computers model for guidance.”

          Based on our prior discussions, don’t you equate the second computer with subjective experience? If so, then aren’t they discussing the second computer when they discuss subjectivity? (Not that they spend a lot of time on it in particular in the book.)

          I guess my question is, other than not calling it “a computer”, what are they, or similar neurobiological accounts, missing? Or maybe asked another way: what work or insights is the concept of the second computer providing?

          “And thanks for that “Why?” endorsement, as well as your continued insights.”

          Thanks, but I really just noted that the explanation you provided matches other widely accepted theories.


      6. Mike,
        You and I will naturally read this book with different sets of eyes given our different sets of interests — you from the perspective of an “engineer”, and me from the perspective of an “architect”. And indeed, most of the heady stuff for you, I presume, was matching their accounts of brain anatomy with your own account of it. That should have been working level stuff for you to test practically, though simply lecture level stuff for me. Architecture is my own forte, and indeed they did provide me some to assess. Fortunately it’s the very thing that you’re now questioning me about.

        To my knowledge Ernst Mayr did not provide my own “Why?” answer for the emergence of “life”, and certainly F&M have not provided my own “Why?” answer for the emergence of subjective experience. (To turn the shoe around on F&M, is it not tautological to reply to a “Why?” question, with a “Because.” answer?)

        You have my non-tautological account for why evolution created subjective experience. As for “life”, I suspect it emerged given the natural relationship between reality’s first form of computer, or genetic material, and the process of evolution which I’d say genetic material effectively spawned. If some of these computers could reproduce themselves better than others could, then that’s what should naturally happen. And of course it’s quite rare for planets to host life, though our planet seems deeply infected with the stuff.

        There is a subject / object divide in the sense that reality must objectively exist, and yet any subject which perceives reality can thus only do so subjectively rather than objectively. This is to say that perceptions can only exist through caricatures or perhaps cartoons of what’s real. What my dual computers model does for this well known conundrum, is provide specific parameters regarding subjectivity. Here I propose that it’s possible for a computer without subjectivity, to output a punishment / reward dynamic, or all that’s valuable to anything anywhere. This “strangest stuff in the universe” shall exist as the agent, and even if there is no associated functionality. The simulated conscious computer is modeled in terms of a valence input, a sense input, and a memory input. (F&M instead place these as separate varieties of consciousness itself, though without realizing that a “memory” form could have been included as well.) Such inputs are then interpreted and scenarios are constructed (“thought”) in the quest to promote instant valence. The only non-thought output that I’ve been able to discern is “muscle operation”.

        I consider this an effective “What?” of consciousness explanation, or something which is needed in order to effectively consider those supposed hard problems of consciousness. Furthermore once broad general psychology yields associated brain architecture, neuroscientists in general should have a better platform from which to build. Like it or not, the architect must inform the engineer. If we had a group of professionals with generally accepted principles of epistemology, among others, this understanding should become more clear in science, or another problem that I’d like to help fix.

        “Thanks, but I really just noted that the explanation you provided matches other widely accepted theories.”

        I think what you mean here is that my position remains consistent with what you know of science, not that you know of various people who provide my own “Why?” of subjective experience explanation. If you do ever come across anyone who proposes that or any of my other presumably unique positions, please do let me know! The work of F&M may be quite helpful to me, though I’ll need as many potential allies as I can find.


  4. “We have experience because it’s adaptive.”

    I don’t know if you realize it or not Mike, but you have just stumbled onto an aspect of consciousness that is groundbreaking: it’s called adaptiveness. The reason, and the only reason, we see the novelty of our phenomenal world which is expressed through the evolutionary process is because of adaptiveness, and that adaptiveness is underwritten by experience itself. And experience itself is only realized because of the underlying architecture of consciousness. An architecture of consciousness will accommodate a limited degree of self-determination within an otherwise determinate boundary, a boundary which is defined by the properties of any given discrete system. A limited degree of self-determination within any discrete system will result in novelty. In the evolutionary process, self-determination is the heart and soul of novelty.


    1. Lee,
      We definitely agree on the importance of adaptiveness. But I get the impression it might mean more to you than it does to me.

      When you say “self-determination”, what do you mean? Are you talking in some contra-causal free will sense? Or just in the compatibilist sense of a system that can predict outcomes and make decisions based on those predictions?


      1. “Are you talking in some contra-causal free will sense? Or just in the compatibilist sense of a system that can predict outcomes and make decisions based on those predictions?”

        Determinism versus free will is a raging debate in its own right Mike. So, I would say that both of those scenarios are the dynamics of consciousness, both of which contribute to the novelty of expression.


      2. “We definitely agree on the importance of adaptiveness. But I get the impression it might mean more to you than it does to me.”

        Here’s my point Mike: Do you think the adaptiveness which defines the entire evolutionary process would be adaptive if it wasn’t conscious? And if so, how? (other than magic, because you’ve already stated that science is magic that works)


  5. “But, some will say, none of this answers the “why”? Why does all this neural processing come with experience? Why doesn’t it just happen “in the dark”? Why aren’t we all philosophical zombies, beings physically identical to conscious ones, but with no experience?”

    Yes, I think that is still a question.

    I’m not sold on the field mouse analogies, or on the idea that some complex navigational capability would require consciousness. As I pointed out elsewhere, we already have cars and other devices that can navigate complicated environments without signs of consciousness, and I am not sure some internal subjectivity light would need to come on for these to reach field mouse or better capabilities.

    The answer may be simpler and not so exotic. We don’t think that current machines are conscious, but we do think various biological forms are. We can disagree about which ones but, at least, we think humans are conscious (for the most part). So what is it about biological entities? Of course, they are the product of millions of years of evolution. But what else? They do not require a high amount of energy to operate. Energy is at a premium because it requires a lot of work to obtain and metabolize. Consciousness may be the cheapest biological solution, in terms of energy, to implement complex adaptive behavior.

    This also ties to Hoffman’s ideas that our consciousness is a hack or a bunch of hacks on reality. Hacks are the cheapest way to implement consciousness, although if we extrapolate from software development, they usually come with a long-term cost.

    Liked by 1 person

    1. “we have a cars or other devices that can navigate already complicated environments without signs of consciousness”

      I think you might be overestimating the sophistication of those cars. They do build exteroceptive models of their environment. But those models appear to be hooked to rules-based engines. In other words, they’re roughly doing what our midbrain does.

      What they’re missing is imagination, the ability to run through multiple action scenarios and assess each one in terms of valence, enabling global operant learning and value-based trade-offs. I think that’s what’s needed to convert the automatic rules-based actions into affects (felt emotions).

      “So what is it about biological entities? Of course, they are the product of millions of years of evolution. But what else?”

      I actually think it’s even less exotic than what you’re describing. Our intuition of a fellow consciousness is triggered because we see them having urges and reactions similar to ours. We feel a commonality with them that we don’t feel with machines. It’s why some people see more consciousness in a C. elegans worm than in a self-driving car.

      “This also ties to Hoffman’s ideas that our consciousness is a hack or a bunch of hacks on reality.”

      I think Hoffman’s right about that part. Although it’s not even hacks, but a series of accidents that are selected for or against.

      But the leap he then takes to idealism is, I think, unfortunate.

      Like

      1. You did kind of gloss over my main point: Consciousness may be the cheapest biological solution in terms of energy to implement complex adaptive behavior.

        I wasn’t trying to make any point about why we think C. elegans might be conscious.

        But since you raised that point, I would offer another reason why we might think other organisms are conscious (although not particularly C. elegans). If consciousness is the evolutionary solution to complex adaptive behavior, then we would expect organisms, once they reach a certain level of behavioral complexity, to have some degree of consciousness. This conforms very much to our intuitions about other species. We don’t normally think of snails or worms as conscious, but cats, dogs, and crows we might more likely think so. On the other hand, we wouldn’t think a self-driving car to be conscious, because it isn’t biological and would have had no reason to implement a cheap solution.

        Like

        1. Biological systems are far more efficient than technological ones. The brain reportedly operates on about 20 watts. But I’m not sure, in and of itself, that relates to our intuition of a conscious system, except perhaps in some of the behavior of the system in conserving its energy when it can. But a dog playing is not really conserving energy, even if we can find evolutionary explanations for play (practicing and honing techniques, etc).

          In terms of which species we tend to regard as conscious, distance senses, particularly eyes, seem like a big part of it. Todd Feinberg and Jon Mallatt, in their book ‘The Ancient Origins of Consciousness’, point out that this isn’t a bad criterion, since camera-like eyes imply mental imagery, although for simple animals, I’m not sure it necessarily implies imagination and sentience.

          Like

          1. You’re still not quite grasping what I am saying. At least, I think you are not.

            Let’s go back to the question of why consciousness exists at all.

            We probably will be able to create a machine that can do everything a human being can do, and it might appear conscious. But since we designed and built it, we would know whether we built in a capability for subjective experience (assuming we would know how to do that). We probably wouldn’t build this in because it would be redundant and unneeded. The machine could do everything we want without subjective experience.

            In the same way, evolution could have developed complex organisms without subjective experience. Likely plants and simple animals do not have subjective experience. They interact with the environment through instinct and reflexes. Evolution could have continued in this vein to create more and more complex animals by introducing new neurological capabilities, but not consciousness, to the point that eventually human beings (or what looks like and acts like human beings) evolve without consciousness.

            So why didn’t evolution take this path? It is related to the metabolic/energy requirements of bodies and brains. Consciousness must be a cheaper biological solution in terms of energy to implement complex adaptive behavior than lack of consciousness. And hacks and short cuts must be the primary way it is implemented to save even more on energy.

            The trade-off between metabolic requirements and brains is somewhat understood in human evolution. Chimpanzees, and presumably distant human ancestors, have a smaller brain and a longer digestive tract than humans because their diet is not calorie rich and extracting calories from it requires a lot of energy. Humans, by learning to cook starches and by consuming meat, developed a shorter digestive tract, which enabled them to develop a larger brain that consumes more calories relative to body size than that of any other animal.

            This same sort of trade-off must have been made in the consciousness/no consciousness “decision”. Consciousness must require less energy than implementing the same behaviors without consciousness since any energy required for complex behavior must be balanced against the energy requirements to keep the body itself alive. Evolution would select for the more efficient way of implementing complex behaviors and consciousness must exist because it is more efficient than the alternatives.

            Like

          2. You’re making the assumption that we could build a system that could do everything we can do without it being conscious. If such a system could do all that, we could never be sure it wasn’t conscious via some alternate architecture, particularly if it had the ability to “mimic” discussing its own subjective experience for an extended period of time.

            That’s not to say I don’t agree that consciousness is an energy saving mechanism. In principle, organisms could have accrued increasingly more reflexive instincts to handle situations, essentially an increasingly vast lookup table. But eventually the substrate to hold that would have been too large, not to mention processing speed would degrade as it grew. By providing increasingly sophisticated prediction mechanisms, the brain found a way to more efficiently handle the diversity of decisions motile organisms needed to make.

            But robots are going to face similar constraints. We can only add so many programming directives to a self-driving car. Eventually, to make it able to handle all the situations on the road, it will likely need the ability to simulate various scenarios to see which best meet a relatively compact set of goals. Or if we manage to avoid it for cars, we probably won’t for robot maids.
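
            To put the contrast crudely, here’s a toy sketch (not anyone’s actual architecture; every name and number is invented): the reflexive approach needs one stored rule per situation, while the simulating approach keeps a compact predictive model and searches it at decision time.

            def reflex_action(situation, lookup_table):
                # One stored rule per situation the lineage has ever "learned".
                return lookup_table.get(situation, "default")

            def simulated_action(situation, candidate_actions, predict_outcome, value_of):
                # A compact predictive model plus a valuation, searched at decision time.
                return max(candidate_actions, key=lambda a: value_of(predict_outcome(situation, a)))

            # Hypothetical stand-ins, just to make the sketch runnable:
            table = {("hungry", "food ahead"): "approach", ("predator", "open ground"): "flee"}
            predict = lambda s, a: "fed" if (s[0] == "hungry" and a == "approach") else "unchanged"
            value = lambda outcome: 1.0 if outcome == "fed" else 0.0

            print(reflex_action(("hungry", "food ahead"), table))                                   # approach
            print(simulated_action(("hungry", "food ahead"), ["approach", "flee"], predict, value))  # approach

            The reflexive table has to grow with every new situation, while the simulating agent only carries the model and the goals, which is the energy and storage point in a nutshell.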

            Like

          3. I don’t doubt we will eventually be able to build it without designing in consciousness. What capability do you think a machine could never have that a human has?

            If we are building it, we would know whether we had built in consciousness or not. Unless you are saying it could just magically appear, à la IIT, Skynet, or some other mechanism. It became conscious. We don’t know how.

            But consciousness is not an obvious energy saving mechanism compared to “increasingly sophisticated prediction mechanisms” that do not require consciousness. Actually consciousness seems more like a superfluous add-on.

            Like

          4. “What capability do you think a machine could never have that a human has?”

            Did I give the impression that I thought there were human abilities a machine could never have? I don’t, at least not in principle. It’s worth noting that humans are machines, just evolved ones. So the only question is whether we can design one to do everything an evolved one can do. In principle, I can’t see any reason why not.

            “If we are building it, we would know if we had built in consciousness or not.”

            I think we’ll definitely know which capabilities we’ve given it. There doesn’t seem to be consensus on which capabilities are necessary and sufficient for consciousness.

            “Actually consciousness seems more like a superfluous add-on.”

            I don’t agree with this. It’s basically the planning subsystems from the inside. It’s an integral part of the information processing.

            Like

  6. Re “But while these kinds of explorations do narrow the explanatory gap, and I think they’ll continue to narrow it, they’ll probably always fail to close it completely. The reason is that, the subjective never completely reduces to the objective, nor the objective to the subjective.” I think you are wrong to conclude this, as such a conclusion cannot be made honestly until you have exhausted all possibilities, and we are nowhere near that stage. I would also guess that you would agree that the gap between the subjective and objective (at least in bodily functions) has closed significantly over the last centuries. And what does that indicate? That the gap will never close? Or that the gap will be closed soon? How the heck could we know until we have exhausted all of the possibilities?

    Like

    1. First, note the “probably” in that statement. I leave open the possibility that we might be able to close it.

      But it’s worth noting that there are serious obstacles. The brain simply doesn’t have the wiring for us to reduce our subjective experience down to the operations of neurons. Actually completing the closure might require an emulated brain, where we can test the effect of removing or altering ever smaller sections of circuits to see what the emulated mind reports. Obviously ethics would be an issue here.

      But as I noted in the post, we can probably close it enough to bridge it with operational theories, theories that could then be tested. That might seem like a limitation, until you remember that empirical observations are conscious experiences and scientific theories are tools to predict experiences. Theories of the brain-experience connection would just be scientific theories predicting experiences.

      Like

  7. Nonetheless, the explanations we’re able to derive from these correlations will never feel viscerally right to many people. The problem is that our intuitive model of the mental is very different from our model of the brain. And unlike the rest of the body, we don’t have sensory neurons in the brain itself to help bridge the gap. So the intuitive gap will remain.

    You’re starting to sound less like a Type A materialist and more like a Type B (or C, makes no difference here; that’s for future scientists and philosophers to resolve). Come to the Light Side of the Force, Mike!

    But the question will be, what is the difference between experience and pseudo-experience?

    If you only value human or sufficiently human-like experience, it’s tempting to use a different word for the internally accessible pre-linguistic cognitive and appetitive states of a radically different being. Hence, “pseudo-experience”. But if the radically different beings are intelligent, and can talk about their internal states, it would be awfully convenient to extend the word “experience” to their states as well, while bearing in mind that we have no reason to think theirs are like ours. Either way of talking is fine, as long as you know the big picture.

    Like

    1. In truth, the distinction between Type-A and Type-B seems a bit hazy to me. The way Chalmers describes them, it almost seems like Type-B are simply more willing to talk about the subjective-objective divide, whereas Type-A tend to focus on the objective side. Although if you read them at length, Type-As do discuss the subjective.

      Someone pointed out to me that Chalmers noted that the difference between his views and Type-B might be terminology. Sometimes I wonder if that’s not the difference between all the non-substance-dualism views. Although I still think subliminal Cartesian dualism permeates most consciousness talk.

      I totally agree that whether a particular system is having experience is ultimately not a fact of the matter. Ultimately, I think it all comes down to how much of us we see in it.

      Like

  8. James, […] Do you mean teleonomic meaning is the non-functional thing Chalmers is referring to? If so, this seems close to the platonic interpretation of his views I mentioned at the end of last week’s post.

    Mike, sorta. I think Chalmers would say meaning is a necessary part of the non-functional thing, but is not sufficient. (He talks about representation, which I think is the same thing. In my terms, a representation is a representation of a meaning, aka concept). I say meaning is necessary and sufficient.

    *

    Like

    1. Normally the representation is discussed as being isomorphic with some aspect of reality. But saying it’s a representation of meaning seems like you’re adding something new in the relationship. Or maybe we’re just hitting the limitations of language?

      This is why I think representations are best thought of as predictions, or more accurately, prediction frameworks. It seems to break us out of the intentionality recursion.

      Like

      1. [sorry for slow response … having bandwidth … and brain explosion issues]

        A large problem in this discussion is language. We’re at a point where we need to be precise, and so using words like representation gets tricky. Using the language of semiotics could help, because more of the parts are broken out, so I’m going to try to put things in that context.

        Quick (possibly flawed) semiotics review: a “sign” has three parts (because Charles Peirce had a thing for three’s): object, sign vehicle, interpretant. The object is the “intentional object”, the concept, that the sign is “about”. So for the word “apple”, the object is the concept of apple. The sign vehicle is the physical thing, so, pixels on your screen. The interpretant is the result of interpretation, whatever that might be.

        So you know my paradigm:
        Input —> [mechanism] —> Output

        The semiotics version translates like this:
        [object] – – – > SignVehicle —> [?] —> Interpretant

        When I think of “representation”, I equate it with the SignVehicle. I say the pixels represent the idea of apple. But when I read Chalmers’ description of representationalism, I think he equates representation with the whole thing, what Peirce calls the sign, what I call an event in which a representation is interpreted.

        So the “functional” part is just the Input—>Output, SignVehicle—>Interpretant. The “object” (what I call “meaning”) is a necessary part, but it’s not in the functional part. And again, Chalmers thinks there must be some additional non-functional part.
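
        If it helps to make my usage concrete, here’s roughly how I’d spell the triad out in code (just a sketch; the names are mine, not Peirce’s):

        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class Sign:
            obj: Any           # the intentional object, the concept the sign is "about"
            vehicle: Any       # the physical token, e.g. the pixels spelling "apple"
            interpretant: Any  # the physical result of interpretation

        def interpret(vehicle: Any, mechanism: Callable[[Any], Any], obj: Any) -> Sign:
            # The functional part is just vehicle -> mechanism -> interpretant;
            # the object (what I call the meaning) rides along but isn't in that path.
            return Sign(obj=obj, vehicle=vehicle, interpretant=mechanism(vehicle))

        word_sign = interpret("apple", mechanism=lambda v: f"thought-of-{v}", obj="the concept APPLE")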

        Now when you say “representation is isomorphic with reality”, I’m not sure what you mean. Are you saying the representation shares some physical morphology, or shares some other physical relation to some other physical thing? Does the above clarify the difference in how Chalmers and I are using the term?

        *

        Like

        1. JamesOf Seattle
          In your terms, I would say consciousness results when all of (SignVehicle -> Interpretant) is treated as an ‘Object2’ which can either be itself processed through (SignVehicle2 -> Interpretant2), or can be modified by the Interpretant of another object.

          Therefore in these terms the subjective experience referred to in the blog post arises from an entity that is able to read and write its own structure – that being its own configuration of (SignVehicle -> Interpretant) components.

          My own effort now is to implement something conscious in software, as I think we are at the limits of what can be discussed well using words.

          Like

          1. Peter, I think I understand what you’re saying, and I think that makes you a Higher-Order-Thought theorist. But I want to be clear on your understanding, and you make it harder by putting it into the abbreviated semiotics version. The thing that bothers me about the semiotics version is that it leaves out the mechanism, and I think that’s what you’re talking about when you speak of an entity that reads/writes its own structure: an entity that “writes” new mechanisms that it can then use.

            So when you mention “Object2”, I see semiotics objects as abstractions (patterns), and I would not say that such an object can be processed. I think I would translate what you said as this:

            Given a psychule1 (pre-psychule?): Input1—>[mechanism1]—>Output1, where the “meaning” of the Input is Object1, the entity is conscious if there is a mechanism2 such that some part of Output1 is the Input2 and the Object2 of that Input2 is the whole of psychule1. It’s also possible that mechanism2 is actually part of Output1, i.e., mechanism2 did not exist until Output1 happened.

            Do you think my interpretation of what you wrote is correct?

            *
            [I’m also gonna try to program this into my robot, if and when it comes]
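
            Something like this, maybe (a bare-bones sketch of that condition; every name here is invented):

            # Psychule1: Input1 -> [mechanism1] -> Output1, where Input1 "means" Object1.
            def mechanism1(input1):
                return f"output1({input1})"

            # The entity counts as conscious (on this account) if some mechanism2 takes part
            # of Output1 as its Input2, and the Object2 of that Input2 is psychule1 as a whole.
            def mechanism2(input2):
                return f"noted: psychule1 just produced {input2}"

            output1 = mechanism1("red-round-thing")   # Object1 = the apple out there
            output2 = mechanism2(output1)             # Object2 = psychule1 itself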

            Like

        2. JamesOfSeattle
          Very close to that, although I would see Object2 not as part of Output1 but as a separate output representing Psychule1 as a whole, or the parameters of Mechanism1. That means I can read the state of Mechanism1 and write to it to change it (a rough sketch follows below).

          Those changes might be things like:
          – Psychule 1 is switched on or switched off
          – Psychule 1 is taking inputs from a different set of other outputs
          – Psychule 1 is making different distinctions between its inputs in order to generate outputs

          For example, if I see a dog, my brain is wired up to deal with there being a dog present, and I can read the wiring up of my brain to see that fact and to read its existence, colour or location.
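
          As a rough sketch of what I mean by reading and writing the mechanism’s state (illustrative names only, not taken from an actual implementation):

          class Psychule:
              def __init__(self, sources, distinction):
                  self.enabled = True
                  self.sources = sources            # which other outputs it takes as inputs
                  self.distinction = distinction    # how it carves those inputs into an output

              def run(self, signals):
                  if not self.enabled:
                      return None
                  return self.distinction([signals[s] for s in self.sources])

          # A supervising process can read this state ("there is a dog detector, it is on,
          # it is wired to vision") and write to it: switch it off, rewire it, or swap the
          # distinction it draws.
          dog_detector = Psychule(sources=["vision"], distinction=lambda xs: "dog" in xs[0])
          print(dog_detector.run({"vision": "dog in the garden"}))   # True
          dog_detector.enabled = False                               # write: switch it off
          dog_detector.sources = ["hearing"]                         # write: take different inputs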

          Like

          1. This is very cool, so I’m going to pick some nits. Object2 is an abstraction, and an output is a physical change, so saying Object2 is a separate output bugs me.

            Another point which is not obvious: you can make a physical change to mechanism1, but then it is no longer mechanism1. You can talk about a system with states such that the system in state1 embodies mechanism1 and in state2 embodies mechanism2. It’s then possible that the “meaning” of input1 for mechanism1 is object1 whereas the meaning of that same input1 for mechanism2 (i.e., the system in state2) is a different object2.

            Make sense?

            *

            Like

        3. James,
          I’m finding much of this a bit abstract. So I’m going to describe how I see a perception of an apple working, and maybe you can tell me which parts are the object, sign vehicle, and interpretant?

          Light bounces off an apple and enters the eye, creating a two dimensional pattern on the retina, exciting the red retinal cones (if it’s a red apple) in a certain pattern. This results in ganglion cells firing in a topologically related pattern, which send spikes up the optic nerve to the LGN (lateral geniculate nucleus) in the thalamus, which synapse onto neurons going to the V1 region of the occipital lobe. At this point, the firing pattern could be said to be topologically mapped to the pattern that was excited on the retina, which is caused by the pattern from the photons that bounced off the apple. So we can say that the pattern in V1 is isomorphic, structurally related, to the actual apple out in the world.

          From there, the firing pattern in V1 cascades further into the visual cortex, with each processing layer becoming more selective. Some neurons in the ventral (“what”) stream become excited by the redness property, others by the shape, others by the texture, etc.

          All these patterns eventually converge on a point which, when excited, signals the apple concept. This in turn signals retro-activation back down through the hierarchies which, if they aren’t mismatched by incoming sensory information resulting in an error signal, confirms the prediction of the apple concept.

          There are lots of things here which someone could use the label “representation” for. I think many people intuitively mean that isomorphic pattern in V1. But really the overall mental concept is the entire hierarchy activated by sensory information and then retroactivated by the concept convergence.

          And we could also speak about other “higher order” patterns. Probably at least one in the parietal region for the multi-sensory consolidation of the apple concept, and another in the prefrontal regions related to any action we might entertain taking with the apple. This is probably a vast oversimplification, since there are probably many intermediate concepts throughout.

          So what in this would be the object? What would be the sign vehicle? And what would be the interpretant?

          Sorry if this is just too much into the weeds. Just trying to map your concept to what I understand.

          Like

          1. “So we can say that the pattern in V1 is isomorphic, structurally related, to the actual apple out in the world.”

            Compare Hoffman:

            “The basic claim of Interface Theory is that our representational spaces need not be isomorphic or homomorphic to the objective world W (or to a subset of W). Hence when we observe some structure in a representational space X (e.g., three dimensionality), we cannot simply infer from this observation that W must also have that same structure. ”

            https://link.springer.com/article/10.3758/s13423-015-0890-8

            Like

          2. There’s definitely no requirement that the mental image be accurate. Yet it needs to be close enough for survival. So we may not ever be able to know an apple in and of itself. Our perception of it will always be through the lens of a sensorium tuned for survival rather than accuracy. The ultimate nature of reality remains beyond our senses.

            On the other hand, we can form theories about the apple. Either it is or isn’t something we can eat, and our theory can make predictions about how it will feel if we touch it, lift it, throw it, taste it, etc. We can further refine those theories by additional careful examination, by examining which predictions are accurate and which aren’t. It may never get us to the ultimate nature of appleness, but it still has to be related to the reality in some way.

            My issue with Hoffman is he says, our senses are limited and we can’t know reality, therefore reality is an illusion. Perhaps, but if it is, the illusion is our reality, and it appears to be consistent and obey certain rules. In other words, the illusion and objective reality are observationally equivalent and it makes no real difference.

            Like

          3. “My issue with Hoffman is he says, our senses are limited and we can’t know reality, therefore reality is an illusion.”

            I don’t think that is what he is saying in the paper I linked. He’s mainly saying that our representations of the world evolved because

            “… we do not simply passively view the world, but also act on it, and moreover we perceive the consequences of those actions. In other words, it is possible to interact with a fundamentally unknown world if (1) there are stable perceptual channels; (2) there is a regularity in the consequences of our actions in the objective world; and (3) these perceptions and actions are coherently linked.”

            So the representations do not need to be isomorphic to the world.

            “An observer’s perceptual experiences can have a rich structure, e.g., a 3D structure that is locally Euclidean, and that transforms predictably and systematically as the observer acts, but this entails absolutely nothing about the structure of the objective world. This is wildly counterintuitive. We naturally assume that the rich structure of our perceptual experiences, and their predictable transformations as we act, must surely be an insight into the true structure of the objective world. The Invention of Symmetry Theorem shows that our intuitions here are completely wrong.”

            It isn’t a matter just of limited senses or reality being an illusion. It is just that our representations of reality will not conform to how reality actually is.

            Like

          4. Yeah, I think he’s overselling his thesis in an attempt to slide in his philosophical one. As I said above, it has some basis in truth. But if the geometry of our perceptions has no relation whatsoever to the actual geometry out there, then they’d be useless, and we wouldn’t be able to say anything about reality, including that our senses aren’t accurately representing it.

            It’s worth noting, based on his TED talk, that Hoffman reaches conclusions about animal perceptions not matching reality (such as a beetle trying to mate with a beer bottle) by observing their behavior with his own senses. But if his senses have no relation to reality, how is he even observing animals making those mistakes? That he can make that judgment implies a difference in accuracy between the beetle and himself, which implies a relation of some sort between both his and the beetle’s senses as to what’s out there.

            In the end, we come back to whether our theories, including personal primal ones, are predictive. It’s the only measure of reality we actually get.

            Like

          5. In my understanding, there are objects and vehicles and interpretants every step of the way. The vehicles are physical things, in this case, neurotransmitters mostly. The interpretants are also physical things and also mostly neurotransmitters. The objects are concepts, but as the process moves along the result is combined concepts, and more combined concepts.

            Another problem with understanding this is that mechanisms and sub-mechanisms can be combined somewhat arbitrarily. So let’s try this:

            (Note: arrows inside brackets represent sub-mechanisms)

            Pix1: LightFromApple (one pixel’s worth) —>[cone->ganglion->thalamus->V1]—>nt1(neurotransmitter)
            Object1=thing out there that reflects/scatters red light
            SignVehicle1=red photons
            Interpretant=neurotransmitters from V1 neurons

            If we want we could look inside the mechanism, so
            LightFromApple (one pixel’s worth) —>[cone]—>nt.01
            Object1=thing out there that reflects/scatters red light
            SignVehicle1=red photons
            Interpretant=neurotransmitters from Cone cell to ganglion (ignoring that cell in between)

            Pix2: ditto, for next pixel over

            Edge1:
            Input=(photons from Pix1,Pix2,…Pixn)—>[retina->thalamus->edge-detector]—>nt.e.1
            ObjectE1: a segment of the border of an object out there
            SignVehicle: photons from the vicinity of the “edge” of an apple
            Interpretant: neurotransmitters from the edge-detecting mechanism

            Edge2: ditto for next edge over

            Generic Object concept (not semiotics object):
            [we could go back to the photons, but let’s not]
            Input=(edge1nt +edge2nt …+edgeNnt)—>[objectDetectionMechanism]—>nt
            ObjectO: object out there
            SignVehicle: nt’s from edge detectors
            Interpretant: yet more nt’s

            I think you can imagine how this continues to get to “apple” and maybe even “apple out there located in space”. From what little I know, and you seem to indicate, all that could happen via a cascade through the cortex. Also, all of that would be “subconscious”.

            What happens next is where my (even more than the above) rampant speculation begins, and where your intuition and mine part ways. Here’s the short version:

            Input= AppleOutThereObject nt’s —>[PFC-controlled attention mechanism]—>workspaceFiringPattern (note: not just nt’s, but firing pattern of a set of neurons)
            ObjectA: Apple out there
            SignVehicle: nt’s From AppleOutThereObject (plus various things which affect attention)
            Interpretant: a firing pattern in a global workspace that can serve as input to subsequent processes and will “represent” the AppleOutThere to such processes, or will “represent” any of the subconcepts available, including, for example, objectApproximatelyHandSized.

            Easy peasy, right?
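
            And if it helps to see the chaining spelled out, here’s a toy version (the functions are pure stand-ins, nothing anatomical about them); the point is just that each stage’s interpretant becomes the sign vehicle for the next stage up:

            def cone_stage(photons):           # vehicle: red photons -> interpretant: nt signals
                return [f"nt({p})" for p in photons]

            def edge_stage(nts):               # vehicle: nt signals -> interpretant: edge signals
                return [f"edge({a},{b})" for a, b in zip(nts, nts[1:])]

            def object_stage(edges):           # vehicle: edge signals -> interpretant: object signal
                return f"object-from-{len(edges)}-edges"

            def attention_stage(obj_signal):   # vehicle: object signal -> interpretant: workspace pattern
                return {"workspace": obj_signal, "available_to": ["report", "plan", "reach"]}

            workspace = attention_stage(object_stage(edge_stage(cone_stage(["pix1", "pix2", "pix3"]))))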

            *

            Like

          6. Thanks James. This makes more sense. You’re basically accounting for the meaning of each signal.

            But I think it’s far more granular than what most people are talking about when they discuss representations. I agree that it all rolls up eventually into a convergence point with all the top level attributes. But I don’t think that’s the representation, at least as usually conceived.

            That said, when philosophers discuss representations, they usually stay clear of implementation details, so who knows?

            One nit: the detection of edges starts in the retina. Indeed, there’s interpretation of all visual signals already happening in the retinal layers. So it seems like the first interpretants might be in there.

            Like

          7. “One nit: the detection of edges starts in the retina.”

            Bah! Details! 🙂

            “I think it’s far more granular than what most people are talking about when they discuss representations. I agree that it all rolls up eventually into a convergence point with all the top level attributes. But I don’t think that’s the representation, at least as usually conceived.

            That said, when philosophers discuss representations, they usually stay clear of implementation details, so who knows?”

            The thing is that when most philosophers discuss representations, they are referring to representations to a specific mechanism, and that’s the mechanism Damasio refers to as the autobiographical self. All of the mechanisms prior to this big one are considered unconscious. That’s okay, because in fact they are unconscious relative to the autobiographical-self mechanism. But they are not unconscious relative to the mechanism that interprets them. [Note: most mechanisms have a degenerate form of consciousness, just as a single water molecule is a degenerate form of a glacier, and also a degenerate form of a snowflake, etc.].

            This was the source of Chalmers’ beef. There is evidence that this “representing” is happening, but unconsciously. He is just missing the multiple-mechanisms idea.

            For myself, I think this (my) model reconciles all the other theories. Attention mechanisms control which concepts get represented in the global workspace. The global workspace is the one place (or set of places) where a smallish group of neurons can represent an astronomical number of concepts (semantic pointers). Such representation constitutes an integration of information. The autobiographical self is the combination of the many mechanisms that can take those representations and do appropriate things with them, like report them. The doing of such an appropriate thing constitutes a higher-order thought.
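
            A toy rendering of that reconciliation, if it helps (every number and name here is invented): attention picks the winner, the broadcast is the integration, and the consumers doing something with it are the higher-order part.

            # Attention picks one concept to broadcast; the broadcast in the workspace is what
            # the downstream mechanisms (report, plan, remember) all get to consume.
            def attend(candidates, salience):
                return max(candidates, key=salience)

            def broadcast(concept, consumers):
                return {name: consume(concept) for name, consume in consumers.items()}

            winner = attend(["apple", "bus going by", "itchy foot"],
                            salience=lambda c: {"apple": 0.9, "bus going by": 0.4, "itchy foot": 0.2}[c])
            result = broadcast(winner, {
                "report": lambda c: f"I see an {c}",       # reporting it is the higher-order act
                "plan":   lambda c: f"reach for the {c}",
            })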

            Did I miss any?

            *

            Like

          8. I’ll have to give this some more thought. But there are some novel things I noted in your description:

            You regard all of the mechanisms as “conscious”, just that the one you equate with the autobiographical self is our consciousness. Is that correct? If so, it fits with my own growing suspicion that what we call “consciousness” are the circuits that can influence the language centers. I say that knowing many people will cry “eliminativist!”

            I actually don’t think the autobiographical self is conscious. Or perhaps more accurately, I don’t think most of it is conscious. I’m not thinking about every memory, belief, or value all the time. At any one time, I only have a few loaded. But then I’m not even sure what Damasio calls the core self is conscious. (And Damasio himself doesn’t regard the proto-self as conscious.) What seems to be in consciousness is my planning, not necessarily my immediate sensations, most of which seem handled habitually or reflexively.

            On the global workspace as a small number of neurons, that doesn’t match Dehaene’s description. His notion includes large swaths of the parietal region, prefrontal cortex, and cingulate gyrus: the fronto-parietal network.

            Under your description, the apple enters consciousness when a semantic pointer is created in the small number of neurons making up your version of the workspace. But what makes it conscious? If I’m looking at the apple, I could see the semantic pointer maybe triggering its appleness, but how do I consciously perceive the details of the apple?

            My own answer to this is that the appleness gets registered in the prefrontal cortex, in a higher order representation, which requests more information from lower order representations in the parietal, temporal, or occipital lobes in an ongoing conversation, a conversation which manifests as long range recurrent signalling. And the integration is close enough to the introspection mechanisms in the anterior prefrontal cortex that they can notice it.

            Like

          9. I read most of the Hoffman paper at the link, and I think Hoffman’s methodology may be flawed. I don’t think the environment he did his testing in was sufficiently close to a natural environment to draw the conclusions he does. Mainly, I didn’t see where he considers a changing environment, where one learned strategy might be good in environment A but not in environment B. Consider seasons. Maybe he addresses it somewhere else?

            *

            Like

          10. “But if the geometry of our perceptions has no relation whatsoever to the actual geometry out there, then they’d be useless, and we wouldn’t be able to say anything about reality, including that our senses aren’t accurately representing it.”

            No because:

            “In other words, it is possible to interact with a fundamentally unknown world if (1) there are stable perceptual channels; (2) there is a regularity in the consequences of our actions in the objective world; and (3) these perceptions and actions are coherently linked.”

            “That he can make that judgment implies a difference in accuracy between the beetle and himself, which implies a relation of some sort between both his and the beetle’s senses as to what’s out there.”

            Humans and beetles occupy different evolutionary niches, so the human can perceive something the beetle cannot. Also, note this statement:

            “We use evolutionary games to show that natural selection does not favor veridical perceptions. This does not entail that all cognitive faculties are not reliable. Each faculty must be examined on its own to determine how it might be shaped by natural selection.”

            He uses examples of logic and math as faculties that might be more accurate or reliable.

            Like

          1. Peter, you’ve brought in interesting and intelligent conversation. And I’m totally good with you occasionally linking to your book and mentioning that your ideas are more fully developed there. But I’d ask that you not engage in direct sales pitches.

            I very much hope you’ll stick around and continue to discuss your ideas.

            Like

          2. Sorry, all, if linking to an ebook counts as inappropriate selling; the intent was genuine, as the apple example raised is very close to something there that is difficult to explain in a short text. I will avoid doing this going forward.

            Liked by 1 person

  9. “But while these kinds of explorations do narrow the explanatory gap, and I think they’ll continue to narrow it, they’ll probably always fail to close it completely. The reason is that, the subjective never completely reduces to the objective, nor the objective to the subjective.”

    I believe trying to reduce the subjective to the objective, or vice versa, is impossible because the attempt creates a category error. You can know everything there is to know and explain everything there is to explain about my broken foot, but that doesn’t reduce anything to my broken foot. Likewise with my experience of my broken foot.

    *

    Like

      1. You start getting into deep philosophy here. It’s the difference between understanding everything about something and being something. It’s the difference between knowing everything Mary knows and being Mary. It’s what Heidegger called “Dasein”, which translates roughly (by me) as BeingThat (i.e., being that thing we’re talking about, that subject). The difference between knowing everything about the broken foot and being that subject that has the broken foot.

        One cannot be reduced to the other.

        *

        Liked by 1 person

        1. Okay, I think I understand the distinctions you’re making here.
          1. There is the actual broken foot.
          2. There is information in our brain/mind about the broken foot.
          3. There is the experience of the broken foot.

          Does that sound about right?

          I do think there are causal relationships between all three. But mapping 2 to 3 is the issue. We can map ever tighter correlations between the neural firing patterns and the pain of the foot, but ultimately we have correlations. We can probably bridge the correlations with theories, perhaps even high certitude theories, but they will remain theories.

          Of course, outside of our subjective experience, theories are all we ever get.

          Like

          1. But when a theory explains everything you need to predict all the behavior, what else is there? We have theories about electrons, and adding something extra, like a guardian angel for each electron, doesn’t do anything for us. Representationalism seems capable of mapping 2 (information) to 3 (qualia).

            *

            Like

          2. If we get a theory that predicts behavior, I’d agree. (Those troubled by the hard problem won’t, but as I noted in the post, we’ll likely never close the intuitive gap, so they may never get satisfaction.)

            But the phrase “explains everything necessary to predict” sets off red flags for me. How could anyone demonstrate that they would actually have reached that stage? Or put another way, is there any actual explanation that doesn’t improve the confidence and/or precision of predictions?

            Like

        2. So on this interpretation, Dasein amounts to self-reference?

          If that’s even close to what Heidegger meant, I can see why he thought it was so damn important. Because self-reference is a key to many philosophical issues. Which reminds me of the best philosophy book I ever read (warning: best for ideas, but the writing is often difficult, with not enough examples).

          Like

          1. Paul, I would leave out the “self”, and say Dasein amounts to that (mechanism as part of a system) which can reference. But it’s about being that thing, as opposed to observing that thing. When Heidegger talks about things being “ready to hand” for Dasein, like a hammer for a carpenter or a brush for a painter, he’s pretty much specifically excluding a self in that process.

            But hey, in some circumstances the “self” can be ready to hand, as in when you do things involving yourself, like getting a sandwich when you’re hungry, without thinking about your self as a self. [New thought for me. Thanks]

            *

            Like

    1. JamesOfSeattle
      “The reason is that, the subjective never completely reduces to the objective, nor the objective to the subjective.”

      Perhaps we could ask that the predictions made by a ‘complete’ objective model should include a prediction about what the subject should experience. That would mean that the model needs to capture what the boundary is between the subject and the thing experienced, what it is for the subject to ‘know’ something exists, and how it applies this to knowing of itself, of the world outside itself, and of the relation between the two.

      Like

      1. Peter,
        I think James got it right by focusing on behavior. How else would we know that a subject experienced something? Self report, observable reaction, or brain scans showing states similar to when self report or reaction were present, are all behavior, or based on behavior.

        In terms of theory, I do agree that connecting objective events to subjective ones psychophysically is the key. We need to narrow the correlations down to the point where the theory connecting them seems obvious and uncontroversial (even if it never seems intuitive).

        Like

        1. SelfAwarePatterns
          That works for me, as you are including in ‘behavior’ brain scans or other ways of checking how the brain is working internally. It would not be enough to look at external behaviors (e.g., moving and communicating), which is how I would otherwise have read it, as you get back to the zombie problem.

          Like

          1. Peter,
            I think the thing to realize though is that, ultimately, it’s all based on behavior. We can use brain scans, but only once those patterns have been established through correlations with behavior. In truth, even behavior other than self report first has to be correlated with self report. In short, we have to have a chain of evidence leading back to self report.

            Animal studies, for instance, can use a certain behavior as a criterion only after that behavior has been conclusively associated with human self reports of experience. For instance, we take an animal balancing multiple value-based options to be conscious because the same behavior in humans is associated with self reports of humans being conscious of their deliberations.

            All of which is to say, ultimately you can’t evade the zombie problem. But as I noted in the post, zombies aren’t really coherent.

            (BTW, my name is Mike. Although no worries if you prefer to address me as SelfAwarePatterns, SAP, or some other handle.)

            Like

        2. Mike
          I generally agree with the way the arguments are going but there are a couple of nagging related worries here. We don’t need externally visible behavior (I’ll use the US rather than UK spelling!) in order to be conscious – you could sever all motor neurons and generate locked-in syndrome where you have a conscious subject with no externally visible behavior.

          If you go beyond that by using brain scans to, in effect, treat the person as a physics experiment in which you can measure anything you like (probes, brain in a vat, controlled experiments, etc.) in order to characterise what is going on, that’s fine as a description of the physics, and in a sense accounts for everything, but not in a satisfying way.

          The missing part for me, in terms of accounting for conscious experience, is that this has not defined the thing that is conscious and how it knows that it and its world exist, in its own terms of what exists (not in physics). Therefore, to demonstrate something is conscious, we need both a model of the nature of this thing that is conscious (an architecture of patterns and processes) and a model of the contents of its consciousness; and we need to be able to map that particular model both to the physics and to what the conscious entity is able to know as a result of having that particular architecture.

          If we miss the step of describing the conscious thing in its own terms, we haven’t explained consciousness in a way that is satisfying to ourselves as conscious entities; we’ve just done the god-like physics.

          Like

          1. Peter,
            No worries if you want to use UK spelling. Many people here do.

            On locked in patients, I agree, ontologically, from the god-like view. But epistemically, our ability to detect whether they’re conscious comes down to them having neural patterns that are similar to ones we see in healthy individuals with the same stimuli.

            It’s conceivable that even without those specific patterns they’re still conscious, that maybe their senses are also compromised, and they’re just completely shut off from the world, but if so, we currently don’t have a way to detect that. Of course, a thorough understanding of the brain would allow us to detect if anyone was in that utterly shut out state.

            I totally agree with your necessary components. In truth, I seriously doubt we’ll ever have one theory that answers all the questions, but more likely a host of interacting theories, similar to how microbiology is actually a lot of different models that together give us an understanding (albeit with many issues still to work out) of the fundamentals of biological life.

            Myself, I think we get some insights from breaking the problem down into smaller components: reflexes, perception, bottom up attention, top down attention, affects, imagination, metacognition, spatial memory, etc. The smaller we can go, the less mystical the whole thing feels, and the more like a reverse-engineering project.

            Like

        1. Well, I gotta backtrack a little, cuz Mike just said this:
          “All of which is to say, ultimately you can’t evade the zombie problem.” (And stuff leading up to that.)

          I think I’m more in line with Peter here, in that I think, given the technology to follow all of the neurons in question, we could theoretically do all the work without self-report. Self-report makes locating the neurons involved in high-level concepts (like the Jennifer Aniston neuron) easier, but report is not fundamentally necessary. Compare with how we know that certain neurons are detecting/representing a vertical edge without anyone having to report said vertical edge.

          *
          [if zombies aren’t coherent, how can they be a problem?]

          Like

          1. James,
            So, if you “follow all of the neurons in question” and declare to me that you’ve mapped a conscious percept, and I ask, “How do you know it’s a conscious percept?”, how will you be able to demonstrate that it is in fact conscious vs unconscious processing?

            Like

          2. Forgot to mention the zombies. Obviously I don’t think they are a problem. The only problem is people’s perception of them as a valid concept, and using the concept as a basis of skepticism toward a system that appears to be conscious.

            Like

          3. Mike, agreed about zombies.

            And by “following the neurons” I include watching the development of the neurons, so, knowing the causal history of the neurons. You ask how would I demonstrate that I know that I have mapped a conscious percept. That’s like asking an electrician how he would demonstrate he knows which light switch on the wall turns on which light. He could draw the diagram, and show you how it matches the physical wires, and where each wire is connected, and where the power comes from, etc. But I bet you’re trying to get me to say he would demonstrate it by flipping the switch. 🙂

            Reminds me of my high school math teacher explaining how to tell the difference between an engineer and a mathematician:

            Set up — room with a hot stove, a table, and a pot of water sitting on the table
            Problem — prove the water will boil
            Engineer — takes the pot, puts it on stove, waits for it to boil, says “Done”
            Mathematician — takes the pot, puts it on stove, waits for it to boil, says “Done”

            New set up — same room, pot of water on the floor
            Problem — prove the water will boil
            Engineer — takes the pot, puts it on stove, waits for it to boil, says “Done”
            Mathematician — takes the pot, puts it on the table, says “Done”

            Point: if the mechanics and details are sufficiently understood, what’s the point of the demonstration?

            *

            Like

          4. James,
            So, you’re talking about someone who already has a complete understanding of the nervous system? Well, certainly once we have that understanding, no demonstration is necessary. But that’s tautological. Saying no demonstration is necessary if we already have the complete answer doesn’t really help us in actually getting the answer.

            The question is, what is needed to get us there? Is there any way to get there that doesn’t involve us pairing up neural states we observe with certain behaviors? And is there any way to establish whether those behaviors truly indicate conscious experience except by their similarity to behaviors that humans only exhibit with reportable states?

            Once the evidence chain is established, we can just look for the neural states, but that chain has to be there first. Or am I missing something?

            Like

          5. “The question is, what is needed to get us there?”

            The answer is an understanding of the parts. If you gave Alan Turing AlphaGo in 1950 he would have some trouble figuring out how it works, at least until you explain what the transistors are doing, and how memory is storing bits (and maybe some other tidbits). After that, I expect he would get it pretty quickly.

            Understanding what is causing the behaviors is better than just identifying which behaviors are similar.

            *

            Like

      2. Hey guys, try this:
        It seems to me that when Peter mentioned a “‘complete’ objective model”, he took things from the epistemological over to the ontological. In that case the question gets into metaphysics. Furthermore, the true naturalist believes that on this level the subjective does reduce back to the objective.

        Once in a while I like to speak as a god as well. I find it grounding.

        Like

  10. Mike, you said: “We have experience because it’s adaptive.”

    That statement raises the quintessential question: which comes first, the chicken or the egg? The entire evolutionary process, from elementary particles up the hierarchy, is grounded in adaptive-ness. Without the ability to adapt to changing conditions there would be no evolutionary process. According to our current model of physics, that adaptive-ness is defined as “random-ness”, which is a code word for magic.

    Our current model of physics is grounded in magic, magic which everyone willingly admits actually works. And everyone in the materialist camp is perfectly happy with that paradigm. I understand why the church of idealism is furious with materialism, because the materialistic ontology of magic is ludicrous. Now, that is not to say that idealism isn’t equally screwed up, because it has no coherent explanation of matter itself other than invoking magic either. Magic is certainly an explanation, but really, don’t you think it is time for us to move past that paradigm?

    Adaptive-ness is a qualitative property of consciousness, not magic. Our own empirical experience of consciousness as a solipsistic self-model demonstrates that fact if it demonstrated nothing else. It’s been fifty years since the first man stepped on the moon. It’s time to grow up and quit whining. Panpsychism: One small step for man………… One giant leap for mankind.

    Like

    1. If by “magic” you mean processes we don’t understand, then that’s definitely true. Physics is always in terms of relations and forces of primitives. The primitives themselves are in terms of lower level relations and forces. Eventually we get down to primitives that, at least for now, we just have to accept as fundamental, currently the gravitational, electromagnetic, and the strong and weak nuclear forces.

      The question is whether conscious experience is an additional force, or some composite of those other forces. I personally think it’s a composite, the integration of information by a system for planning actions.

      But if I were convinced it was unexplainable in terms of those forces, in essence irreducible, then an experiential force might make sense. The question then arises where that force resides and why, and our inability to detect its presence or absence does seem to lead to panpsychism.

      Not my view, but I can see how people get there. The good news is that the differences between our views really cancel out at the observational level.

      Like

      1. It’s all good and definitely interesting to research, as well as watching all of the handwaving that goes along with the discourse. Canceling out at the observational level is definitely the case though, because both panpsychism and magic can be used to explain what we observe. I think James Cross hit the nail on the head that materialism and idealism are irreconcilable in our current framework of realism. Of course, very few people are comfortable with, let alone willing to go down, the path of anti-realism. Anti-realism is truly a model where “spooky action at a distance” becomes the explanatory model. In spite of that spookiness, anti-realism is the only architecture that can reconcile the dichotomies of mind and matter and eliminate “all” forms of duality. One just has to get used to the “color”, I guess.

        Like

        1. This is an area where my intellectual position is different from my emotional one. Emotionally I’m a scientific realist whose interest in science is in the search for truth.

          But intellectually I’m an instrumentalist. All we can know about scientific theories is whether they accurately predict conscious experiences. This is technically an anti-realist position. But that doesn’t mean all speculations beyond observations are equal. Parsimony still matters.

          Like

          1. I have no difficulty grasping your position Mike, and I both appreciate your candor as well as respect your position. I can only assume that you have read my posts at James Cross’ website. If you are at all interested in where I’m coming from, you can get a feel from reading my comments over there.

            Thanks, and keep up the good works, it’s been fun…

            Liked by 1 person

  11. Suppose (as a thought experiment) physics does provide a correct and complete account of what is going on. What the brain would then be doing is constructing an enactable representation of its view of what is going on, but this would be a gross approximation of the physics, because that’s the best it can do. Included in that representation would be its representation of itself acting in the world, again as gross approximations, chunking up the physics into objects, properties, actions, events, causes and effects.

    Then to give an account of consciousness we would need to say how we represent ourselves to ourselves in these terms, with all the flaws this has relative to a complete representation of the physics. It would need to be constrained by the particular features of neurons because that’s what we have available to us to hold and manipulate this enactable, predictive, purposeful representation.

    It’s a bit like understanding a chess game by working with the permitted positions and moves of chess pieces on the board. That’s a high-level description of what is going on, sufficient for its purpose but not fully representative of the physics. For example, I just have to put a piece halfway between two squares and the physics still stands, but the grandmaster cries foul!

    1. I agree across the board.

      Are you familiar with Michael Graziano’s Attention Schema Theory? It explores many of the same issues you’re raising here. I’m not sure if it’s the reality, but much of Graziano’s approach resonates with what you’re describing.

  12. [starting as a new thread because this may get into the nature of the subjective “self”]

    You regard all of the mechanisms as “conscious”, just that the one you equate with the autobiographical self is our consciousness. Is that correct?

    I want to be careful about calling anything “conscious”. I would say that certain events/processes/functions are conscious-type events/etc., i.e., psychules. The consciousness of a system is the repertoire of psychules that system can perform. The system can include more things than just the mechanisms of the psychules, so for example, memory can be part of the system.

    So we can talk about the consciousness of a neuron, which is the repertoire of psychules it can perform, which is very few, possibly one. Or we can talk about the consciousness of a person, whose repertoire will be vast. Or we can talk about the consciousness of subparts of the brain. In some cases we can identify subparts by their functionality (inputs and outputs) without knowing the physical makeup of the mechanisms. Sometimes it may be useful to identify a system of multiple mechanisms if they share a common feature, such as the same set of neurons for input, or a common output, such as reportability.

    I actually don’t think the autobiographical self is conscious. Or perhaps more accurately, I don’t think most of it is conscious. I’m not thinking about every memory, belief, or value all the time. At any one time, I only have a few loaded.

    So looking at this in light of what I just wrote, the consciousness of the system which we’re calling the autobiographical self is the repertoire of tasks of which that system is capable. Which tasks are currently underway will always be a tiny fraction of that repertoire.

    What seems to be in consciousness is my planning, not necessarily my immediate sensations, most of which seem handled habitually or reflexively.

    So the consciousness you’re referring to here is the consciousness of a particular system, the one we’re calling the autobiographical self. The inputs of this system, I’m thinking, are that smallish set of neurons which support semantic pointers. Depending on the attention mechanisms, those pointers may be pointing at the concepts of planning (“I will finish this post before getting a snack”), or they may be pointing at specific sensations (“that’s a bus going by”).

    On the global workspace as a small number of neurons, that doesn’t match Dehaene’s description. His notion includes large swaths of the parietal region, prefrontal cortex, and cingulate gyrus: the fronto-parietal network.

    I’m saying Dehaene et al. have looked at the trunk of the elephant and come to the wrong conclusion about the nature of elephants. Those large swaths of cortex provide the inputs and outputs, but (I conjecture) the nexus of a small number of neurons in some subcortical region provides the global availability. Those large swaths are all part of the system, but the key representations are happening in the nexus. (Again, conjecture)

    Under your description, the apple enters consciousness [of the autobiographical self/system] when a semantic pointer is created in the small number of neurons making up your version of the workspace. But what makes it conscious?

    It becomes “conscious” when it becomes part of a conscious-type event (psychule), specifically as the input to said event. The output of said event may be a report, or it may be an evaluation of the reachability of said apple, or it may be the creation of the memory of the apple in a specific location.

    If I’m looking at the apple, I could see the semantic pointer maybe triggering its appleness, but how do I consciously perceive the details of the apple?

    Which details?

    In the following quote I have inserted “[*]” in several places. In each instance, insert the question “How exactly might that work, physically, with neurons?”:

    My own answer to this is that the appleness gets registered in the prefrontal cortex [*], in a higher order representation[*], which requests more information from lower order representations [*] in the parietal, temporal, or occipital lobes in an ongoing conversation, a conversation which manifests as long range recurrent signalling[*]. And the integration is close enough to the introspection mechanisms in the anterior prefrontal cortex that they can notice it.[*]

    *
    [get to “self” later maybe?]

    1. James, you’re going to make me work for a living 🙂

      In general, the way I see most of this working is along the lines of Damasio’s CDZs (convergence-divergence zones). Essentially, sensory inputs cause neural firing patterns, where each successive layer becomes more selective in what excites it. At the lower layers, every part of the visual signal participates in the pattern, but as we climb the layers, colors, textures, contours, then shapes gradually become more important, until we get to appleness, a multi-modal convergence, which may result in a retroactivation back down through the hierarchies (a prediction) that either results in confirmation or an error-correction signal.
      https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(09)00090-3 (might be paywalled)
      or
      http://willcov.com/bio-consciousness/diagrams/Damasio%20CDR%20diagram.htm
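
      To make that picture a bit more concrete, here’s a toy sketch of the idea (the feature names, weights, and thresholds are all invented for illustration; this isn’t Damasio’s actual model or real neural data): layers converge on “appleness”, then retroactivate a prediction back down, which either confirms or generates an error signal.

```python
# Toy sketch of a convergence-divergence hierarchy (illustrative only; every
# feature name, weight, and threshold below is made up).

def converge(features, weights):
    # A zone's activation: weighted sum of the lower-level features it cares about.
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Low-level visual features of the incoming signal.
signal = {"red": 0.9, "round": 0.8, "shiny": 0.6, "stem": 0.7}

# Climbing the layers: each zone is selective for a narrower conjunction.
color_zone   = converge(signal, {"red": 1.0})
shape_zone   = converge(signal, {"round": 1.0, "stem": 0.5})
texture_zone = converge(signal, {"shiny": 1.0})

# Multi-modal convergence: "appleness" as a pattern over the intermediate zones.
appleness = converge(
    {"color": color_zone, "shape": shape_zone, "texture": texture_zone},
    {"color": 0.4, "shape": 0.4, "texture": 0.2},
)

# Retroactivation: the top-level pattern predicts what the lower layers should be seeing.
prediction = {"red": appleness, "round": appleness, "shiny": appleness * 0.5}

# Small mismatch = confirmation; large mismatch = error-correction signal back up.
errors = {name: signal[name] - value for name, value in prediction.items()}
confirmed = all(abs(e) < 0.3 for e in errors.values())
print("appleness:", round(appleness, 2), "| confirmed:", confirmed)
```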

      So, to address your charge:
      My own answer to this is that the appleness gets registered in the prefrontal cortex [in an action-oriented CDZ], in a higher order representation [multi-modal CDZs], which requests more information from lower order representations [sensory-specific CDZs] in the parietal, temporal, or occipital lobes in an ongoing conversation, a conversation which manifests as long range recurrent signalling [spikes in pyramidal axons traversing between cortical regions, through the thalamus and other sub-cortical areas, with spikes of other axons transporting response signals]. And the integration is close enough to the introspection mechanisms in the anterior prefrontal cortex that they can notice it. [the activation of CDZs cascades into the anterior PFC where introspective CDZs are formed]

      On the way you’re using the term “consciousness”, I think a better term might be either “information processing” or “computation”. It seems better, to me, to reserve the term “consciousness” for where there is some kind of intentionality, some kind of representation. I know you’re saying that that happens in a single synapse firing, but to me the representation is the vast hierarchy feeding into that final CDZ.

      Of course, consciousness is in the eye of the beholder, so I can’t really tell you that your usage is wrong.

      1. Umm, not sure you want to get back into this, but I’ll ask again: how exactly might that work, physically, with neurons?

        I understand and accept how convergence zones work with neurons. I’m trying to understand how a group of neurons could request information and then other neurons could send that information. It sounds like you’re saying that maybe individual neurons can send distinct signals. How many distinct signals can one neuron send?

        *

        1. I can’t give a full account from neurons up to cognition. If I could do that, I’m pretty sure there’d be a Nobel with my name on it.

          But have you ever seen this talk from Steven Pinker on neurons to consciousness? Even Pinker only gives an idea of how it might work, but it’s still pretty informative.

          “It sounds like you’re saying that maybe individual neurons can send distinct signals. How many distinct signals can one neuron send?”

          As I understand it, the only variation a neuron really has is the frequency of its firing. When we say neurons are excited, what we mean is that they’re firing more frequently than the average around them. When we say they’re inhibited, it means they’re firing less frequently. What goes into and out of CDZs are the patterns of firing, along with the frequencies of those firings.
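
          A minimal rate-coding sketch of that point (the rates and the 20% margin are just made-up numbers, nothing physiological): a neuron’s “message” is read off as its firing frequency relative to the average of the neurons around it.

```python
# Toy rate-coding sketch (all numbers invented for illustration): a neuron's
# "signal" is its firing frequency relative to the local average.

def classify(rate_hz, baseline_hz, margin=0.2):
    # "Excited" = firing noticeably above the local average; "inhibited" = below.
    if rate_hz > baseline_hz * (1 + margin):
        return "excited"
    if rate_hz < baseline_hz * (1 - margin):
        return "inhibited"
    return "near baseline"

neighbour_rates = [4.0, 5.5, 5.0, 4.5, 6.0]            # spikes per second
baseline = sum(neighbour_rates) / len(neighbour_rates)

for rate in (12.0, 5.0, 1.0):
    print(f"{rate:>5.1f} Hz -> {classify(rate, baseline)}")
```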

          1. Yes, I see how CDZs can work. But what I can’t translate into logical functions like “AND gates” and “NOT gates” is “a request for information”, “a conversation”, etc., at least at the level where you are using those phrases.

            *

          2. Okay, this is about as speculative as it gets, which means it’s almost certainly wrong, but it may at least be in the ballpark.

            A “request for information” can result from a firing pattern in the PFC, say one associated with planning an action, which in turn cascades into pyramidal cells that spike back to perceptual regions (the request). The spike arriving in those regions triggers new image firing patterns, which cascade again through pyramidal cells back to the PFC. (Much of this being relayed through the thalamus and other subcortical regions.) The spikes coming back trigger new firing patterns in the PFC, which may complete the pattern that initially triggered the spikes to the other regions. (The response was received.) This could lead either to new requests (spikes back to the perceptual regions) or spikes into the premotor cortex for action, or it could die out if return spikes from the limbic system inhibit this particular PFC firing pattern (the scenario is ruled out).

            All of this is also likely mediated by intermediaries in the subcortical regions. For example, a lot of the signals probably relay through the hippocampus and surrounding regions for location and temporal related processing. The affect response may come from the amygdala, etc.
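
            Here’s that loop as a cartoon in code, just to show its shape (the region functions, the “risky” tag, and the apple example are all invented; this isn’t a model of real circuitry): a “request” is just an incomplete PFC pattern that stays pending until return spikes fill it in, at which point it drives action, issues another request, or gets inhibited.

```python
# Cartoon of the speculative request/response loop described above.
# Everything here is a stand-in: invented names, invented logic.

def perceptual_regions(query):
    # Return spikes encoding an "image" pattern for whatever was requested.
    images = {"apple_location": {"apple": "on the table, within reach"}}
    return images.get(query, {})

def limbic_veto(pattern):
    # Inhibitory return spikes: rule out patterns tagged as risky.
    return "risky" in pattern

def pfc_cycle(plan, query):
    # 1. An incomplete PFC firing pattern spikes a request toward perceptual regions.
    response = perceptual_regions(query)
    # 2. Return spikes complete the pattern ("the response was received").
    completed = {**plan, **response}
    # 3. The completed pattern is inhibited, acted on, or spawns a new request.
    if limbic_veto(completed):
        return "pattern inhibited (scenario ruled out)"
    if "apple" in completed:
        return "spikes to premotor cortex: reach for the apple"
    return "new request issued back to perceptual regions"

print(pfc_cycle({"goal": "get a snack"}, "apple_location"))
```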

            Does that help any?

          3. Hofstadter’s “Strange Loop” idea involves a self-referential positive feedback loop as the seat of consciousness. You might find some parallels there with what you just wrote.

          4. Thanks. It’s interesting how some titles, just in hearing them, manage to resonate. The “Strange Loop” one, from the moment I heard it, made sense. I had the same feeling the first time I saw Dawkins’ “Selfish Gene” title.

            My only concern, which may well be ameliorated by actually reading the book, is that he says he’s a strange loop. What I described above seems more like numerous concurrent tangled loops with intersecting states.

          5. IIRC, he does postulate an overall (single) “strange loop” as the basis of consciousness, but I think he may also see loops comprising it. Been many years since I read it, and I’m sure it’ll be hundreds of pages yet before he gets around to it in GEB.

          6. Mike, I really appreciate the effort. The request is probably too difficult. I’m not worried about being speculative. I’m just hoping to get an idea of how any mechanism might work.
            I think I’ll leave it at that.

            *

  13. I don’t know where I got the link [possibly a post on your site, but I think I got it from Twitter], but I just read this paper, The Phenomenological Illusion, by John Searle, and my respect for him has gone up considerably. I couldn’t find anything he said that I thought was wrong (as opposed to his Chinese Room stuff).

    Basically, he looks at phenomenology, so Husserl, Heidegger, and others, and concludes that what they are talking about is experience from a given perspective, which is the subjective perspective, and that they provide some valuable insights. However, some people, i.e., those suffering from the phenomenological illusion, make the mistake of thinking that those insights are all we can get from reality, and so the things we experience are all the facts there are about reality. These people then ignore (or explain away?) what we consider mind-independent basic facts of the world.

    Worth a read if you can get to it.

    *

    1. Thanks. You probably didn’t get it from me, although maybe someone else left it in a comment. Can’t say I’m much of a Searle fan. His reasoning in the writings I have read, along with his public talks, left me deeply unimpressed.
