A standard model of consciousness?

I’ve often noted that I find more consilience than disagreement between the empirically grounded theories of consciousness.  They seem to be looking at the problem at differing levels of organization, and together they may present a growing scientific consensus about how the mind works.

In particular, a few weeks ago, when discussing higher order theories, I made the observation that another theory historically discussed on this blog, the attention schema theory, could be considered a higher order representation.  It’s gratifying to know that my conjecture wasn’t hopelessly off base, as Michael Graziano, the author of the attention schema theory, along with coauthors, has the same idea.

In a new paper (which unfortunately appears to be paywalled), Toward a standard model of consciousness: Reconciling the attention schema, global workspace, higher-order thought, and illusionist theories, they take a shot at reconciling various theories and philosophies of consciousness, and reach a striking conclusion:

Here we examine how people’s understanding of consciousness may have been shaped by an implicit theory of mind. This social cognition approach may help to make sense of an apparent divide between the physically incoherent consciousness we think we have and the complex, rich, but mechanistic consciousness we may actually have. We suggest this approach helps reconcile some of the current cognitive neuroscience theories of consciousness. We argue that a single, coherent explanation of consciousness is available and has been for some time, encompassing the views of many researchers, but is not yet recognized. It is obscured partly by terminological differences, and partly because researchers view isolated pieces of it as rival theories. It may be time to recognize that a deeper, coherent pool of ideas, a kind of standard model, is available to explain multiple layers of consciousness and how they relate to specific networks within the brain.

As a quick reminder, global workspace theories (GWT) argue that consciousness results from mental content making it into a global workspace, often thought to range across the fronto-parietal network.  Higher order theories (HOT) posit that consciousness involves secondary or higher order thoughts or representations of first order sensory representations or other more primal processing, with the higher order representations thought to be in the prefrontal cortex, although possibly in other regions as well.  The attention schema theory (AST) posits that awareness is a model, a schema, of the messy emergent multilevel process of attention.

Before attempting a reconciliation, the authors make a distinction between two views of consciousness, which they label i-consciousness and m-consciousness.  i-consciousness is the information processing view of consciousness, how information is selected, enhanced, and used.  m-consciousness is the mysterious experiential essence version.  (This distinction seems similar, but not exactly equivalent, to Ned Block’s distinction between access consciousness and phenomenal consciousness.)

The theories noted above all focus primarily on i-consciousness, often to the frustration of those concerned about m-consciousness.  However, the authors argue that our impression of m-consciousness comes from an internal model the brain uses to track its own processing, a model that evolved to be effective rather than accurate.  What it presents is a simplified and cartoonish picture of that processing, one that, because of the simplifications, seems magical and incompatible with i-consciousness, but that is nevertheless a component of it.  This description of m-consciousness resonates with illusionist theories.

For the authors, the internal m-consciousness model is the attention schema.  However, while they think the attention schema uniquely tracks subjective experience, they admit that the brain probably produces numerous models of different aspects of its processing.  All of these models could be considered the higher order representations of HOT.

They also put forth the attention schema as a bridge between HOT and GWT.  The global workspace could be seen as the highest level of attention processing, which would make the attention schema essentially a higher order representation of the global workspace.

So we have a multilevel attention mechanism, which culminates in the global workspace, which is modeled by the attention schema, making it a higher order representation of the workspace.  The attention schema, along with all the other higher order representations, produces m-consciousness, a simplified cartoonish model that is effective but not accurate, the contents of which we could describe as an illusion.  All of this would make up i-consciousness.

My take is that this is a compelling view, although perhaps a bit too slanted toward the AST.  For example, I think emotional feelings are higher order representations of lower level reflexive survival circuits, which seem just as central to subjective experience as attention.  All of this, to me, strengthens HOT as the more fundamental view of what’s happening.  (Which, since HOT is a collection of theories, means we’re still far from a final theory, if we ever get to one.)

But the convergence of these theories, if they are in fact converging, is starting to look like the rough outlines of a standard model, a collection of theories that together provide an account of the mind.  Only time and additional research will tell if it actually is.

Unless of course, I’m missing something?

h/t Neuroskeptic

135 thoughts on “A standard model of consciousness?”

  1. I think we are missing a lot! Of course, we have only recently begun studying this problem systematically/scientifically. Previous discussions were philosophical and were limited by the paucity of the data.

    I find your posts very helpful and illuminating … keep missing something … or three … as I believe our misses are occurring by smaller and smaller amounts.


  2. Two words. Yeah baby!

    I’m kinda surprised they didn’t put IIT in there as well. All of those theories add up to integrated information.

    Now all they need is a mechanistic understanding of representation to explain how m-consciousness comes from i-consciousness.

    *
    [cracks knuckles]


      1. I think it’s unfortunate that IIT is not included, because I think it has something to contribute beyond just the obvious integration of information. For example, IIT 3.0 introduces the concept of qualia space, a concept that will be important for understanding the above-mentioned mechanism of representation, albeit in a slightly different form than described in IIT 3.0.

        BTW, I see our boy Wyrd has recently been playing in this space. He recently posted a link in a thread here to a comment in a thread on his site in which he was using a vector space to combine concepts and then extract sub-concepts from those concepts. I can’t figure out how to find that thread on his site, so I can’t tell if he was doing this with Eliasmith’s Semantic Pointers in mind or not.

        *


        1. I’ve historically found IIT too abstract, too disconnected from actual neuroscience, to find it much use. And its penchant for attributing consciousness to things that show no empirical evidence of being conscious makes it feel more like an ideology than a scientific theory.

          That said, I’m planning on reading Christof Koch’s new book, which I know will be pro-IIT. I don’t expect to be convinced, but maybe Koch will point out something I’ve missed.

          No idea on the Wyrd thread. You could use the search on his blog (on the bottom footer). Or maybe he’ll weigh in here.


          1. I looked all around for that search function. I guess I didn’t look at the bottom. Anyway, yes the discussion started from semantic pointers. Here’s the link: https://logosconcarne.com/2019/05/09/our-inner-screen-vector-space/

            In the very last comment, which I hadn’t seen until Wyrd linked to it a little while ago, Wyrd says he created some Python code to do the vector manipulation. Hey Wyrd, if you wanna crack artificial general intelligence you should figure out how to use those vectors to manage causality. First you need to code events and then order them, like “dog barked” and “cat ran away”, or “arm pushed cup” and “cup fell to floor” and “cup broke”. Then you need to code goals like “break cup”, and use that to recall memories of previous associations that include “cup broke”. Interested?
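
            For what it’s worth, here’s a rough sketch in Python of the kind of event encoding I have in mind, binding role/filler pairs with circular convolution (the operation behind Eliasmith’s Semantic Pointers). All the names, the roles, and the dimension here are my own illustrative choices, not anyone’s actual implementation:

            import numpy as np

            rng = np.random.default_rng(42)
            DIM = 1024  # high-dimensional vectors keep bound pairs separable

            def random_vec():
                v = rng.standard_normal(DIM)
                return v / np.linalg.norm(v)

            def bind(a, b):
                # circular convolution, computed via FFT
                return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=DIM)

            def unbind(c, a):
                # approximate inverse: circular correlation with a
                return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(a).conj(), n=DIM)

            AGENT, ACTION = random_vec(), random_vec()
            dog, barked, ran = random_vec(), random_vec(), random_vec()

            # the event "dog barked" as a superposition of bound role/filler pairs
            event = bind(AGENT, dog) + bind(ACTION, barked)

            # query the event for its action; the noisy result is closest to "barked"
            guess = unbind(event, ACTION)
            for name, v in [("barked", barked), ("ran", ran), ("dog", dog)]:
                print(name, round(float(np.dot(guess, v)), 3))

            A goal vector like “break cup” could then be encoded the same way and compared against stored event vectors by dot product to recall the associated memories.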

            *

            *


          2. “…our boy Wyrd…”

            Your “boy”??

            “Wyrd says he created some Python code to do the vector manipulation.”

            I mostly wanted to test the idea that randomly generated high-dimension vectors all turn out to be roughly orthogonal to each other. (Yep.) I kind of lost interest in it after that.
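
            For anyone who wants to reproduce it, here’s a minimal sketch of that test (reconstructed for illustration; the dimension and sample count are arbitrary, and this isn’t the original code):

            import numpy as np

            rng = np.random.default_rng(0)
            n, dim = 200, 1000

            # random unit vectors in a high-dimensional space
            vecs = rng.standard_normal((n, dim))
            vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

            # pairwise cosine similarities, excluding each vector with itself
            cos = vecs @ vecs.T
            off_diag = cos[~np.eye(n, dtype=bool)]

            print(f"mean |cos| = {np.abs(off_diag).mean():.3f}")  # ~0.025 for dim=1000
            print(f"max  |cos| = {np.abs(off_diag).max():.3f}")   # small, i.e., roughly orthogonal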


          3. Wyrd, boy = term of endearment, member of the club.

            Too bad about no more interest. AGI awaits. BTW, can I have your code? I’m guessing straight up vectors may be more efficient for the task than simulated neurons.

            *
            [email = James at James of Seattle (one word) daht kahm]


          4. “boy = term of endearment, member of the club.”

            Um,… no.

            You don’t possess me, I’m not “yours,” and no adult male should ever be called “boy” by a relative stranger.


  3. For example, I think emotional feelings are higher order representations of lower level reflexive survival circuits, which seem just as central to subjective experience as attention.

    Are you suggesting these higher order representations are conscious without being attended to? And in what sense are the representations “of” the reflexive circuits? Seems to me that emotions are simply constellations of system-wide effects, and our “feelings” related to the emotions are simply the proprioceptive perception of the effects. Thus, the representations are not “of” the circuits but instead are “of” the outputs of those circuits. On introspection we can recognize the constellation as having a common cause, and we can give that unspecified but identified common cause a label, such as “joy”.

    *


    1. I actually think there are nonconscious feelings, so no, I’m not saying that they’re always conscious. You used the word “attended” though. One important point of the AST is that we can attend to something nonconsciously. AST says we’re only aware of it if the attention schema focuses on it. Whether that’s true might depend on what exactly we consider the scope of the attention schema to be. In other words, does it include imaginative mind wandering?

      On emotions, it seems like you’re asserting James-Lange theory here. I think there’s enough evidence from people with spinal cord injuries to rule that out. Certainly in healthy people the physiological effects resonate back interoceptively, and are part of what drives the representation.

      But here’s the thing: we have the option to override our survival circuitry, at least the final motor output parts of it. The early physiological aspects happen before we can do the override. But for the higher level circuitry to decide which reflexes to allow or inhibit, it must have some conception of what the signal from those circuits, along with the interoceptive resonance, means. It’s that concept that I’m calling a representation of the survival circuitry.


      1. I may have mixed up proprioception with interoception.

        Also, I may need to get a better understanding of the difference between attention and “the” attention schema. I see attention as a competition to get into the global workspace, and whatever succeeds is what has our attention. There are definite prefrontal cortical mechanisms (goals) which try to influence that competition, but mind wandering is what happens when that prefrontal influence is turned down, leaving the competitors without outside help.

        As for emotions, I think those have their effects directly on the above mentioned competitors, boosting some and not others, sometimes overriding the prefrontal influence, and sometimes being overridden by said prefrontal influence.

        *


        1. On the distinction between attention and the attention schema, this essay is pretty old, but I recall it being a good summary of the attention schema theory.
          https://aeon.co/essays/how-consciousness-works-and-why-we-believe-in-ghosts

          On your last paragraph, it depends on exactly what you mean by “emotion”. There are the survival circuits and then there are the feelings. From what I’ve read, feelings are prefrontal prediction mechanisms (representations) that help the PFC decide which subcortical survival circuits (and/or habits) to allow and which to inhibit.


  4. Yes, I think some of these ideas are moving in the right direction. It is noticeable though that academic incentives encourage lots of different named theories that their proponents can talk about and publish, rather than a coming together in a collective effort. That is what is needed to put together a broader model and test it.


    1. It seems like there’s always been an element of that. Think back to the feud between Newton and Leibniz. Everyone hopes to win a Nobel prize.

      The paper identifies one of the chief issues with the different theories: differing terminology. Although the authors don’t completely help matters by coining yet more new terms, i-consciousness and m-consciousness, rather than just working with the existing ones.


  5. If I had to count the number of papers that began with “Toward(s)” and never went anywhere…

    I can’t recall one that ever actually led to anything significant.

    I doubt glomming together a bunch of similar or somewhat related theories will lead to any significant breakthrough that couldn’t have already been achieved with one of the theories by itself. Actually it could make it less likely because, I would think, it would make the theory more complex and less likely to pass the simplicity test. It might enrich another generation of consciousness researchers somewhat like string theory has for physicists.


    1. I’ll take your theme further James. From the standard “Laws are like sausages. It’s better not to see them being made” quote, what we seem to have here is standard politics rather than standard science. Of course everyone wants a piece, so then we wet the beaks of various players to see if an amalgam may thus be able to survive. This sort of thing doesn’t work long term in science, but may indeed be a good strategic move for a while for some people, and especially in softer varieties of science such as this.

      I’m biased of course. My own dual computers model is definitely lower order, as well as quite original.


    2. Geez guys. Obviously from all this hostility, the approach of the paper hit a nerve.

      All I can do is point out that convergence is usually seen as a good sign in science. And as I’ve noted before, I don’t think there will be one theory that explains it all, just as we didn’t find one theory to replace vitalism, but a galaxy of molecular chemistry models.

      Given that history, and the ad hoc nature of evolution, I don’t find it surprising that combinations of theories might be the reality. And the term “standard model” comes from the particle physics framework of theories. So it’s not like this approach hasn’t been fruitful in the past.


      1. To me this doesn’t seem like hostility Mike, but rather warranted skepticism. Why wouldn’t this convergence be motivated by standard politics? We are, after all, talking about a human endeavor.

        You’ve become more invested in these sorts of ideas in recent months (at least). Thus, as a skeptic, it may be prudent to be a bit apprehensive about ideas you naturally want to succeed. I’m admittedly the opposite. But note that I have at least been clear about my own such biases.

        Most of science today is not “ad hoc”. Where it does remain this way in science, however, it seems the opposite of what scientists are most proud of.


        1. Eric,
          If you read about GWT, HOT, and AST at length, you’ll learn about numerous experiments and observations. They’re not armchair philosophical speculation. That doesn’t necessarily mean they’re right, but any alternate theories will need to explain the same data.

          If you can find logical problems with the theories, or conflicting data, then it might be productive criticism. But if you’re going to just assume political motivations, without evidence, then that doesn’t look like skepticism to me, just bias.

          If politics were part of it, why didn’t they loop in the IIT and panpsychists, or the recurrent neural networking people, or any of the other dozens of theories? They could have secured a lot more allies that way. But they focused on theories where a real convergence might exist.

          “You’ve become more invested in these sorts of ideas in recent months (at least).”

          If you look at the history of this blog, you’ll see that I’ve always explored neuroscientific theories of consciousness. The first post on AST was in 2013.


      2. Maybe if I could read the paper I might change my mind. But, since I haven’t been a big fan of the component theories, I doubt I would think the combined one to be promising. Don’t get me wrong. HOT, Global Access, etc. might be useful for explaining some aspects of how consciousness works but as a general (“standard”) theory that explains how consciousness emerges from matter I’m not seeing it. Maybe science can never explain that but, if that is the case, claiming this as a “standard” theory seems unwarranted.


        1. “Maybe science can never explain that but, if that is the case, claiming this as a “standard” theory seems unwarranted.”

          Unless the theory can demonstrate why the question is ill-posed. The hard problem may be unanswerable, but the meta-problem isn’t. That’s what the m-consciousness concept is about.


        2. James, on ‘how consciousness emerges from matter’, do you want it explained to a god-like physicist looking on objectively from outside, or to the conscious subject itself, with its unique subjective perspective? Very different things, and we all slide between the two rather too easily, depending what we want to prove.

          The conscious subject is a set of interacting patterns and processes, which are conscious because they happen to have incorporated an accessible and actionable representation of themselves. The physical substrate that enables all this to happen is there, but in some ways irrelevant to the subject, except in grounding it in the physical world through senses, motor actions and neural wetware.

          For the god-like physicist, there can be a high level description of what the physical materials are doing which can, rather arbitrarily, be partitioned as a subject of experience, and a content of experience, but without ever having first-hand access to that experience… and many other partitions and descriptions of what the physical materials are doing would be possible and equally valid.


          1. I don’t think there is such a thing as a god-like physicist.

            I would want a standard theory to be able to explain how we can make a non-biological entity (computer, software, something else) conscious or explain why it can never be done. I’m not asking for all of the details but at least a framework of how it could be done or why it can’t. In other words, something to guide future research. At least have a position of some sort on substrate independence with a concrete rationale behind it.


          2. Both Dehaene and Graziano take their theories to be doing that. I think Graziano’s (the attention schema theory) has a better claim, since it’s more specific. Of course, we’re still struggling to recreate even basic nonconscious cognition, so there’s a long way to go.

            Interestingly, there’s a lot of talk about trying to have AI explain itself, which will involve it having a model of its own processing. If we get to that state, we might have the beginnings of artificial-m-consciousness.

            My suspicion is that we’ll find creature consciousness, when we have a thorough understanding of it, to be an awkward architecture for what we’ll actually want from AI. A lot there comes from evolutionary history, and much of it won’t be productive for your typical autonomous robot.


          3. James: Well I think that my model of consciousness (which has similarities to Graziano’s) is implementable in software to generate something that is conscious and I am working on that, as I reckon implementing it is the only way to get beyond discussions in which we all use similar words differently.

            Mike: Yes I agree that communications from and to AI about what it is doing could be a great way to test and benefit from artificial consciousness. I think it is just one of the ways in which the architecture of consciousness can accelerate AI, as several features of the architecture would assist things like machine learning.


          4. I would want a standard theory to be able to explain how we can make a non-biological entity (computer, software, something else) conscious or explain why it can never be done.

            James, consciousness is a biological phenomenon. As Searle said (and I paraphrase): “You can make a model of digestion but it doesn’t digest anything.” The only known way to create consciousness is to succeed at reproductive sex.


      3. Mike,
        What neuroscientific evidence do you perceive to validate these models? Furthermore there was a time when you perceived neuroscience to validate the framework of Todd Feinberg and Jon Mallatt. What do you now perceive about their neuroscience based endeavor which had fooled you?

        In the end psychology supervenes upon neuroscience, which is to say that neuroscience must account for macro organism function. Thus to me it seems more productive to work things out from the direction of psychology initially, or the approach taken from my own dual computers model. This model is relatively neuroscience independent. An analogy would be designing a building through conceptual drawings that don’t immediately get into the structural materials that would be needed. Then once we get a conceptual framework for the required building (or for organisms this would be their psychology), specialists could use their engineering expertise for what would be needed to support such a structure (or their neuroscience).

        My own perception is that these higher order theories should be discarded outright, given that they deny consciousness to the vast majority of organisms below the human. It’s widely accepted that there is something it is like to exist as a fish, for example, and perhaps even a garden snail. If so then these theories should be leading science in the wrong direction. Note that this is something which neuroscience itself can never show us, because the field resides at the wrong level of abstraction.


        1. “What neuroscientific evidence do you perceive to validate these models?”

          That’s a vast question. The main evidence for GWT is that conscious perception seems to correlate with massive activation of the parietal and prefrontal cortices. For HOT, lesions to the prefrontal cortex seem to have powerful effects on conscious awareness of a perception. In other words, activation of the early sensory regions by themselves isn’t sufficient for conscious awareness. In the case of AST, there are cases where someone’s attention can be diverted without them being aware of it, such as when watching a magic trick. And awareness seems to be deeply affected by lesions to the temporoparietal regions, as well as the prefrontal ones.

          “Furthermore there was a time when you perceived neuroscience to validate the framework of Todd Feinberg and Jon Mallatt. What do you now perceive about their neuroscience based endeavor which had fooled you?”

          First, I never bought all of F&M’s hypotheses. Neither did you if I recall correctly. For example, I think both of us are skeptical that exteroceptive images alone are sufficient for consciousness, which is their view.

          However, most of what I’ve written about their views isn’t necessarily inconsistent with the theories above. Some HOT advocates, like Joseph LeDoux, are skeptical about animal consciousness, but it’s not automatically entailed by those theories. (Hakwan Lau, Stanislas Dehaene, and Michael Graziano are much more open to animal consciousness.) There’s no reason why animals, particularly mammals, can’t have a global workspace or higher order thought. (I think the term “higher order” is probably confusing. All it really means is later in the sensory-action sequence.)

          These mechanisms might be radically different in early vertebrates, not to mention invertebrates, but that doesn’t mean they’re not there. This is one reason why I’m skeptical of focusing too much on neuroanatomy, because functionality shifts around over the course of evolution, and there are often alternate ways to accomplish the same thing. It’s why I prefer focusing on capabilities, and that’s where I think F&M are strongest in their analyses.

          I do sometimes wonder if F&M shouldn’t have called their book “The Ancient Origins of Cognition”, since a lot of what they discuss might be what in humans we’d classify as nonconscious cognition. But it’s worth remembering that they’re explicitly not focusing on human level consciousness. Their definition of consciousness is very broad, broader than yours or mine. (Although not as broad as the panpsychists’ or plant cognition advocates’.)

          On fish and garden snails and dismissing theories outright, I’m not sure snail and fish sentience is as widely accepted as you claim, but hopefully my comments above put you more at ease on this, and in any case, rejecting theories because we don’t like what they tell us isn’t scientific. If a well validated theory predicted that nothing but humans were conscious, and you rejected it only because you didn’t like that outcome, you wouldn’t be pursuing a real understanding of reality anymore.

          All that said, I think the overall question, are animals conscious, is ill-posed. They don’t have human consciousness. But they do have lower levels of it. Ultimately it’s a matter of definition. If you look at my earliest post about F&M, that’s a point I make from the beginning.


          1. “The main evidence for GWT is that conscious perception seems to correlate with massive activation of the parietal and prefrontal cortices. For HOT, lesions to the prefrontal cortex seem to have powerful effects on conscious awareness of a perception.”

            The problem here is that the human brain has a big PFC so, of course, it is there for a reason and will be “activated” if the brain is doing much at all besides sleeping or being under an anesthetic. Whether that proves anything more than that humans have a big PFC I don’t know. It would really be a surprise if the PFC was dormant while we are awake. Is there any part of the brain that especially goes dormant during wakefulness?

            I expect we would find the same in almost any animal – that most of the brain is active during wakefulness and a lot is much less active during sleep.


          2. The activations are all in relative terms. As a rule, the entire brain is always active, but how active various regions are, and the timing of the rise and fall of activity, are measurable phenomena, along with their correlations with self report. And remember, these are statistically significant patterns across large numbers of measurements.

            That’s not to say that alternate explanations aren’t possible. There remains room for any or all of these theories to still be wrong. But again, alternate theories need to explain the same data.


          3. But there is a certain amount of circularity involved with the PFC being crucial. Since the only organisms that can self-report are humans, naturally the PFC will be active. For that matter, the requirement to self-report itself might be a big part of what is making the PFC active, since the PFC is heavily involved in social interaction. How do we distinguish the brain activity for consciousness and awareness from the brain activity needed to observe the consciousness and report it? Once we are asking a subject to report their perceptions we’re already influencing the brain activity in potentially biased ways. It’s analogous to participant-observer issues in anthropology, where the ethnographer, just by presence in the culture, is influencing the behavior.


          4. Protocols have been developed to minimize the self report factor. If test subjects receive a certain stimulus while being scanned, which they consistently self report being aware of, the scans for that can be saved. Then other test subjects are scanned while being shown the exact same stimulus, but with no requirement that they self report it. The scans are then compared. There are, of course, variances in activations, but the PFC is still heavily activated in the second group.

            Of course, I said “minimize” rather than “rule out” because someone could still argue that the PFC activation is superfluous. (Some do make this argument.)
            https://www.jneurosci.org/content/37/40/9593#sec-5


          5. That paper looks interesting. Thanks. I will need some more time with it.

            For me, however, it wouldn’t be at all surprising that the PFC is involved in human consciousness. It’s there so it is bound to be doing something while we are conscious. The question is whether it is critical to all consciousness.


          6. From what I’ve read, every mammal has a PFC, although it’s relatively small in non-primates. In pre-mammalian vertebrates, some of the pallium might provide a simplified version of the functionality.

            It is possible some aspects of consciousness might be able to function without the PFC, such as the sensorium, but most of the affects happen in the PFC. So, does a non-feeling awareness of the environment count as consciousness? It’s worth noting that some variants of HOT allow for some of the higher order representations to be in the parietal and/or temporal lobes.


          7. I thought the amygdala was mainly involved in processing emotion but there is still the circular issue that, because humans have a PFC, it is bound to be involved in almost everything.


          8. The problem with the word “emotion” is it’s used in varying ways, ways that are often conflated with each other. Originally it meant the automatic reaction as opposed to the feeling of that reaction, but in modern usage it’s often used to refer to the latter. I use “affect” because it more specifically refers to the feeling of the emotion instead of the survival circuit reflex.

            The survival circuit portions rise up through subcortical circuits, including the amygdala. But in humans, the affect seems to happen in the frontal cortex (PFC and ACC). That doesn’t mean a creature without a PFC or ACC never has an affect. Remember, evolution can solve the same problems in multiple ways.

            But assessing affects in animals is a tricky business. Often evidence for survival circuit reflexes (which are very ancient) is taken as evidence for affects, but Feinberg and Mallatt identify operant learned behavior as a crucial indicator. You don’t need to have an affect to react reflexively, but you do to learn goal-directed behavior.


          9. So the awareness of the crayfish wouldn’t be unfeeling?

            Would an unfeeling AI not be conscious even if it exhibited other behaviors consistent with consciousness?


      4. Mike,
        You’re right that I was never happy with F&M’s brain architecture, even though their work did help me get a sense of when central organism processors, and then affect, probably evolved. I think it’s a shame that they added “inside consciousness” and “outside consciousness” to what otherwise could have worked alone, or “affect consciousness”. Here they could have proposed non-conscious function (as is the case for our robots) that creates a virtual affect based mode of function. Then they could have said that beyond this affect input, as well as intro and extro senses, as well as memory input, this second computer interprets such inputs in order to figure out how to promote its affect based interests through muscle operation. So I’d say that regardless of good work regarding the evolution of life, they aren’t going to get far without better brain architecture.

        I appreciate your account of how GWT, HOT, and AST correspond with known neuroscience, and certainly don’t question the validity of their evidence itself. Instead I worry about their conclusion that any or all of these theories present basic enough ideas to effectively represent “what it’s likeness” for life in general. It seems a bit anthropocentric to say, “You lack much of what we have, and therefore your existence should be about like it is for one of our robots — personally insignificant”. There’s just too much evidence that organisms with less advanced brains can also feel good and bad.

        We can certainly say that human consciousness depends upon advanced brain structures. So why not leave GWT, HOT, and AST right there, or as models for “human consciousness”? Apparently because the prize for “consciousness alone”, is simply too great.

        Given apparent flaws in the work of F&M, GWT, HOT, AST, theory of constructed emotion, and on and on, wouldn’t it seem prudent to also try approaching this question from the opposite direction? Here we could sketch out a structure that makes sense in a psychological capacity, and then go back and fill in the neuroscience of such a model later. But in order to gain a working level grasp of my own such model, you’d need to try to actively use it rather than just passively asking how it works.


        1. Eric,
          I actually like the divisions between the different types. It helped sharpen my understanding of the components. And the affect portion calls attention to something many theories of consciousness don’t address. Neither Dehaene nor Graziano really gives much of an account of emotional feelings.

          I think your real beef is that they called each of them “consciousness”. It might have been better if they’d called them interoceptive representations, exteroceptive representations, and affect representations. Of course, to F&M, the existence of those representations and their utilization (even if only reflexive) is consciousness, or at least the primary consciousness they’re aiming for.

          Our theory of consciousness, as developed here and in chapter 5, falls squarely in the category of “first-order representational theories” (Mehta and Mashour, 2013). Such theories say that consciousness consists of sensory representations of the world and body that are directly available to the subject.

          Feinberg, Todd E. The Ancient Origins of Consciousness: How the Brain Created Experience. The MIT Press. Kindle Edition.

          I think the details of “available to the subject” are what’s missing from first order theories.

          On scoping the theories to only human consciousness, I wonder if you actually read what I said above about those theories not ruling out animal consciousness.


      5. Mike,
        I most certainly did read what you said about GWT, HOT, and AST not ruling out consciousness for animals below the human. But note that you said Joseph LeDoux was skeptical, while Hakwan Lau, Stanislas Dehaene, and Michael Graziano were merely much more open to the idea. Of course it could be that there are less complicated brain structures that may be said to replicate some of the function of the human brain, and so may produce consciousness from their models, though this would naturally be speculative given where these models originate. If “there’s nothing it is like” to be non-human, then “no harm… no foul”. Otherwise however, shouldn’t we seek the most primitive example that we can find and designate this to be a more fundamental version? Here the human would instead have a highly evolved variety of “consciousness”.

        Consider electricity generation. It might be produced through a simple hand crank that’s wired up right, or it might be produced through a modern nuclear plant. If we didn’t know the mechanics of electricity production, however, and had nothing more than a modern nuclear plant from which to study this phenomenon, wouldn’t you expect some extra fancy ideas to emerge about the production of electricity?

        Science advances by means of effective reductions. Starting with the human brain as our essential model of a machine which produces “consciousness”, and then backtracking to say that perhaps certain less advanced forms of life are able to replicate the same sort of thing in some capacity, thus seems like the wrong direction to take this (that is if “there is something it is like” to exist for many varieties of life, as the vast majority of us perceive).

        My own first order theory of consciousness does not explicitly get into “the hard problem” of its production (though it does have some implicit implications). But if you’re convinced that “affects are formed from the signals from reflexes or survival circuits that are beginning to fire, primarily as input into higher level circuitry that has to decide whether to allow or inhibit that reflex”, couldn’t you find a more basic model which is open to such an answer?


        1. Eric,
          “were merely much more open to the idea”

          It seems like the only thing you’re going to get stronger than that is a statement of faith. I personally don’t want to see those. I want to see conclusions reached through evidence and logic. When I start to read something from someone where I detect ideology rather than cold hard science at work, I immediately discount that viewpoint.

          On finding the most primitive form of consciousness, the first problem is coming up with a definition. People use the word “consciousness” to refer to a wide variety of things. Which is why I usually present that hierarchy.

          I’d also note that people tend to be inconsistent about this. They insist that consciousness is something simple, and that we need to look for it in the simplest parts of the brain, then express wonder at how something so rich could come from something like those simple components. The answer is, it doesn’t. The richness is built on top of a vast complex of functionality, most of which we have no introspective access to, but that doesn’t mean it’s not there.

          That quote on affects is my shot at the most basic version. It’s so basic that it’s not conscious, capable of being handled by habitual systems. Of course, if you insist that only conscious feelings are feelings, then more cognitive aspects have to be brought in.


      6. This has been a productive discussion for me Mike. I’ll retain my “nuclear power plant” analogy for GWT, HOT, and AST, convinced that science will ultimately need a generally accepted definition for the “consciousness” term which is more like a machine operated by a hand crank.

        Yes it’s unfortunate how people use the consciousness term for a wide variety of notions. Of course I don’t, and I’m never inconsistent here. Soft science will need to develop this sort of perspective some day in order to eventually become hard.

        Like electricity, in the end there must be physics associated with affect/consciousness. Whatever physics happens to be involved, whether the “information processing” explanation that you favor, or something else, scientists will need to develop a basic associated model and then scale this up for the human. Here human based conceptions, like GWT, HOT, AST, and so on, should be rendered quite obsolete.


        1. I once used the example of a video game for the same concept. Pong is a much simpler game than World of Warcraft, but they’re still both games. That said, a person familiar with only games like WoW might have a hard time understanding the stark simplicity and limitations of Pong. We have to be careful not to project the full breadth and depth of our experience on much simpler creatures, while still recognizing the primal similarities.


      7. Agreed Mike. Once a primal definition for “consciousness” does become generally accepted, it will need to be understood that a wide variation in functionality shall exist between technically “conscious” entities. I suspect that the vast majority of us will do just fine with this. Note that your five layered hierarchy of consciousness could then finally be retired. Progress!

        So what might a useful primal consciousness definition be? I favor the reasonably standard “what it’s likeness” idea, or something that I consider to boil down to “affect”, “sentience”, and that sort of thing. Even if entirely functionless, or essentially inciting no output mechanisms whatsoever, experiencing this would still render something “conscious”.

        I wonder if you have any reasonable suggestions for a competing primal consciousness definition?


        1. Eric,
          “Note that your five layered hierarchy of consciousness could then finally be retired.”

          It might eventually need to be retired (it will certainly need revision), but for me it would only be because of new data. Someone coming up with a definition won’t change the realities it’s referring to. Only those realities being different would.

          “I favor the reasonably standard “what it’s likeness” idea, or something that I consider to boil down to “affect”, “sentience”, and that sort of thing.”

          That’s the difficulty with the “what it’s like” definition. It seems simple, but only because it’s hopelessly vague. You’re relating it to primal sentience, but someone else might relate it to sensory experience, or our knowledge of that experience. There’s a tendency to lump all of it together, but they’re not the same.

          “I wonder if you have any reasonable suggestions for a competing primal consciousness definition?”

          I think the way most people use the phrase “primary consciousness”, it needs to include sensory representations too. Indeed, it’s not clear to me that feelings as we understand them make sense without representations of the environment, one’s body, and the relationship between them.


          1. Mike, regarding your:

            [What-it’s-like] seems simple, but only because it’s hopelessly vague.

            Nagel’s “There is something it is like …” isn’t “hopelessly vague”—it’s thoroughly incoherent with its suggestion of an ontology for a ‘something’ in the “There IS” phrasing. Additionally the attempts of others, like Chalmers for instance, to explain the meaning of the ill-formed cringe-worthy noun ‘what-it’s-likeness’, use the language: “There is something it feels like to be a …” so the explainers are simply saying that bats and other mammals are sentient, which is a long-standing cultural belief per that paper on animal sentience I provided the link to in another of your recent posts. I’m assuming you’ve read that paper as well as Hacker’s teardown of Nagel’s language massacre in “Is There Anything It Is Like to Be a Bat”—if not, let me know and I can supply the links again.

            “Sensory experience” and ‘knowledge’ are contents of consciousness and, although it’s true that consciousness cannot exist without at least one content, the contents of consciousness (knowledge, perceptions of the environment, etc.) and consciousness (sentience, meaning ‘feeling’) are two different things.

            So your observation:

            Indeed, it’s not clear … that feelings as we understand them make sense without representations of the environment, one’s body, and the relationship between them

            … seems to be saying that core/primary consciousness—an organism’s feeling of being embodied and centered in a world—doesn’t make sense without that “feeling of being embodied and centered in a world” part. Unless my linguistic sense isn’t functioning this morning, your phrasing appears to assert that consciousness cannot exist without contents.

            It seems that Eric might be coming to appreciate Damasio’s proposition that “Consciousness is a feeling composed of feelings corresponding to sensory tracks”—that, overall, “Consciousness is the feeling of what happens.” Is that conception somehow disagreeably simple? It suggests an obvious evolutionary origin for core consciousness as an embodied simulation of an organism in a world—beginning with primary bodily feelings—and its further development to ‘extended’ human consciousness, in which all of the contents of consciousness (thought, etc.) are themselves feelings.

            Is the simple conception of consciousness-as-sentience being rejected for precluding all the philosophical mystification of consciousness in the theorizing of consciousness philosophers? What’s going on here?


          2. Oops! Another runaway italics fatfinger … should have been:

            “and the relationship between them”

            You still got your magic fixit wand Mike? 😉


          3. I meant that the italics should terminate after “and the relationship between them” … it’s one of those days! 😉


          4. … it’s not clear to me that feelings as we understand them make sense without representations of the environment, one’s body, and the relationship between them.

            That totality is exactly what core consciousness IS: a simulation in feelings of being an embodied organism in a world.

            That’s the Body (embodied), the Environment (in a world) and the Relationship (centered in) between them. All accomplished with feelings … consciousness.


          5. I’m not a fan of the “what it’s like” language myself, but I try to resist getting too nit picky with the language. It’s meant to refer to an overall intuitive sense of subjective experience. But both phrases are hopelessly vague, and I suspect most people are tangling up the human version of experience with it, whether they intend to or not.

            On the primary consciousness definition, I’m not quite sure what you’re asking.

            On the feelings point, I think I understand what you’re saying. But I find that use of the word “feelings” to be too polymorphic for clarity. In my mind, a feeling could be a somatosensory perception or an affect. There are typically affects laced in with exteroceptive and interoceptive perceptions, but I see them as distinct phenomena. But that’s a terminology nit.


          6. The “what-it’s-like”-ers all explain that abomination of an expression by saying they’re talking about phenomenal consciousness, which, per Ned Block’s aged definition is:

            “According to Block, phenomenal consciousness results from sensory experiences such as hearing, smelling, tasting, and having pains. Block groups together as phenomenal consciousness the experiences of sensations, feelings, perceptions, thoughts, wants and emotions.”

            But maybe too many syllables compared with [cringe] ‘what-it’s-likeness’ … 😉 Block’s is a reasonable contents of consciousness definition but for my observation that the “experiences of sensations, feelings, perceptions, thoughts, wants and emotions” are all feelings. All experiences are feelings.

            “On the primary consciousness definition, I’m not quite sure what you’re asking.”

            I wasn’t asking anything but pointing out that you’re saying that consciousness cannot exist without contents of consciousness, as I wrote, although consciousness and the contents of consciousness are two different things.

            And I think you’re saying that the word ‘feeling’ has too many meanings (polymorphic?) for you to understand what’s being specifically referred to, since you believe the word can refer to either a “somatosensory perception or an affect.” That’s true! I assume you’re referring to bodily feelings (somatosensory perception) and emotional feelings (affect).

            I explained ‘feeling’ at great length on your Frankish post, so I thought you understood what I meant. But, perhaps not yet. So here’s an addendum:

            I began with bodily feelings like a touch or a toothache or a pain in your toe. Bodily feelings are surely the initial consciousness to evolve, since they’re so critical to survival. Note, however, that bodily feelings are localized so that they’re felt as if they were actually in the body part rather than in the brain. But that’s the baseline definition of a feeling. Visual and auditory perceptions are also localized, but as coming from a body part—the eyes and ears in this case. “Coming from” isn’t quite the same as “localized in” but the feeling is produced by the same brain mechanism. This now expands the definition of ‘feeling’ a bit to include both “felt in” and “coming from” body parts, so it’s a no-brainer to see that all of these are “bodily feelings.”

            Speech also involves body parts (larynx, tongue, mouth) for hearing/speaking people and is therefore an activity localized to specific body parts. Per Temple Grandin, autists think in pictures, a “coming from” the eyes bodily activity as already mentioned. So both cases of thinking are localized to body parts, and thinking is properly called a ‘feeling’, produced by the same brain mechanism as bodily feelings. Emotions (affects) are more diffuse and not always felt in relation to a body part, although that “sinking feeling in your stomach” certainly is. But whatever the emotion, the feeling of the emotion is, once again, produced by the same brain mechanism that produces bodily feelings.

            Collectively, all of these are unquestionably feelings and are rooted in embodiment—these are the contents of consciousness. If I’ve omitted to mention a type of consciousness content, let me know and I’m sure it’ll easily fit into this description. Based on this inclusive description of what is meant by ‘feelings’, it’s easy to see why Damasio views consciousness as “the feeling of what happens” which is “composed of subtracks corresponding to all sensory modes” and, as I’ve explained above, includes thinking and other conscious cognitive operations. Phenomenal consciousness is a feeling. Consciousness, however adjectivized, is a feeling.

            In my NTC (Neural Tissue Configuration) proposal ALL of the above identified feelings are produced by specific cellular “configurations” of whatever neural tissue produces consciousness.

            Did I leave anything out? Is anything still unclear?


          7. Thanks Stephen. I think I have it.

            Would you say second order consciousness, introspection, is also a feeling? Or imagined scenarios? In other words, are you using the word “feeling” to refer to all cognitive, sensory, and motor activity? Just checking.


          8. I’m not sure there is a difference between perception and consciousness. Perceptions are what you are conscious of. And do you perceive abstractions? When you call to mind the formula for the Pythagorean Theorem, is that a perception? I say it is.

            Regarding representation, my first impulse is to reject the dichotomy, sort of like we now reject the dichotomy of life versus non-life. But the question I’m currently trying to work out is how to explain qualia. Representation requires at least two processes: one to create the representation vehicle and one to interpret the representation and produce appropriate output. But is the qualia of the represented object or of the output of the interpretation? When a frog interprets the black spot in its view, is the qualia of “black spot” or “time to flick my tongue”? Or maybe there is no qualia unless the representation is general purpose and can be the input of multiple processes, i.e., globally broadcast. Or maybe there is no qualia until the general purpose representation is broadcast and subsequently referenced, i.e. higher order thought. Or something else?

            *


          9. I think of perceptions as sensory processing, which can be conscious or nonconscious. For example, I can walk while listening to a podcast, avoiding obstacles and the like, obviously perceiving the environment at some level, while my conscious mind is focused on the content of what I’m listening to.

            Of course, the word “perception” does get used for quick intuitive cognition, such as perceiving a solution to an equation. This usage has long made me wish I could think of a better word for the sensory modeling.

            I’m not entirely sure frogs have visual qualia. A frog left in a cage with fresh but dead flies will starve. Its tongue only seems to go out reflexively. John Dowling asks: what, then, does a frog really “see”?

            On qualia, my view, which is roughly compatible with HOT, is that qualia happen in the movement planning parts of the brain, from information that comes in from the sensory perception parts. It doesn’t seem to happen when the same sensory information makes it to the habitual or reflexive systems.


          10. [Qualia] doesn’t seem to happen when the same sensory information makes it to the habitual or reflexive systems.

            If you can’t say what qualia is, how would you know if it was happening? What if it was happening in the reflexive systems, but nothing was accessing or referring to it?

            *


          11. Nonconscious qualia? Or qualia associated with a subterranean consciousness? I accept the nonconscious version of a lot of things, but I think qualia is a case where to speak of it as outside of consciousness is wrong by definition.

            Qualia seem to involve circuits that go from the early sensory areas, through the association regions, to the action planning and introspection regions, where they can potentially influence the language center. (The exact path remains unknown, but there are known crucial points.) If part of the sequence is blocked, such as when someone is cortically blind due to damage to their visual cortex, then information might make it to the movement regions by other circuits (such as subcortical ones) and affect behavior, but the person won’t have any qualia associated with it.

            All of which is to say, qualia is information flowing by certain pathways and reaching certain destinations.

            (Remember, anytime I talk about the cortex, read in that the thalamus and its various nuclei are in the mix.)


          12. All of which is to say, qualia is information flowing by certain pathways and reaching certain destinations.

            Afraid I find this unsatisfactory. My entire enquiry, which began a few years ago, started with “consciousness is information processing”. So what you just said is just the start of the question. What do you mean by information, and how does it flow? What does it mean that it reached a destination? You obviously don’t think just any information processing counts, because that would include reflexes. So what is it about the “certain pathways” or “certain destinations” that makes them different such that the difference makes a difference? And for that matter, exactly what do you mean by information?

Also, do you consider that there could be more than one “agent” in the brain? What do you think about split brain patients? Does the half without access to language still have qualia? Seems like it to me.

            *


          13. Wow James. Did I hit a nerve or something? I’ll answer as best I can, although we’ve discussed most of this before.

On information, I see it as patterns that, due to their causal history, can have causal effects in a system. In other words, it can be thought of as a causal nexus. The scope of the nexus, how much the environment affects it, determines how widely the effects on the system resonate with the environment.

            Reaching a destination means that the state of that destination has been altered by the propagating patterns, with the relationship of that state mapping in some manner to states of the source region, which, if a sensory one, is topologically mapped from the surface of a sensory organ (such as a retina).

            On information counting, it’s not like there is some attribute the information propagating through the reflex lacks. It’s just that the reflex doesn’t directly impinge on the introspection circuits. It does do so indirectly, but by the time it gets there, it’s a representation of a reflex used in action modeling, which we call a feeling.

            What is it about certain pathways? Ultimately it comes down to how they impinge on the deliberation and introspection centers. In other words, it comes down to location and connections. There’s nothing magical about the neurons or the information going through those circuits. It just comes down to what they can influence.

            It’s also worth noting that the reflexes are the raw material on which everything else is built. Perception, affects, imagination, and introspection are just extremely complex reflex arcs.

            On more than one agent in the brain, what do you mean by “agent”? The brain is a massively parallel processing network. It could be considered to have 86 billion agents. Or we could look at various integration regions, such as the midbrain, parietal cortex, and prefrontal cortex. All of these are heavily interconnected, so a case could be made they’re not separate agents, but then no human acts in isolation, so it comes down to how we decide to delineate “agents”.

I do think split brain patients have two consciousnesses, or one consciousness fragmented, with the fragments coordinating as best they can; there is no fact of the matter on which one. The other side has its own sensory stream and so does have qualia, but only for the opposite side of the body. Obviously its stream doesn’t include the language center. (Although typically the non-language side can still say single words. It just can’t string them together grammatically.)

            Hope this helps.
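
To make that propagation-and-destination picture concrete, here is a minimal Python sketch of it; the retina array, the doubling transform, and the function name are invented stand-ins, not anyone’s actual model:

    retina = [0, 1, 1, 0]  # source: a pattern topologically mapped from a sensory surface

    def propagate(source):
        # a wave of changes, one causing the next (spikes generating further spikes)
        return [2 * x for x in source]

    destination = propagate(retina)
    # destination == [0, 2, 2, 0]: no value matches the source exactly, but each
    # destination state maps in a definite manner to a state of the source region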


14. I’ll be interested in the other Smith’s answer as well, James. My perception is that each of you considers information processing to potentially serve as “qualia physics” in itself, which to me seems a bit magical.

If qualia exists by means of processing alone, rather than as the output of a mechanism given associated processing, then to me that would be interesting. What else exists by means of processing alone? Heat? Entropy? I don’t see how qualia could be anything quite that universal.


          15. Wow James. Did I hit a nerve or something?

            Yes, but only in that this seems like an opportunity (for me) to get you to understand things more like I do.

            On information, I see it as patterns that, due to their causal history, can have causal effects in a system.

            I’m going to ask for more rigor in wording. How can patterns, abstractions, have a causal history or causal effects? I have an answer that I think is compatible with what you’re thinking, but I want to see what you say.

            [information] Reaching a destination means that the state of that destination has been altered by the propagating patterns

            What exactly is propagating? I’m pretty sure we can compare the source and the destination and not find any matching pattern. [My answer: the object(pattern) being represented is propagating, although frequently the “destination” is a new representation which has a new object being represented.]

            It’s just that the reflex doesn’t directly impinge on the introspection circuits.

            I’m having some trouble figuring out how neurons impinge directly vs. indirectly.

            What is it about certain pathways? Ultimately it comes down to how they impinge on the deliberation and introspection centers.

            Seems to me like it’s not so much a question of how they impinge but whether they impinge.

            On more than one agent in the brain, what do you mean by “agent”? The brain is a massively parallel processing network. It could be considered to have 86 billion agents. Or we could look at various integration regions, such as the midbrain, parietal cortex, and prefrontal cortex. All of these are heavily interconnected, so a case could be made they’re not separate agents, but then no human acts in isolation, so it comes down to how we decide to delineate “agents”.

            This is exactly my point. When deciding whether something is conscious, i.e., experiencing qualia, we have to be specific about the agent we’re considering. And we have to consider the possibility that when we say that some event is unconscious relative to agent 1, that does not mean it is unconscious relative to agent 2.

            I do think split brain patients have two consciousness, or one consciousness fragmented, with the fragments coordinating as best they can;

Would you say the same for non-split brains? I assume so.

            So it seems that what you require for consciousness is introspection. And the question becomes, what is needed for introspection. How can you look at a system and say: ah, there is introspection.

            Hope this helps.

            Very much so, thanks.

            *


          16. “How can patterns, abstractions, have a causal history or causal effects?”

This is hard to get into without examples. Tree rings are patterns that form from the way the texture of the tree’s wood changes each year. So they are caused by the growth processes of the tree. After the tree has been chopped down, the rings allow us to determine its age (the effect). Another example is a certain chemical given off by a type of predator. Another fish comes along and smells that chemical, and, having learned an aversion to that smell from prior experience, reacts with an escape response (the effect).

            “What exactly is propagating?”

            A wave of changes, one causing the next. In a nervous system, it’s the neural spikes generating further neural spikes. In computers, it’s electricity flows being modified by logic gates and modifying the gates.
            [Urf. Just read your answer and realized you were asking at a different level. I have no problem with your answer.]

            “I’m having some trouble figuring out how neurons impinge directly vs. indirectly.”

            Direct has fewer intermediaries and transformations between them. Indirect has more layers, and thus more opportunities for other circuits to intervene and modify the signals.

            “And we have to consider the possibility that when we say that some event is unconscious relative to agent 1, that does not mean it is unconscious relative to agent 2.”

            Consciousness lies in the eye of the beholder. Most of the time, when discussing it, we’re talking about our consciousness, or in other systems, the “endpoint” or “highest level” consciousness.

“Would you say the same for non-split brains?”

Well, under the fragment interpretation, they’d have a functioning corpus callosum, a high bandwidth connection between the two fragments, which would arguably make them one system. Under the two consciousness interpretation, the consciousnesses would be joined together, merged. (Note: this merging works because, of course, the two hemispheres evolved to function together.) Of course, the fact that split-brain people are able to function as well as they do tells us that the coordination between the hemispheres may be more limited than we’d think.

            “So it seems that what you require for consciousness is introspection.”

In humans, it’s how we separate the conscious from the nonconscious. That doesn’t mean animals don’t have many of the experiences that fall within the scope of our introspection, but they don’t appear to have introspection itself.

            “And the question becomes, what is needed for introspection.”

That’s a good question. Answers vary. Some people are skeptical that much of what we introspect actually reflects any of our cognition. I think that level of skepticism is unwarranted, if for no other reason than that it would require a lot of duplicate computation. In humans, introspection seems to be a deep recursive metacognition added on to our deliberating machinery, which is the main thing it has access to. It also probably leans heavily on a developed theory of mind. Introspection is unreliable.

            “How can you look at a system and say: ah, there is introspection.”

Language is the best indicator. Language requires recursive metacognition, both to produce and to interpret. If language isn’t present, as it obviously isn’t in non-human animals, then it’s much harder. Only primates show clear signs of metacognition, but it’s limited compared to humans. People are always claiming to have found metacognition in other species, but the results are usually open to simpler interpretations. If it exists in non-primates, it’s much harder to detect, and therefore is probably much more limited.

            Back to you!


          17. “How can you look at a system and say: ah, there is introspection.”

            Language is the best indicator. … If it exists in non-primates, it’s much harder to detect, and therefore is probably much more limited.

            I’m not really looking for correlations. I’m interested in causation. For myself, I think introspection, and language, are simply correlated with qualia because they are capabilities that become available under the right circumstances.

Here is my current (pretty much as of this discussion) theory: what makes introspection and language possible is the ability to generate a general-purpose representation, which is a representation available to multiple processes. So a single physical thing (by which I mean a specific defined set or sets of neurons acting over time) which impinges on multiple processes but represents the same object/pattern to each. So these multiple processes could include associating a word with said representation (“tiger”), or generating a memory of said representation, or possibly triggering certain physiological responses if said representation also includes patterns for “out there now” and “looking at me”, or goal-oriented processes such as “don’t move until it goes away” and “make sure John sees the tiger and doesn’t make noise” (leading to the noiseless shushing, i.e., finger to lips followed by pointing in the direction of the tiger).

It is probable that any such system for representing to multiple processes would be capable of representing more than one object, depending on the pattern of activity. Eliasmith’s Semantic Pointer Architecture describes such a system capable of a vast array of representations.

            Note also, such a system could be considered a global workspace.

            *
            [over]
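
A toy Python sketch of the general-purpose representation idea above; every name in it (Representation, the three consumer functions) is hypothetical, and the point is only that a single vehicle serves as input to multiple processes:

    class Representation:
        """One physical vehicle standing in for one object/pattern."""
        def __init__(self, pattern):
            self.pattern = pattern  # e.g. "tiger", perhaps tagged "out there now"

    def associate_word(rep):
        return {"tiger": "tiger!"}.get(rep.pattern)          # language process

    def store_memory(rep, memory):
        memory.append(rep.pattern)                           # memory process

    def select_action(rep):
        return "freeze" if rep.pattern == "tiger" else "carry on"  # goal process

    # "Broadcast": the same vehicle is handed to every consumer process,
    # each taking it as the same object.
    memory = []
    rep = Representation("tiger")
    word = associate_word(rep)
    store_memory(rep, memory)
    action = select_action(rep)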


18. I’m not catching the meaning of your first paragraph. Are you saying that qualia aren’t part of the causal chain that leads to behavior, or to us discussing them?

            Or are you saying that qualia and the rest share a common cause, but that qualia themselves have no causal role in behavior or language? If so, what then is the adaptive role of qualia?

            At a certain level of organization, I have no issue with the rest. I agree it sounds very much like the global workspace.


19. I’m saying that qualia is not a causal property any more than, say, the appearance of an object is a causal property of that object. It would be odd to say the appearance of a bicycle is adaptive (functional), but it would be odder still to consider zombie bicycles that have no appearance.

I guess some might say (have said) that the appearance is adaptive in the way that the appearance of a computer desktop icon is adaptive, but I don’t think that’s correct. The appearance of a computer icon is adaptive because it shares features with the appearance of something else, and we can then associate the functionality with that something else. For representations, the appearance, or qualia, just is what it is.

            *


20. I agree that the appearance of a computer icon is useful because of its resemblance to a real-world object, like a garbage can. But it seems like that resemblance makes it useful because of the usefulness of the appearance of the real-world object. The appearance of a garbage can is useful for identification, so I know it’s a receptacle for garbage, as opposed to the nearby recycle bin, which has a slightly different appearance to clue me in that it should receive different items.

            I know a lot of people see qualia as something in addition to the information, but I don’t. To me, qualia are information. That doesn’t mean they’re necessarily always useful information. The mechanisms that fire to make red vivid, perhaps for spotting ripe fruit, also fire when I’m looking at a sunset. But just because an adaptive mechanism fires gratuitously doesn’t mean it isn’t adaptive.


          21. I guess my point is not that appearance can’t be useful so much as appearance isn’t created for the purpose of being useful.

            I’m going to try this out as my new catch phrase: qualia is the appearance of representation.

            *


        2. FWIW, I think the best candidate for primal consciousness is representation. Everything anyone calls consciousness is combinations of things you can do with representation.

          Regarding levels of consciousness: did you guys see this poster go by in the twitterverse? Most people’s levels go to 5, or maybe 10. When you’re at 10, where do you go from there? Eva de Hullu’s go to eleven.

          Also, notice what’s going on in the right part of the poster. All those levels made up of little [I]—>[c]—>[O] units. Each one a little representation.

          *
          [no consciation without representation]
          [you know life is good when you can make a perfect Spinal Tap reference]


          1. But James, eleven just redefines everything. Yeah, I know. This one goes to eleven. 🙂

            That looks like the diagram you showed me the other day. It seems like a hierarchy specifically of perception, rather than consciousness overall.

            On representations, would you say the representation itself is sufficient for consciousness, or how it’s used? In other words, are you a first order consciousness guy, or a higher order guy? (Or do you reject the dichotomy?)


2. Mike, you wrote somewhere way, way above this comment: “Would you say second order consciousness, introspection, is also a feeling? Or imagined scenarios? In other words, are you using the word ‘feeling’ to refer to all cognition, sensory, and motor activity? Just checking.”

            Mike, first I would say that concepts like “second order consciousness” emit more heat than light and contribute to the notion that there are multiple ‘kinds’ of consciousness rather than multiple kinds of consciousness contents. Introspection is simply “observation of one’s own mental and emotional processes” and is yet another cognitive operation with the story of the Self as its subject matter. All experience of every kind is feeling which is what consciousness is—sentience.

            In my extensive comments in another of your posts (Keith Frankish) about consciousness as a simulation, I wrote that:

            “The world of experience, …, such as vision and sound, colors, shapes, and bodily feelings, indeed all qualia (introspectively available mental phenomena such as the feeling of the color red) is composed of feelings that are the contents of consciousness.”

My specific and easily understood definition of qualia is an example of what is generally missing in the commentary here, with definitions only spottily provided, accompanied by a strange (to me) attitude that definitions are elusive, difficult, and best avoided. Remarks about definitions are generally dismissive of them, as in “… people come up with new definitions all the time. Everyone hopes their definition will become the new and final one.” Oddly, that attitude and the overall absence of definitions doesn’t seem to interfere with lengthy discussions that are impossible to parse, because it’s impossible to discern precisely what is being discussed. Impossible!

            In light of a clear definition of qualia as the contents of consciousness, I find it impossible to understand the claim that “qualia happen in the movement planning parts of the brain, from information that comes in from the sensory perception parts.” And, yes, Mike, there are no non-conscious qualia, by definition. But I can’t understand either your proposal that “Qualia seem to involve circuits that go from the early sensory areas, through the association regions, to the action planning and introspection regions, where they can potentially influence the language center,” or the relevance of that proposal to the feelings that are the content of consciousness.

Consciousness is not information processing, and traveling the rest of this comment thread reveals more dizzying philosophical speculation at its complexifying worst, even bringing in the dual consciousness business that’s not apparent to the subject of a split-brain surgery, but somehow is obvious and incontestable to philosophical onlookers. James likes to focus on ‘representation’ as in “… the best candidate for primal consciousness is representation” but it’s not clear that he’s referring to the contents of consciousness as the simulation of an embodied organism centered in a world, which it obviously is.

I could go on for pages commenting on point after point in this comment thread, but I believe I’ve made the point that we should be trying to understand consciousness. But these discussions rooted in undefined terminology always seem to get hopelessly lost in unnecessary and, eventually, incomprehensible complexity.

            Simple, straightforward and elegant proposals, like Damasio’s “consciousness is the feeling of what happens”, a perspective conformant with Searle’s scientific Biological Naturalism, are ignored. How come? Insufficiently obfuscated? Insufficiently ‘philosophical’?


      8. Someone coming up with a definition won’t change the realities it’s referring to. Only those realities being different would.

        Mike,
Perhaps I wasn’t clear. I’m not merely talking about someone coming up with a definition for consciousness. People do that all the time. I’m talking about a definition which is found to be so effective that it becomes standard throughout the scientific community. For example, today when someone uses the “electricity” term, there isn’t much uncertainty about what’s being referred to. Why? Given the effectiveness of what science has discovered here to date. This is exactly what I believe needs to happen for the “consciousness” term in order to help softer forms of science (such as psychology and neuroscience) become harder forms of science.

There’s an interesting criticism that you made of my own “What it’s likeness” definition for primary consciousness. (Yes Stephen, I do mean “feels like” here.) Note that you criticized this as being hopelessly vague since “…someone else might relate it to sensory experience, or our knowledge of that experience.” Then you went on to mention that you personally believe that no primary consciousness definition will be complete without such an inclusion. Hmm…. So I’m being vague because others tend to burden my quite precise definition with extra capabilities? That makes no sense, of course, but no worries. Let’s now practically explore my non-vague definition to see if its precision may indeed be effective.

        I consider affect to exist as reality’s strangest stuff. This is pure (+/-) value. Indeed, I define it as consciousness itself, though not functionally so. It was probably sometime during the Cambrian explosion that non-conscious organisms were overly taxed as they ventured into more “open” environments. Though potentially armed with all sorts of non-conscious senses, apparently they couldn’t be “programmed” to function with sufficient autonomy. Our robots today seem to face this very same problem — the more open the setting, the more that they fail.

        I propose that affect/ consciousness initially evolved as an extraneous product of other circumstances. Then over thousands or millions of years the experiencing entity was given some ability to alter the output function of the organism itself. Here it would tend to promote what made it feel good and limit what made it feel bad to the extent that it could. Thus the emergence of functional consciousness, or an autonomous second variety of computer which is produced by the first.

Let’s also explore my precise primal consciousness definition from the other direction. Imagine a degenerative disease which attacks your brain. First you lose the capacity for conscious muscle function, and so are put into a hospital. Worried friends and family come to see you, and though you can’t respond, you appreciate their kindness. You’d be worried too! Then the disease attacks your memory. Without an accessible past you’d become an instantaneous conscious entity which thinks, but only in terms of the present and future. Then let’s say that your informational senses get cut off as well — no perceptions of smell, hearing, and so on. At this point you would certainly not be functionally conscious, though I’d still consider it effective to call you conscious in the most basic sense, and this is because there would still be something that it feels like to exist as you. You’d still suffer horrible pain, for example, if your appendix were to burst. It’s this sort of worry which countless people across the world face when they ponder the existence of injured or mentally defective loved ones.

If we ever develop a machine which creates something like “pain”, a primally conscious entity would be created, though not functionally so. I submit that such a definition will ultimately help harden our soft sciences, even though we do not yet know what it is that brains do to produce reality’s most amazing stuff. While I don’t mind people having fun with GWT, HOT, and AST, as I see it the scientists responsible are essentially “trying to discover the nature of electricity by studying a modern nuclear facility, when they should instead be looking to a simple machine which might be outfitted with a hand crank”.


        1. Eric,
          On definitions, I understood what you were saying, but as you noted, people come up with new definitions all the time. Everyone hopes their definition will become the new and final one. But even if someone did succeed, it wouldn’t change the reality on which the hierarchies are based.

On my criticism of “what it’s likeness”, it doesn’t seem like you paid close attention to what I said. But in case I wasn’t clear, my criticism of vagueness was directed toward the overall Nagel phrase, not your definition. The point was that you related it to your definition, while others could relate it to different conceptions.

          My point about your definition of primary consciousness is that it doesn’t match up with how that term is generally used, including by Gerald Edelman, the biologist who coined it. Your definition matches up with what F&M call “affect consciousness”, or “sentience”.

However, I would note that your description of how you see it evolving, with open environments and the like, implies exteroceptive perception. You may not want to call that in and of itself “consciousness”, and I’d have some sympathy with that position, but it seems like a necessary prerequisite for your definition, which matches up with layers 1-3 in my hierarchy.

          All of which is to say, it seems like we’re in agreement here. If there is a point of disagreement, it might involve recognizing the prerequisites.

          On developing machine pain, I think the following would need to be true.
          1. The machine would need to have self concern, or at least a concern that the signal was relevant toward.
          2. It would be unable to turn off the reception or processing of the signal.
          3. Processing the signal would result in an automatic reaction that it either had to engage in or constantly override.
          4. The processing and overriding would require significant ongoing resources, which would stress the system.

          If a machine had those things, then I’d say it was feeling pain, or something similar enough that I’d still be fine with the label.

These attributes exist in animals because of how they evolved. I’m not sure it would be productive to build a machine like that.
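
A minimal sketch, with a toy agent and made-up numbers, of how the four conditions above might look in code; it illustrates the criteria, not a claim that running it would produce pain:

    class Agent:
        def __init__(self):
            self.integrity = 1.0              # 1. something the agent has concern for
            self.resources = 100.0
            self.goal_requires_stillness = True

        def receive_damage(self, severity):
            # 2. there is no code path that ignores or disables this signal
            self.integrity -= severity
            withdraw = True                   # 3. the automatic reaction
            while self.goal_requires_stillness and withdraw:
                # 3. the reaction must be constantly overridden to hold still
                self.resources -= severity * 10   # 4. the override drains resources
                if self.resources < 50:           # mounting stress on the system
                    self.goal_requires_stillness = False

    agent = Agent()
    agent.receive_damage(0.3)  # leaves the agent degraded and depleted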


      9. Mike,
If the “consciousness” term had a definition which was accepted to even half the extent that the “electricity” term happens to be, then I think that at least scientists would have no need for a hierarchy describing what is and isn’t useful to call “conscious” — this definition would tell us. In that case your hierarchy might remain useful in terms of conscious functionality levels however. I suspect that you’re fine with this, as well as yearn for the day that science does achieve a generally accepted consciousness definition. I also yearn for that, as well as the day that good friends (at least) are able to gain a working level grasp of my own such model, and so provide more detailed assessments.

So you did understand my position on “Something it’s like”, and were then criticizing how people might interpret this position in other ways? Well that’s good. We’re agreed that this issue tends to be problematic.

        I realize that my own primary definition for consciousness will not always match up with how people today generally use the term, which should be expected given variability in standard conceptions. And of course there’s still the functionality issue to contend with. For example it’s common to consider normal sleep as something other than “conscious”. From my own model however, sleep may instead be referred to as an impaired state of consciousness.

Regarding my perception of how affect evolved, if your levels 1-3 are invoked, then here we’d be talking about “non-conscious function” under my proposal. Apparently your recent update brought sentience down to level three however, so consciousness would already exist in that one as I define the term. This would require that levels 1 and 2 exist for its evolution.

In any case note that under my model non-conscious life could evolve to have inside senses, outside senses, and memory, or the very thing which is also processed by our robots for output function. Affect/ consciousness would have been the special element to evolve, though it shouldn’t have been functional right off the bat. This dynamic should only have become functional once the non-conscious brain provided it with greater resources in a way that promoted survival. Apparently this mode of function kept evolving to create human consciousness as well.

        On your perception of how to solve the (“truly” as opposed to “not really”) hard problem of consciousness, to me it seems more “higher order” than appropriate.

        1. The machine would need to have self concern, or at least a concern that the signal was relevant toward.
        2. It would be unable to turn off the reception or processing of the signal.
        3. Processing the signal would result in an automatic reaction that it either had to engage in or constantly override.
        4. The processing and overriding would require significant ongoing resources, which would stress the system.

        Note that Peter Martin isn’t just saying this sort of thing, but putting it directly into computer code for testing. Though I’m skeptical, at least he’s trying to get some proof regarding your shared “software based” conceptions of consciousness. As we have with electricity, I suspect that some lower order physics will instead be required in the end.


        1. Eric,
          “Affect/ consciousness would have been the special element to evolve, though it shouldn’t have been functional right off the bat.”

          I’m curious why you are so steadfast on this point. What convinces you so thoroughly that affects can exist absent any functional role? You seem to see them as a great mystery. But it seems like you’re backing yourself into the mysterian corner by insisting that it must be simple and non-functional, then wondering how they can be what they are.

          Myself, I don’t think you can separate affects from action-selection, from the ability to inhibit reflexes. The first proto-affect might have just been an extra circuit between a reflex and, say, a smell circuit. It might have inhibited the swimming motion if food was smelled, or inhibited the feeding action if a predator was smelled. Over time, selection would expand and deepen that capability.

“Note that Peter Martin isn’t just saying this sort of thing, but putting it directly into computer code for testing.”

          I wish Peter the best of luck. I hope he’s focusing on recreating the behavior of, say, a zebrafish, before attempting anything like mammalian, much less human level emotions. It’s not the valence that’s hard, it’s all the interactions.

          On being skeptical of success, it’s worth remembering that no computer could ever beat a human at chess, or Jeopardy, or Go, until one could.


          1. Mike, your:

“I don’t think you can separate affects from action-selection, from the ability to inhibit reflexes. The first proto-affect might have just been an extra circuit between a reflex and, say, a smell circuit.”

            … touches on a thought I’ve been considering.

            But, first, regardless of F&M, I wish you would restrict your use of the word ‘affect’ to refer to emotional feelings as opposed to bodily and cognitive feelings. IMO, ‘affect’ is primarily a psychological term as in this definition from the minddisorders.com site:

            Affect is a psychological term for an observable expression of emotion” and description “A person’s affect is the expression of emotion or feelings displayed to others through facial expressions, hand gestures, voice tone, and other emotional signs such as laughter or tears. Individual affect fluctuates according to emotional state.”

            But, back to the meat I’m proposing—an “evolution of consciousness” idea:

            A ‘reflex’ (which is completely non-conscious although the perceptions of the reflexive stimulus and bodily movements are conscious) can be diagrammed as S ➙ R, where ‘S’ is the Stimulus and ‘R’ is the Response. That reflex pattern is the same as the pattern of the simple story A ➙ B, which, as I explained elsewhere in my Story Engine proposal, is the basic story form that’s committed to memory. An incoming perception of ‘A’ will pattern-match that story, resulting in the ‘expectation’ of ‘B’ which may or may not become conscious.

The insight that intrigued me when looking at these diagrams of Reflex and Story is that the Reflex would be a ‘circuit’ already instantiated biologically in the brains of pre-conscious organisms with nervous systems. As an initial conscious feeling evolved, the reflex neural pattern could be co-opted as a base for the unconscious story processing pattern, allowing ‘B’, as it became a conscious feeling, to influence the ‘R’ reflex outcome, perhaps initially in an inhibitory way.

            Again, this is a (possibly valuable) proposal about the earliest evolution of consciousness. Let me know if I’m not making any sense.
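
A tiny Python rendering of that A ➙ B story pattern, with invented entries; perceiving A pattern-matches the stored story and yields the expectation of B, which may or may not go on to become conscious:

    # stories committed to memory, each in the basic A -> B form
    stories = {"flame": "pain", "predator smell": "danger"}

    def perceive(a):
        # an incoming perception of A pattern-matches the story,
        # resulting in the expectation of B
        return stories.get(a)

    expectation = perceive("flame")  # -> "pain"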


          2. “I wish you would restrict your use of the word ‘affect’ to refer to emotional feelings as opposed to bodily and cognitive feelings.”

            My use of “affect” is for the feeling of the emotion.

            That definition you found strikes me as the definition of “affect display”, which is different. An affect display can happen from purely reflexive circuitry.

            “allowing ‘B’ as it became a conscious feeling to influence the ‘R’ reflex outcome, perhaps initially in an inhibitory way.”

That sounds right. A reflex is normally S -> R, but it could be divided into S -> R1 -> R2, where R1 is the portion that happens on the stimulus and can’t be overridden. R1 includes changes in heart rate, breathing, blood pressure, etc. R2 is the motor action. But R1 also includes signals to action-selection circuitry, which may allow or inhibit R2. I think the reception of the signal by the action-selection circuitry is the affect (or perhaps the proto-affect if we’re talking about the earliest evolutionary glimmers).
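
A rough Python sketch of that S -> R1 -> R2 division, with hypothetical signal and context names; the reception of R1’s onward signal by the selector stands in for the affect (or proto-affect):

    def r1(stimulus):
        # R1: happens on the stimulus and can't be overridden
        # (heart rate, breathing, blood pressure, plus a signal onward)
        return {"autonomic": "heart rate up", "signal": stimulus}

    def action_selector(state, context):
        # reception of the signal here is the (proto-)affect; the selector
        # may allow or inhibit the motor portion
        return context != "must stay hidden"

    def reflex(stimulus, context):
        state = r1(stimulus)                 # always fires
        if action_selector(state, context):
            return "R2: motor action"        # e.g. withdraw the limb
        return "R2 inhibited"

    reflex("sharp heat", "free to move")      # -> "R2: motor action"
    reflex("sharp heat", "must stay hidden")  # -> "R2 inhibited"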


          3. Per that definition of ‘affect’ (and most that I looked at) ‘affect’ == “affect display” == “expression of emotion.” [Me are a computer programmer … ‘==’ means “is equal to”]

            ‘Feeling’ means feeling, a term I’ve gone to great lengths to define by example, but I think we all understand what ‘feeling’ means.


      10. Mike,
        Why am I convinced that it’s possible for affect to exist without any role in a given organism’s function? Great question! There are two reasons. First note that this is the overwhelming standard for evolution. Life doesn’t evolve extra fingers and so on for reasons, but rather does so by means of random mutations. Either they work out eventually given improved survival, or tend to fade away. I certainly don’t consider there to be a teleological programmer saying, “Let’s do [this] in order to help alleviate the issue of [this]”. So what I’m talking about here is quite standard.

        Secondly I consider affect, sentience, or whatever one wants to call it, to be an INCREDIBLY big deal. This is, as I see it, primal consciousness itself, or the essence of what it’s likeness. It’s the stuff which evolution fabricated to eventually create you and me, or entities which can feel quite good, or conversely suffer horrendous misery. Even memory tends to fail us when we ponder the potential extremities of pain itself. In order to sufficiently grasp my point here one might need to do something like take a hammer, and smash his or her thumb! (I felt sympathy for Stephen Wysong recently when he had to resort to such an analogy for potential clarification.)

        I’m no computer guy, but if the sensation of a smashed thumb transpires by means of computer processing alone, then what else happens this way? Above I’ve proposed possible answers of “heat” and/or “entropy”. Otherwise I’m not sure. What else happens by means of computer processing exclusively, when the result of that processing is not hooked up to any output mechanisms whatsoever? Furthermore is it truly possible that code could be written which in itself causes the computer that I’m now typing on, to experience what I know of as “thumb pain”? Or perhaps if John Searle were to look up answers to symbol laden notes to produce other symbol laden notes, could this produce an entity which experiences “thumb pain”? No other physics required? I’ll go with this if I must for discussion’s sake, and it doesn’t actually contradict any of my models, though to me it seems just plain wrong.

Virtually any computer could always beat me at Chess, or Jeopardy, or Go, because these games concern relatively closed environments. But I’d like to see a robot which can do the sorts of things that I specialize in. There are none that can use various tools and materials to build a home which is not too unlike the one that you live in, because this would be an “open environment” task. While evolution was able to build something that could do the sorts of things that we do, our robots seem many orders of magnitude inferior in this respect.

        If it turns out that non-programming based physics is off the table for you regarding the (truly) hard problem of consciousness, then I do still have a question. Could you momentarily set aside your single computer architecture for brain function, in order to ponder my dual computers architecture? Here there’s a vast supercomputer in your head, and it produces a tiny virtual computer that does less than 1000th of one percent as many calculations as the one which produces it. While the big one is fueled by means of neurons, the tiny virtual one may now be said to function on the basis of the affect which the big one produces solely by means of neuron based processing. I’ve been mentioning this model here for a few years. Would this stipulation help encourage you to ponder my model in a practical rather than just lecture level capacity?


          1. James,
            If you’re asking if I see a path for my dual computers model of brain architecture to evolve, well yes, it certainly has evolved. It seems to me that this conception struck me about 15 years ago. I’ve been refining it ever since. These days I think it works pretty well and so would like others to run it through their own paces to see if they also find it useful. (Here I presume that you aren’t asking if I consider brain function to evolve, which of course I do.)

Then are you asking if I see a path for standard neurons to evolve into a computer simulator? I may not understand the question. To be clear, I consider consciousness to exist as a virtual computer by means of the brain, and I suppose that what’s perceived by this second computer exists as a generally effective simulation of what’s actually Real. My current perception is that you’re asking if the main computer itself could become a computer simulator, somewhat like how the computer that I’m now typing on might display simulations of a storm? If that’s what you’re asking then I guess so, though I’m not sure what medium this simulation would be expressed through. A brain screen?


        1. Eric,
“Life doesn’t evolve extra fingers and so on for reasons, but rather does so by means of random mutations.”

          Traits do start as mutations. They are immediately either a net benefit, detriment, or neutral. Note that a trait that does nothing but requires energy is net detrimental. Feelings, it seems to me, have a cost, if nothing else the energy to maintain the extra neural substrate. It’s hard to see them lasting long without earning their keep.

          It is possible they started out as a spandrel, but that’s sheer speculation. It’s hard to see them spreading and developing unless they provided some benefit.

          “Secondly I consider affect, sentience, or whatever one wants to call it, to be an INCREDIBLY big deal.”

          I find it odd that you say this, but then don’t want to examine its biological-value predecessors, or the stages it might have gone through over evolution.

          “In order to sufficiently grasp my point here one might need to do something like take a hammer, and smash his or her thumb!”

          That seems like an appeal to emotion. If someone severed the right connections to my anterior cingulate cortex, I could smash my thumb with abandon and not feel any distress, at least other than the intellectual knowledge that my hand was being damaged. With further damage to certain regions of my parietal cortex, the concept of a thumb could become inconceivable to me, taking care even of that intellectual distress.

          “I’ve been mentioning this model here for a few years.”

          And I have pondered it, in numerous conversations over those years, both public and private. On most of those occasions, I’ve asked you questions, many of which you responded to by saying you don’t do engineering. I’ve given you feedback and criticism, some of which you seemed to take hard. I’ve told you the parts I can accept, and why I can’t accept the other parts. It’s hard for me to do more until / unless you evolve or develop it more.


      11. Mike,
        I’m not so sure that affect would consume all that much energy. This would depend upon the physics of affect, whether computer processing alone as you favor, or something else as I do. What we can be sure of however is that it did evolve, and whether or not it was instantly adaptive.

        I’ve noticed from your twitter feed that Eric Schwitzgebel has an extremely relevant weekend post up. Apparently you and Lee have already commented. I’ve come to enjoy the perspective of this philosopher of psychology.


          1. Lee,
            You mean this?: http://schwitzsplinters.blogspot.com/2019/10/what-makes-for-good-philosophical.html?showComment=1570404620840#c1257169317972110432

According to his post my argument would be an “excellent” one if it ought to move my target audience somewhat. Apparently that was the case for you, but was the professor moved as well? Or should he have been? I’ve noticed that he generally gets back to me within a week.

While support from you seems logical here, I wouldn’t expect him to support the notion that morality exists in us as a social tool from which to advance individual self interests, as well as my position that the strength of this paradigm highly restricts modern mental and behavioral sciences. He’s a card-carrying philosopher of psychology!

If an argument is given calmly and respectfully, I consider there to be a pretty good sign of its excellence beyond just moving someone else’s beliefs: it’s when the argument is met with defensiveness and emotional rebukes. To me this suggests that effective points were made, even if never admitted.

I don’t recall a situation where Schwitzgebel lost his cool. To me this suggests that he’s either extremely objective, hides his biases well, or, perhaps most likely, I simply haven’t seen enough of him yet.


1. I saw the press releases and skimmed the paper, but have struggled to see what exactly is new here. We already knew pyramidal cells, including L5p ones, are a crucial component of long range circuits in the brain, and we know such circuits are crucial for cognition, including conscious cognition.

      There’s an unstated implication that there’s something magical about these particular cells, which I didn’t see justified anywhere. (But since all I did was skim, I might have missed it.) In general, I find implications of magic distasteful, except in fantasy stories.

      I’ve been waiting to see if anyone in neuroscience whose opinion I trust comments on it.


      1. The thesis is that these particular cells are “magical” in the sense that whatever happens outside of them is unconscious.

        “This perspective makes one quite specific prediction: cortical processing that does not include L5p neurons will be unconscious.”

This gets, in a way, to the debate over the role of the brain stem. I could be misunderstanding something since I still sometimes confuse the myriad of brain structures. I think the thalamus sits atop the brain stem, which is definitely associated with wakefulness and consciousness, since damage in that area results in coma. So if we think wakefulness and attention come from that area but the contents of consciousness (in mammals at least) come from the cortex, then these neurons are what join the two together. I can’t seem to find much on the evolution of these neurons, but I find that the cortex itself has some evolutionary roots in the pallium, which in birds may have evolved additional capabilities similar to the cortex’s. Also, the mushroom bodies in insects have genetic affinities to the cerebral cortex.


1. There’s an important distinction between what you said and that snippet. The snippet says that L5p neurons have to be included, not that consciousness only happens within them. Since consciousness depends on widespread communication throughout the brain, and pyramidal cells are the long range cabling between those regions, that statement seems correct, but again that seems like something we already knew.

          The term “brainstem” is a bit ambiguous. Usually it includes the midbrain, pons, and medulla. Most people don’t include the thalamus in that reference. Some don’t even include the top part of the midbrain, the superior colliculus, although the midbrain is an evolutionarily conserved system and considering that portion separately doesn’t make a lot of sense.

          Evolutionarily, the thalamus is a part of the brain called the diencephalon, sometimes referred to as the “tweenbrain” since it sits between the midbrain and the telencephalon (the rest of the forebrain, including the pallium). The telencephalon has always depended on the diencephalon for communication between its components, as well as a bridge to the midbrain. Although being biology, nothing is absolute and there are a lot of independent connections.

          I’ve always said that the cortex can’t be considered separate from the thalamus. You’ll often see it referred to as the thalamo-cortical system, but often you just see “cortex” in casual conversation.


“The thalamus is a small structure within the brain located just above the brain stem between the cerebral cortex and the midbrain and has extensive nerve connections to both.”

I don’t consider what I said in conflict. I didn’t say consciousness resided in these cells, just that they had to be in the loop. Or that was all I meant to say.

    At any rate, it seems like consciousness more likely is a product of the whole system.


      1. Another prediction from the paper:

        “In other words, we make the strong prediction that cortical processing in itself, when not integrated with the NSP thalamic nuclei via L5p neurons, is not conscious. In particular, feedforward cortical processing, where information is mainly flowing within the cortical superficial layers bypassing thalamocortical neurons, is non-conscious.”


        1. They may be right, and yet, what’s bugging me about this paper is that what they’re doing is like looking for the internet in network routers, particularly the ones at major ISPs. You can say that those core routers are essential for the internet to function, and that whatever doesn’t go through them isn’t part of the global internet.

          Yet implying that the internet is in those routers, crucial as they are, doesn’t make sense. The internet is all the functionality in the spokes as much as the hubs, with an overall synergy necessary for our experience of it. And there are crucial services (DNS, etc) and major centers (WordPress, etc) that are major contributors, although none of them will work without those core routers.

          And what they’re calling the “state of consciousness” just seems like the level of arousal, which we could equate to the core routers having power and connections. Of course consciousness requires arousal to work. But we know from patients in persistent vegetative states that you can have sleep / wake cycles with no awareness.

So, at a certain level of organization, I think the authors are right. But their picture seems low level and incomplete. On the positive side, it can be fitted in with the other theories as part of the overall framework.


  7. All of these—HOT, IIT, GWT, AEIOU-T— are philosophical theories concerned with identifying logical organization rather than biological functionality. I believe it’s relevant to ask, “What is the evolutionary story for these philosophical theories?” I think it must be this:

    Some itty-bit of integrated information repeatedly assists in the location of foodstuffs for a completely non-conscious organism, allowing the non-conscious organism to unconsciously reproduce, which, over the span of eons allows a genetic mutation at some point for a little itty-bit more information to be integrated in the brain of an offspring. Over many generations, repetition of this evolutionary sequence results in a sufficient accumulation of integrated information, culminating in the birth of the first conscious organism, whose first thought is, “WOW!”.

    I’d put my money on the evolutionary development of an itty-bitty global workspace. 😉


    1. HOT definitely has philosophical origins, but there are specific scientific versions. I’m not sure about the original GWT, but Dehaene’s global neuronal workspace seems firmly rooted in neuroscience. AST, if I recall correctly, started from a psychological perspective, but is also heavily informed by neuroscience.

      IIT is, well, IIT. It seems to value mathematical formalism above all else. I’m not really a fan.

      On the evolutionary story, I tend to think it all started with adding prediction to otherwise reflexive systems. To me, all the plausible theories are details of that core function. Another way to interpret it is recognition, or the search capability you’ve discussed before.


    2. Still wondering about this “global workspace” notion, which is unfamiliar to my lexicon, among other terms of this kind here. The more I hear it applied, perhaps the clearer the idea will become, and I’ll just keep ‘tuned-in’ to the application as we proceed. Puzzling, but not impossible.


8. Random question, because I am curious. Do any of you think some people doubt or disagree with the ‘full-on’ or pure materialist/physicalist view because of fear? Like fear of nihilism and a meaningless mortality?
    I personally disagree with materialism, but not because I find it too uncomfortable to accept.


    1. Hi Linda,
      I think it’s more complex than just fear, but a lot of people do seem to have an emotional aversion to it. Some of it, I think, comes from a deep intuition of dualism. Most cultures, including hunter gatherer ones, appear to have a concept of ghosts and spirits.

Along the lines of the paper, a lot of this may come down to the fact that our model of the mental is very different from our model of the body, including the brain. The two seem irreconcilable. But we have a lot of psychological work telling us our model of the mental, while very effective in day to day life, isn’t a reliable source of information on the mind itself. But for many people, accepting that is hard. The model is a primal pre-conscious one. It feels more real than anything else.

      That, and the cultural and religious narratives built around it, with promises of an afterlife. Although, interestingly, the oldest Christian narratives were not dualist in outlook, discussing an afterlife involving a full body resurrection. So strictly speaking, dualism doesn’t seem necessary for that outlook.


      1. Thx for responding.
        The only reason why I asked is because of something I came across on PsychologyToday. And it made me think.

        Please ignore all the gender associations in this comment. The whole conversation there is a little weird:

        “The difference is that when males give into cognitive biases, fears of nihilism and own meaningless mortality people actually let them do it and spread the misinformation because we, in the West (other cultures are even worse at that) still live in a very patriarchal society where when the women believe in New Age or spiritual concepts she is labeled as gullible but when the male does it, well, he may have a point! For the record, I am a female.

        I have exposed myself to all of the above mentioned and I strongly believe these are individuals on the spectrum of schizotypy who really want to believe there is more to life than there is due to basic human fears and it so happens that males are much worse at self-awareness and emotional intelligence than females therefore they engage in intellectual escapism, dissociation and self-serving empowering theories.

        * OBEs – Out of Body Experiences
        * NDEs – Near Death Experiences
        * ESP – Extrasensory perception”

        They’re doing whatever it takes to deny nihilism.
        Too many people find materialism too uncomfortable to accept.

        Research in behavioural sciences shows us that human behaviour is predictable if we facilitate a particular situation. There is no evidence for consciousness as in free will, as we are always responding to stimuli, responding to something, that includes reverse psychology behaviour. The only enlightened individuals are those who made themselves believe that they are enlightened which doesn’t mean they actully are. Human behaviour is based on basic principles of power relations and Maslow’s hierarchy of needs. You comprehend these two, you can figure out people’s actions and reactions – sounds like programming/conditioning to me.”

She’s basically talking about the authors and researchers who are interested in the aforementioned ‘self-serving theories’. It’s a long list.
Even though I can’t take 96% of the groups and the people on her list seriously, along with some New Age pseudoscientists that are not on her list (they’re probably not on her list because it’s already long, and she did say she ‘could go on’), I don’t think it’s correct to say every single one of them is an individual ‘on the spectrum of schizotypy and wanting to believe there is more to life’.
        Her whole list is here:

        http://www.psychologytoday.com/za/blog/psychology-yesterday/201306/why-people-believe-weird-things


        1. “… it so happens that males are much worse at self-awareness and emotional intelligence than females therefore they engage in intellectual escapism, dissociation and self-serving empowering theories.”

          I am a male, and I couldn’t agree more with the statement above with emphasis on “much worse”.


        2. Interesting. This is the first I’ve heard of schizotypy, so I have no clue how it might influence people on that list. In general, it’s very easy to see other people’s cognitive biases, and very hard to see our own. All in all, it seems more productive to debate ideas rather than individual people’s psychology.


    2. Here’s my short and spectacular answer Linda:

The solipsistic self-model is driven by a pathology, and that pathology is the innate need for a sense of control. The structural qualitative properties of any given solipsistic self-model are determined by that need, and those properties underwrite what is required for that singularly targeted sensation of control. In order to convince anyone of a world view other than the one a given individual already holds, there has to be a payoff. And that payoff is this: “Does the argument I just heard reinforce the sensation of control which my own beliefs already provide, or does it destabilize the foundations upon which my own beliefs are grounded, and therefore destabilize my own sensation of control?” Control is the driving force of our primary experience, because a sense of control and the sense of self are coextensive as one and cannot be divided.

    3. Linda, I just finished a quick read of the rest of the comments following this one of yours and—Welcome!

      It seems no one has yet asked you why you disagree with materialism and are open to “the possibility of an existence beyond materialism.” I’m interested in learning your reasoning and/or the intuitions that incline you to that view.

      I wrote recently that I believe most people feel like their thinking and emotions are somehow different from their bodily feelings, like touch and pain. They believe that the ‘thinking’ part is their ‘essence’ or ‘soul’ which can conceivably survive the death of the body. Those feelings are reinforced by religious and/or spiritual beliefs and lead to a belief in “existence beyond materialism” as you put it.

      I’m committed to a scientific, hence materialist/physicalist view about the entire universe, ourselves and our consciousness included. I adopted that position long ago, since science seems to be the only way to increase our knowledge of all that exists and ‘understanding’ is my goal. As such, wrt Consciousness, I adhere to philosopher John Searle’s Biological Naturalism which essentially holds that consciousness is a biological production, like digestion. I’ve been persuaded by neuroscientist Antonio Damasio’s writings that consciousness is a feeling, composed of feelings corresponding to every sensory track. “Core consciousness” is the baseline animal feeling of being embodied and centered in a world. Evolution has expanded that fundamental repertoire with symbolic and metaphoric/story processing to yield our human consciousness, called “extended consciousness.” I recommend Damasio’s The Feeling of What Happens if you’re interested in pursuing those ideas.

      For a general introduction, I recommend the textbook-ish Consciousness – An Introduction by Susan Blackmore. Here’s a link to a related EPUB ebook of hers:

      http://pool-71-191-254-98.washdc.fios.verizon.net:8080/get/EPUB/Blackmore%2C%20Susan-Consciousness_%20A%20Very%20Short%20Introduction_10023.epub

      [To read EPUB eBooks on a PC or laptop, download the free Calibre program, which includes an EPUB reader. The Calibre download for all platforms is at: https://calibre-ebook.com/download and the Android ebook reader app I use is Aldiko, with free and $4 “pro” versions.]

      More PDFs and EPUBs than you can read in a lifetime are findable using Google, so that’s where I start looking for resources about consciousness—search for titles, subjects, author names, etc., including PDF or EPUB in your search string. Wikipedia is great for acquiring a quick overview of any subject you’ll see mentioned here. Also try filepursuit.com—it’s amazing what’s just lying around out there in WebWorld.

      So, Linda, to repeat my question: How did you come to disagree with materialism?

      And, yeah, Deepak Chopra—head conductor on the Woo-woo Train, but he knows where the money is … 😉

  9. BTW, I also want to add, just for the sake of it, that I have visited, and still visit, the Consciousness.Arizona site ( which was also on the commenter’s list ).
    Not everyone is a crackpot or a New Age woo-promoter.
    Maybe nowadays, most of them are? Because her comment was from earlier THIS year. Maybe.
    But the last time I checked ( last week ), not everyone there is schizotypal or a delusional ‘nihilism-denying/fearing New Ager’.

    Tbh, the reason I thought the entire site or conference was ONLY cranks and pseudoscientists in the first place was that the name Chopra was in there, and I’d read forums and watched a lot of YouTube debunking/takedown videos about it.
    But I think if I’m going to learn more about consciousness and such, I should read some books on the subject, not rely solely on PsychologyToday, forums, blogs, and YouTube videos.
    YouTube is not a very reliable source for learning this kind of stuff.
    Except for debunking the many cults, cult leaders, scammers, and New Agey people, groups, and notions that many of us already know are fraudulent or delusional and/or can’t take seriously, like Teal Swan ( that woman needs some SERIOUS help. Just google her name ), David Hawkins, The Spirit Science, Desteni, David Icke, Jordan Peterson, John Hagelin, Law of Attraction, and many more.
    But as tempting as it is to just watch a 10-20 min video and say you’ve learned something and call it a day, that’s not the way it works.

    But I do think Chopra and many of the mentioned “New Agey cranks” give serious consciousness researchers, quantum physicists, and people who are interested in it ( I’m one of them. I’m interested ) a bad name and reputation.
    Nowadays, it seems as though people can’t even say, or remotely suggest, that there might be more than just materialism without receiving angry responses and being called, labeled, or accused of being a ‘woo-meister’, ‘on the woo-woo train’, a ‘reality denier’, someone ‘wasting your life’, a ‘pseudoscientist’, and so many more labels and names that I could fill a book.
    But not everyone who’s open to the possibility of an existence beyond materialism is a ‘whiny anti-materialist’.

    Just something I’ve been observing for some time.

    1. The Tucson conference has developed a bad reputation, at least among neuroscientists. They tend to congregate at the ASSC, which is a bit more grounded. I highlighted a debate about AI consciousness from it a few months ago.
      https://selfawarepatterns.com/2019/06/30/the-assc-23-debate-on-whether-artificial-intelligence-can-be-conscious/

      …although I do see some philosophers on the Tucson program, notably Keith Frankish (a materialist) and Philip Goff (a panpsychist).

      On books, I tend to favor neuroscience books, but they skew physicalist, and that doesn’t sound like what you want. That said, I’m currently reading Christof Koch’s new book, ‘The Feeling of Life Itself’. I disagree with Koch on a lot of stuff (he’s a panpsychist), but his writing is very accessible, and you won’t get pseudoscience from him.

      If you’re looking for a very light intro, Annaka Harris’ book ‘Consciousness’ is about as light as they come. But it’s heavily panpsychist.

      1. The other name I see on the conference roster that caught my eye was Carlo Rovelli. He’s a physicist, and presumably materialist. He seems to be going in an interesting direction re: the natural history of “meaning”, although his session seems to be involved with consciousness and time, which would make sense given his most recent book “The Order of Time”.

        Linda, I’d be interested to know what influences you away from materialism. I’m a hard-core materialist, but I promise not to bite.

    2. Linda,
      If you do want to get yourself more straight with this kind of stuff, I recommend continuing to hang out around here. It’s unfortunately rare for admitted women to do so, but Lee’s pretty civilized, as you may have noticed, and so is James. Furthermore, Mike does the vast majority of the responding around here anyway, and you won’t get cheap shots from him. Beyond his posts, I think you’ll find some pretty good stuff to read on his published twitter feed above. I do at least.

      As for me, I’m about as strong a physicalist as you’ll ever meet, though I’ll also admit that in the end this is a product of my own personal metaphysics. It would be stupid of me to say that I “know” that causality can’t fail, as some claim. My position is that to the extent that causal dynamics do fail, nothing exists to discover anyway. So why bother trying to figure out that which has no potential to be figured out? Though I generally consider religious people to evaluate things in terms of “faith over reason” given standard desires to reach Heaven and avoid Hell, in the end my own position reduces to faith as well.

      Let me also give you a bit of unfortunately rare advice for wherever your discussions take you in life. You’ve surely seen Ben Kingsley’s portrayal of Gandhi? In your discussions with others, always first ask yourself how it is that this man would respond. I suspect that you’ll do no better than that.

      1. “If you do want to get yourself more straight with this kind of stuff, I recommend continuing to hang out around here. It’s unfortunately rare for admitted women to do so, but Lee’s pretty civilized, as you may have noticed, and so is James. Furthermore, Mike does the vast majority of the responding around here anyway, and you won’t get cheap shots from him.”

        Thank you.
        My plan exactly. I’ve been lurking here for a while and finally decided to comment. This place does seem civilized.
        I definitely will be hanging around here for a while. I’ve been to James’ site too, which is also cool.
        I rarely find civilized sites nowadays.
        Even PsychologyToday seems to be going downhill a little bit, as shown in the link. I’m actually glad I didn’t say anything to any of those commenters.

        Anyways, sorry for responding so late all the time. I’m ridiculously busy.

      2. Reality does not conform to our own ideas of what we perceive and/or wish it to be; we conform to Reality. That’s where the rubber meets the road. All of our efforts to bring Reality into subjection to our own preconceived biases and will are nothing more than sandbox discourse: a bunch of four-year-olds playing in a sandbox, arguing about shit when we have no idea what we are talking about.

        Here is Eric Schwitzgebel’s response to my post about the solipsistic self-model being driven by a pathology: “Lee: That’s a somewhat darker view than my own. Of course you might say that I only fail to accept the view you advocate because it doesn’t reinforce my sensation of control!”

        I’m a pragmatist, not a dark thinker. As a metaphysician, human nature and psychology are my forte. Religious people (I include materialists and idealists in that category) are disturbed by psychology. Human nature is not dark; human nature is self-serving, just like any other discrete system which makes up the entirety of the universe. Self-interest is what makes the entire expression work, resulting in the diverse novelty of the expression. As much as we as human beings “want to believe” we are exceptional, the inverse is actually the case. Human beings are not exceptional; we are all conditions. Our conscious experience is not an illusion, it’s a condition, and that condition is a possibility of another condition.

        That is exactly what we observe, and that’s what the science of physics tells us. One discrete system is a possibility of another discrete system; it’s called change. Einstein pointed out that no new matter is being created and matter is never destroyed; it merely changes state. Arthur Berndtson said it best when he pointed out that the question of whether a God exists is an inappropriate one. The correct question should be whether a God “ought” to exist. In agreement with Berndtson and Kant, a God ought to exist. According to Kant, it’s called the “thing-in-itself”, which is a well-crafted and useful term. In contrast, the idiom god is a useless word because of the intrinsic baggage that comes along with the term.

        That is my short and spectacular take on the nonsense of sandbox discourse…

          1. “I’m a pragmatist, not a dark thinker. As a metaphysician, human nature and psychology are my forte. Religious people (I include materialists and idealists in that category) are disturbed by psychology. Human nature is not dark; human nature is self-serving, just like any other discrete system which makes up the entirety of the universe. Self-interest is what makes the entire expression work, resulting in the diverse novelty of the expression. As much as we as human beings “want to believe” we are exceptional, the inverse is actually the case. Human beings are not exceptional; we are all conditions. Our conscious experience is not an illusion, it’s a condition, and that condition is a possibility of another condition.”

          That’s funny, because before I came here, I thought I was the only one having these thoughts. 😅
          In PsychologyToday, I came close to responding to that person ( her name is MGar, btw ) and nearly said something similar to what you’re saying.
          BUT, after I noticed that she responded to someone else by saying, “maintain your self-serving delusions. I hope you don’t have children” at the end of her sentence, I didn’t bother, even though I’ve received worse already ( such as “jump off a bridge” ) from the religious.
          And yes, I include materialists and idealists in that category as well.
          I will not be backing away from this blog anytime soon. 😊

          1. Linda,
            I hope the time you spend on Mike’s blog is a productive experience, although you will discover on your own that most contributors are incorrigible materialists, a view which by its own nature results in tunnel vision.

            FYI, our primary experience is a paradigm of control. If you are not one who is driven by the innate need for a sense of control, you may well be an anomaly. Most human beings’ default position reflects Descartes’ “I think, therefore I am.” It’s a defensive position, actually; it’s grounded in the obsession for control, a position which postulates: the only thing I can know with certainty is that “I” exist. According to this paradigm, the solipsistic self-model is the only known in the entire universe. In contrast, those individuals who fall into the category of what I consider to be anomalies postulate: the only thing I can know with certainty is that control is an apparition, the ghost of rationality. According to this paradigm, the self is not the center of the known universe, but merely a condition, and that condition is another possibility.

  10. Well, all you Consciousness Geeks and Freaks, if you’re curious as to how consciousness in the block universe of relativity physics instantiates our immortality, I welcome you to read and comment on the review revision of my paper “Einstein’s Breadcrumbs”:

    https://drive.google.com/file/d/1hKfy2cMxVgdXMo-OeAyFZmoHGTeMltOu/view?usp=sharing

    Einstein called this immortality the “mystery of the eternity of life,” but he wasn’t much into psychology, or William James’ “stream of consciousness” would have removed that ‘mystery’ part. Einstein wrote:

    “Neither can I believe that the individual survives the death of his body, although feeble souls harbor such thoughts through fear or ridiculous egotism. It is enough for me to contemplate the mystery of conscious life perpetuating itself through all eternity, to reflect upon the marvelous structure of the universe which we can dimly perceive, and to try to humbly comprehend even an infinitesimal part of the intelligence manifested in nature.”

    This fascinating conjunction of Cosmology and Consciousness may be discussed at the email address:

    erl.einstein@gmx.com

    Please don’t email to disparage the Relativity of Simultaneity, although if you can rescue Presentism from the challenge of the Eternalist simultaneity at the event horizon of a black hole I would be interested.

    Any discussion about the Eternal Re-experiencing of Life hypothesis (ERL)—my formalization of Einstein’s “eternity of life” proposal—is most welcome and I’ll respond as time permits.

    Perhaps you’ll find the final sentence of “Breadcrumbs” provocative:

    “Welcome to BlockWorld. You are the ride!”

  11. Hello, Stephen.

    “More PDFs and EPUBs than you can read in a lifetime are findable using Google, so that’s where I start looking for resources about consciousness—search for titles, subjects, author names, etc., including PDF or EPUB in your search string. Wikipedia is great for acquiring a quick overview of any subject you’ll see mentioned here. Also try filepursuit.com—it’s amazing what’s just lying around out there in WebWorld.”

    Thx for that. I’ve never heard of Filepursuit. I’ll look into that. I already do go to Wikipedia.
    I hate eBooks, Kindle, and tablet books, though. I prefer just plain ol’ physical books. But I will look into Blackmore and John Searle’s stuff.

    “So, Linda, to repeat my question: How did you come to disagree with materialism?
    And, yeah, Deepak Chopra—head conductor on the Woo-woo Train, but he knows where the money is … ”

    I guess it’s because I’ve been reading some of David Chalmers’ stuff for a bit. I’m a bit of a fan of Chalmers.
    Sometimes I think I lean a bit towards naturalistic dualism.
    I’d actually been exposed to Chalmers’ and other people’s stuff and different views before I started reading up further on the materialistic ones.
    And compared to the other views, I found the materialistic one a little bit limiting ( I don’t know how else to describe it at the moment ), and I came to think that one doesn’t have to hold full-on materialistic views to be scientific, nor does not holding that view automatically make one religious.
    Most of the religions make me cringe.

    However, had I read and only been exposed to Chopra’s stuff BEFORE taking an interest in science rather than after, I think I would have been on the pseudoscientific woo-woo train as well.
    So thank goodness.
    I swear, it’s as if when something even has the word ‘materialism’ in it, Chopra and all of his fans refuse to hear it.

  12. Hi Linda,

    EPUB is an open standard format, as opposed to the proprietary ones from Amazon and others (.azw, .mobi, …). Most folks don’t realize that they don’t own their eBooks; they merely license them (i.e., are allowed to read them and nothing else), and the license can be revoked at any time. It’s a good idea to copy your Kindle and other proprietary eBooks to a separate backup device, since Amazon-like corporations can, and have, deleted them from a user’s PC for whatever they believe is cause.

    I have a lifelong love of reading “dead tree” books, as they’re called. I buy and read several used books every month from abebooks.com … far better than amazon for used selection and prices. Check it out!

    For years I sternly resisted reading eBooks but the sheer volume of engaging no-cost titles findable online encouraged me to find a way. My solution involves a good quality properly-sized Android tablet. I use a Lenovo TAB4 8 with an 8” diagonal screen. The reading app I use is Aldiko, which has a free version and an ad-free $4 “Pro” version. (Pay the bargain $4 and you can install the Pro version on all of your Android devices). When reading on the tablet, I use the “nighttime” setting of white text on a black background, but you can set the text and background colors to other choices. Aldiko lets you adjust the brightness, so I use brighter during daytime and lower the brightness at night. The text size is also adjustable and I use a daytime “regular” size text which I enlarge at night. Very easy on the eyes.

    My tablet’s storage currently holds about 1000 EPUBs (of the 2000 or so that I have on my PC) so I’m not about to run out of readables. I also use Aldiko on my Android smartphone “phablet” (about a 6” diagonal screen) and I now read while waiting in line or waiting (what else?) at a doctor’s/dentist’s office. I’m no longer limited to the bizarre collections of periodicals doctors all seem to enjoy. I have the same 1000 or so EPUBs on my smartphone.

    Chalmers is a philosopher and “cognitive scientist” (whatever that is) and has earned his fame with The Hard Problem, which is only a problem and only hard if you’re a dualist. Pursuing dualism doesn’t provide much, if anything, in the way of confirmation so dualism must be taken on faith. I was raised Catholic so I’m not a big fan of faith-based beliefs. YMMV, as they say, but I bailed on dogmatic religion at about age 15.

    Materialists—perhaps most of the neuroscience community—believe that consciousness is yet another biological phenomenon, like digestion (as Searle points out). Just because we don’t currently know how consciousness works doesn’t mean we can’t eventually find out. As I’ve already mentioned, I’m with them. While I appreciate, and have experienced, profound spirituality, which I consider an essential emotional element of being human, I don’t believe it’s relevant to discovering the truths about the world. Figuring out some answers for “what’s it all about?” has been my driving concern since I was a teenage science fiction addict. Science, rooted in materialism, has been the only wildly successful human activity in history to make progress in supplying meaningful insights and answers and, besides, that’s where we got computers and rockets and smartphones … and TV! 😉

    I hope you’ve been able to take a look at my “Einstein’s Breadcrumbs” paper. I find the implication that we eternally re-experience our exactly-like-this-one lives simply mind-blowing, and its origins in Einstein’s own beliefs rooted in relativity physics mean it’s nowhere near WooWoo Land. It’s written for an educated general audience and I’ve heard it’s nearly entertaining. I’m most interested to learn what you and others think about the case I’ve made for the hypothesis.

    1. Hi, Stephen.

      “Chalmers is a philosopher and “cognitive scientist” (whatever that is) and has earned his fame with The Hard Problem, which is only a problem and only hard if you’re a dualist. Pursuing dualism doesn’t provide much, if anything, in the way of confirmation so dualism must be taken on faith. I was raised Catholic so I’m not a big fan of faith-based beliefs. YMMV, as they say, but I bailed on dogmatic religion at about age 15.

      Materialists—perhaps most of the neuroscience community—believe that consciousness is yet another biological phenomenon, like digestion (as Searle points out). Just because we don’t currently know how consciousness works doesn’t mean we can’t eventually find out. As I’ve already mentioned, I’m with them. While I appreciate, and have experienced, profound spirituality, which I consider an essential emotional element of being human, I don’t believe it’s relevant to discovering the truths about the world. Figuring out some answers for “what’s it all about?” has been my driving concern since I was a teenage science fiction addict. Science, rooted in materialism, has been the only wildly successful human activity in history to make progress in supplying meaningful insights and answers and, besides, that’s where we got computers and rockets and smartphones … and TV! 😉”

      Yes, I’m well aware of that.

      “I hope you’ve been able to take a look at my “Einstein’s Breadcrumbs” paper. I find the implication that we eternally re-experience our exactly-like-this-one lives simply mind-blowing, and its origins in Einstein’s own beliefs rooted in relativity physics mean it’s nowhere near WooWoo Land. It’s written for an educated general audience and I’ve heard it’s nearly entertaining. I’m most interested to learn what you and others think about the case I’ve made for the hypothesis.”

      I’ve only read a tiny bit and then stopped, because I fell ill ( technically my own fault. Too much stress and not enough rest! Not a good idea during cold and flu season ) and I’m recovering. But so far, it seems like an interesting paper. I’ll keep reading.
      I am open to ideas. I just wish more people were open without all of the drama.

      1. Linda, I should clarify that there are credible members of the interdisciplinary “cognitive scientist” community, like Lakoff and Johnson of Philosophy in the Flesh, but I don’t consider Chalmers to be credibly ‘scientific’ with his dualism and his support of the proposal that “philosophical zombies” are “logically possible”. Chalmers is clearly in the philosophical camp where evidence-free propositions are routinely accepted as credible. A philosophical zombie (which is not a “scientific zombie”) is supposedly identical to a conscious human being but lacks consciousness, which is apparently not seen by Chalmers as a disability because the zombie continues to behave precisely as the conscious original does. Although I’m not sure what “logically possible” means, other than the proposal not being self-contradictory, the philosophical zombie is a biological impossibility if the operation of consciousness changes as little as a single molecule in the brain. Considering that conscious control can override and/or alter an unconsciously determined action sequence, the philosophical zombie proposition isn’t credible.

        The ‘drama’ surrounding ideas that you mention is usually a sign of people heatedly supporting emotionally grounded beliefs. As regards the block universe, I’ve encountered considerable drama from those who adhere to a belief in flowing time in spite of the fact that there’s no evidence whatsoever that such a thing exists. The belief in free will—another evidence-free emotional belief—is hotly defended and any threat to that belief tends to arouse an emotional reaction. I’m with you in preferring drama-free discussions.

        The ERL hypothesis presented in “Einstein’s Breadcrumbs” is usually met with denial by those who are emotionally biased against it. The fact remains, however, that no one has falsified either of the two premises underlying ERL—the reality of the block universe and the streaming/flowing nature of consciousness. Presentism and Possibilism, the philosophical alternatives to the Eternalism of the timeless block universe, have no scientific support whatsoever and the movie-like stream of normal consciousness is undeniable.

        I hope you recover quickly, Linda. I’ve found that anxiety and stress are consequences of worry, and worrying is a complete waste of time. Besides, in the block universe the future is fixed, and the “What, me Worry?” attitude of Mad magazine’s Alfred E. Neuman is most appropriate. 😉

        1. “The ‘drama’ surrounding ideas that you mention is usually a sign of people heatedly supporting emotionally grounded beliefs. As regards the block universe, I’ve encountered considerable drama from those who adhere to a belief in flowing time in spite of the fact that there’s no evidence whatsoever that such a thing exists. The belief in free will—another evidence-free emotional belief—is hotly defended and any threat to that belief tends to arouse an emotional reaction. I’m with you in preferring drama-free discussions.”

          I agree. I think people need to calm down a bit, take a deep breath, and take a closer look. But as I always say, new ideas and such are always going to result in drama at first, if it means challenging one’s emotional beliefs that help them sleep at night and/or function during the day.

          “I hope you recover quickly, Linda. I’ve found that anxiety and stress are consequences of worry, and worrying is a complete waste of time. Besides, in the block universe the future is fixed, and the “What, me Worry?” attitude of Mad magazine’s Alfred E. Neuman is most appropriate.”

          Thx. I’m getting better.
