Borderline consciousness?

Eric Schwitzgebel had an interesting paper come out this week, exploring the question of whether there can be cases of borderline consciousness, that is, cases where a system is neither determinately conscious nor determinately non-conscious. For example, maybe humans, dogs, and cats are determinately conscious, rocks and protons are determinately not conscious, but something like a lancelet or sea slug is neither determinately conscious nor determinately non-conscious.

Schwitzgebel stipulates at the beginning that by “consciousness” he’s referring to phenomenal consciousness, Nagel’s “what it’s like” concept. He also makes clear that he’s not merely talking about the contents of consciousness, nor the state of being awake or asleep, both of which can uncontroversially be indeterminate. His focus is on whether the property of consciousness is present.

In that sense, he quickly runs through four possible answers: saltation, indeterminacy, panpsychism, and eliminativism. Saltation draws a line in the sand and insists that there is a sharp distinction between things that are conscious and those not conscious. Indeterminacy admits borderline cases, and is what Schwitzgebel is arguing for.

Panpsychism sees everything as conscious, so the question of what is or isn’t conscious doesn’t come up. He discusses an interesting dilemma that hadn’t occurred to me before. If elementary particles are conscious, as well as collections of elementary particles like brains, then is every combination of elementary particles its own unique consciousness? If so, then the number of particles in the universe is vanishingly small compared to the number of conscious entities. It’s a notion that reminds me of unlimited pancomputationalism and its bizarre consequences.
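To put numbers on that explosion: N particles admit 2^N − 1 non-empty combinations, so the candidate conscious entities outrun the particles almost immediately. A quick sketch of the arithmetic (the function name is just illustrative, not from the paper):

```python
# If every non-empty combination of particles were its own conscious
# entity, entities would outnumber particles exponentially:
# n particles yield 2**n - 1 non-empty subsets.

def candidate_entities(n_particles: int) -> int:
    """Count the non-empty subsets of n_particles particles."""
    return 2 ** n_particles - 1

for n in (3, 10, 100):
    print(f"{n} particles -> {candidate_entities(n)} candidate entities")
```

Even at 100 particles the candidate entities outnumber the particles by a factor of roughly 10^28; at the scale of a universe the mismatch is beyond astronomical.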

Eliminativism sees nothing as phenomenally conscious, so again, the question doesn’t come up. Schwitzgebel is careful here to clarify that what’s typically being denied is a concept laden with presuppositions like immateriality, irreducibility, infallibility, and metaphysical privacy. (Keith Frankish has a fresh paper out clarifying again what illusionists deny.) But, Schwitzgebel notes, eliminativists still tend to use the word “consciousness” to refer to access or functional consciousness, a system using information for cognitive purposes such as decision making and self-report, and so everything he discusses can pertain to that concept.

I’m not aware of many eliminativists who contest indeterminacy, though. It seems pretty implicit once we accept that consciousness is not irreducible. And Schwitzgebel’s initial arguments are in terms of access theories like global workspace or higher order thought, which seem like inherently graded concepts with no sharp distinction. Some theories do raise the question of a phase transition, but he notes that such transitions, when looked at closely, aren’t as sharp as they appear. They just go through their gradations in a very tight timeframe.

Another question is what Schwitzgebel calls the “luminous penny”. A person can have money, as in several hundred dollars, or no money. But technically if they have a penny, then they have money, since a single penny is money. Maybe consciousness comes in penny-like chunks. Having a single penny wouldn’t buy much as far as behavior or capabilities, but that single unit, being so small, can be more distinct. However, Schwitzgebel points out applying this in a meaningful fashion with actual organisms is challenging. And the same issue with phase transitions seems to remain unless we slide into full panpsychism.

In general, any naturalistic account of consciousness seems incompatible with saltation, the sharp distinction.

It’s when Schwitzgebel gets to inconceivability arguments that I think he comes to the core issue. From the inside, it seems impossible to imagine ourselves only being partially conscious. But this is attempting to use consciousness to assess consciousness, in other words, to use the thing that may be barely there to assess itself. He points out that in the very act of attempting to imagine the case, the harder we try, the more we miss our target. He also contests the very standard of conceivability, discussing imaginary numbers as an example: a crucial mathematical concept that most students struggle with because it is unimaginable.

Toward the end of the paper, Schwitzgebel discusses the views of various philosophers. One I found interesting is David Papineau’s concept of consciousness being an indeterminate concept, that it’s indeterminate whether “consciousness” means recurrent processing, a global workspace, or higher order thought, even if all these things exist. This has long been my view. Schwitzgebel labels it “kind indeterminacy” in contrast to the “in-between indeterminacy” he’s arguing for. Kind indeterminacy seems resonant with Jacy Reese Anthis’ consciousness semanticism, although unlike Anthis, Papineau is not an eliminativist.

If you find this topic interesting, I think the paper is well worth your time to read in full.

The indeterminacy of consciousness has long seemed obvious to me. But I say that as someone who is an eliminativist toward versions of consciousness that are irreducible and scientifically inaccessible in principle. When looked at from a third-party perspective and in terms of functionality, it doesn’t seem hard to imagine borderline cases. This seems particularly vivid with various neurological conditions (hemispatial neglect, various agnosias, anosognosia, etc.).

The real difficulty for many people is the conceivability argument. Aside from the counters Schwitzgebel discusses, it seems like we should be clear here on the actual issue. Our introspective judgments on whether or not we were conscious at some point don’t seem to admit of indeterminate cases. It’s worth noting that judgment can come with uncertainty, or be as wrong as any episodic memory. Like much in the study of consciousness, we should be leery of accepting as reality what is much easier to explain as limitations in self-reflective mechanisms.

Unless of course I’m missing something?


46 thoughts on “Borderline consciousness?”

  1. I expect you won’t be surprised that I find this paper to be strong support of Psychule theory. As I tweeted (skeeted?), the psychule pretty much just is the “luminous penny”. As you say, “Schwitzgebel points out applying this in a meaningful fashion with actual organisms is challenging.” Challenge accepted.

    I should point out that the analogy to the penny/money is not perfect, as the only significant feature of money is the quantity. It would be closer to make the analogy to “financial instruments”, the penny possibly still being the simplest, but with really complex financial instruments being possible. The various structures of financial instruments lead to the recognition of “kinds” of financial instruments.

    *


    1. I actually almost left out the luminous penny part, but then I remembered your tweet and figured you’d likely call me out on it. 🙂

      One challenge that occurred to me, that I didn’t note in the post, but related to Eric’s points, is if anyone would actually recognize a one penny system as a conscious one. You probably would, but I suspect that’s more due to your theoretical commitments than any intuitive sense of the presence of a fellow being.

      Along those lines, your financial instrument point is interesting. I do know of contracts for one penny, but they’re basically meant to be no money contracts with the penny there to get past computer system validations, or to comply with some regulation or other legal requirement. OTOH, there was a time when a penny was actually a useful denomination, although I think we’re talking about the 19th century.


      1. Anyone who has a conception that simple life might be conscious would (prolly) accept the psychule/penny. The basic points that speak in favor of the psychule as consciousness are “aboutness” and an explanation of the mind/body dichotomy. But then, most observers, including yourself, are using a very complicated form of psychule system to generate ideas, etc., and so are expecting/requiring something similarly complicated for the explanation of any consciousness. And such explanations run into the exact issues that Schwitzgebel describes.

        *
        [and yes, I would have called you out, 🙂 , and also, I distinctly remember going to the local store to buy “penny candy” as a kid, so, 20th century. 🙂 ]


        1. We’ve probably talked about this before, but what would be an example of a one psychule organism or system? A unicellular organism? Or a thermostat?

          Good point about the candy. I think I remember seeing those, and even once responding to an adult that you could in fact still buy something with a penny. It was probably around the last year that could be said (early 1970s).


          1. A digital thermostat might be an example of a one-psychule system. You might be hard-pressed to find a unicellular organism that has only one. Any surface receptor that generates an internal signal would be part of one, the response to the signal being the other part.

            *


          2. On the unicellular organism, I actually had something like Ogi Ogas and Sai Gaddam’s toy organisms in mind. Although I’m sure there’s nothing that simple in nature since the early Archean, at best.

            So an analog thermostat doesn’t make the cut? What’s missing? And it seems like finding a digital one with only one psychule might be a challenge too.


          3. I’m unfamiliar w/ the toy organism reference.

            What’s missing in the non-digital thermostat, I think, is the symbolic sign vehicle, which is a physical intermediate thing whose only purpose is to carry mutual information. If you see one there, let me know.

            *


          4. Dude, that post was more than a year ago. You can’t expect me to remember anything that far away. 🙂

            Anyway, I read very slowly (which seems to be why I pick out typos), and I’ll admit that Ogas’ book is low on my list. I’m currently reading Nicholas Humphrey’s “Sentience”, and I would recommend it. It’s an easy read, and explains what’s going on with blindsight pretty well. I don’t agree w/ his conclusions (sensation is consciousness, but perception is not) but I think I can explain them in a psychule format. My next project will be understanding context (via Juarrero’s “Context Changes Everything”), and then maybe Grossberg’s resonance book.

            *
            [may be a year or two]


          5. So the mercury would count as a sign vehicle?

            On the year, yeah, sorry. On Grossberg, the Ogas book actually incorporates his theory, and has an accessible summary of it. I own Grossberg’s book, and it’s been on my reading list for a while now.

            I rarely notice typos, unless they’re really bad, which probably shows in my blog posts.

            I actually read about Humphrey’s theory in one of his earlier books, and really didn’t buy it (stumbling over the same issue it sounds like you’re doing). I know this new book is a fresh take on it, and I do have it in my Kindle library, but I’m having a hard time motivating myself to read about the same theory again.


          6. Ack. The mercury would not count as a sign vehicle because it’s the physical properties of the mercury (physically expands a lot in response to heat, conducts electricity) that make it work. You couldn’t just arbitrarily replace it with something else. (And most analog thermostats use a coiled wire, I think, but same difference.)

            On Humphrey’s book, I don’t recommend it for the theory but for the high level explanation of the neuroscience of blindsight (helps me work out the psychules involved), and also because it is an enjoyable anecdotal read. I think you would find it a quick read. I put it into the same category as Seth’s and Goff’s books. Good coverage of parts of the field, w/ additional theories I can pass on.

            *


          7. I guess you could make the argument that the mercury/wire cover the “aboutness” aspect, but pointing out a mind/body separation might be harder. Then again, Schwitzgebel’s indeterminacy could be at play here as well.


          8. On the mercury / wire, the specific physics always seems relevant for a particular implementation within that implementation. You can’t just swap out a neuron in a brain with some kind of technology and have it work with the other neurons, at least without some sort of bridging or interface tech. But in both cases, the functionality can be replaced with different physics (such as a thermistor for a typical digital thermostat), but only if the overall implementation works with that different physics.

            Not sure what you mean with mind/body separation. Is it something different than, say, metabolism/body separation?


          9. The physics of the intermediary, wire or sign vehicle, will be necessary for the implementation, but only the physics of the sign vehicle can be arbitrary. The physics of the wire creates the correlate w/ the temperature of the milieu. The physics of the sign vehicle need not correlate w/ temperature. Only the mutual information of the sign vehicle need correlate w/ temperature. Thus, anything can potentially be the sign vehicle as long as the mutual information is correct.

            By mind/body separation, (I think) I mean that the description of the mind, while determined by the body, makes no reference to anything physical (except maybe at the sensory border). The description of the mind is totally informational, ultimately (but not practically) groundable in the basic operations COPY, NOT, AND, OR.

            *


  2. Will we ever actually pin the concept, threshold and assignment of consciousness down to a cohesive theory? The more I read about this topic the more it feels like the “what is the best pizza?” question. In the end, what does it matter? Hungry? Here, have a slice.

    I do think that “elimination” whirly-doo-dad feels closer to the way I interpret it.


    1. On pinning the concept, who knows where language may go. It seems like people would first have to admit that we in fact have a hazy concept no more nailed down than “love”. But even saying that tends to be met with withering scorn by a substantial portion of the people studying it. And putting the word “consciousness” in your paper title guarantees a lot more attention, even when what you’re studying might better be explained with labels like “attention”, “metacognition”, “declarative memory”, etc.

      As I noted in the post, I’m eliminativist about certain types of consciousness, but not others. Overall, I’m more of a reconstructionist, at least on most days.


        1. Pete Mandik advocates for “qualia quietism” when it comes to qualia, phenomenal properties, and what-it’s-like-ness. In other words, that nothing worth saying is said by using those terms, so the best thing is just not to use them. I’ve mostly adopted that stance, except when talking about other people’s use of them.


  3. “Borderline” isn’t the right word.

    C= {ce1, ce2, ce3, ce4}

    vs

    C= {ce3, ce5}

    The second seems “borderline” with respect to the first because it has fewer conscious nodes, but it actually has one form (ce5) that the first lacks.

    Just different not borderline.

    The idea of “borderline” arises because we think there is a unity – all or nothing – so we have to explain what seem to be the marginal cases. But the “unity” may be a false unity created by the higher-level forms of consciousness that humans possess.


    1. There’s some resonance with that and Schwitzgebel’s luminous penny view. Only in your case, you’re saying the pennies can be scattered around, one in our wallet, another under the seat cushions, one in the car, etc. Of course, to actually buy anything (produce behavior) there does have to be some consolidation.


      1. Yes. More or less.

        “it’s indeterminate whether “consciousness” means recurrent processing, a global workspace, or higher order thought”

        Or none of the above. I would say consciousness is at a much lower level – closer to DNA or proteins – rather than something that requires tightly integrated communication across multiple regions of the brain.

        I don’t see a sharp distinction between higher order thought and phenomenal consciousness. Hearing is quite different from seeing and that is easy to perceive, but it is much harder to perceive that higher order thought is akin to seeing and hearing – another knowing that feels different.


        1. What’s the line of reasoning that leads you to locate it in DNA, proteins, or low level mechanisms in general?

          As always, it depends on what we mean by “phenomenal consciousness” but a lot of HOT theorists would be delighted to hear you equate them.


          1. I have the entire “Fragmented” post to answer that question. There certainly is no evidence of a large center where all processing comes together. There is a lot of evidence of loose integration between different nodes with information going up, down, and around so to speak. So, we are already at a low level with just that. I’m not saying it is DNA or proteins, but those could be involved in the explanation. If consciousness is involved in learning, as I believe, then it must be able to directly affect neuron connectivity. That would require something at a fairly low level that would drive the construction of new connections or at a minimum modify the firing of existing connections. That would have to happen at a cellular level.

            I assume most of the HOT theorists wouldn’t think a wasp has conscious experiences. My view puts what is normally called phenomenal consciousness on par with the higher order functions as to the question of conscious experience. There are likely hundreds of potentially conscious nodes in the human brain. The wasp may have fewer than a hundred.

            This fragmented view makes a lot of sense in the evolutionary context. To expand the capabilities of the human brain, evolution could simply keep adding new nodes every time it needed a new capability and insert it into the network. No need to create specialized wiring so everything routes to a higher order node.


          2. I’m not sure HOT is really compatible with the view you’re laying out here. Of course, like everything else, there are many variants, so one of them might be. But most have a more restrictive view of consciousness, one that wouldn’t include wasps. Usually we’re talking mammals and birds at most, and often more restrictive than that. LeDoux and Carruthers come to mind as having a pretty high bar for seeing something as conscious.


          3. Oh, I know it’s not compatible. I was surprised you suggested a HOT theorist would find anything about my view as compatible. Except for Zeki, I’m not sure there is anyone out there with my view and my view may be even more radical than his.

            You might like this:

            Except for the functions we know are unconscious, basically every functional capability of the brain is broken down into pieces, with each piece having potentially independent consciousness.

            “Neuron activity shows that the brain uses different systems for counting up to four, and for five or more”

            https://www.nature.com/articles/d41586-023-03136-w

            I would argue that number determination, for example, is a form of consciousness and, therefore, there are at least two forms of it.

            ‘Error Neurons’ Play Role in How Brain Processes Mistakes

            https://www.cedars-sinai.org/newsroom/error-neurons-play-role-in-how-brain-processes-mistakes/

            Yep. A specific form of consciousness involved in error processing.

            The brain makes sense of math and language in different ways

            https://ece.umd.edu/news/story/the-brain-makes-sense-of-math-and-language-in-different-ways

            Probably multiple nodes involved in math and language thinking but still different forms/nodes of consciousness.

            Consciousness is very fine-grained even in higher order thought, just as it is in visual processing, where motion, color, edge detection, and so on are processed by different nodes. Consolidation is done by the network, not by specific centers.


          4. Let me try to clarify:

            “Still, there are then two main dimensions along which higher-order theorists disagree amongst themselves. One concerns whether the higher-order states in question are perception-like, on the one hand, or thought-like, on the other. A thought is composed of or constituted by concepts. Those taking the former option are higher-order perception (often called ‘inner-sense’) theorists, and those taking the latter option are higher-order thought theorists.”

            https://plato.stanford.edu/entries/consciousness-higher/

            I would say that “higher-order states” are perception-like and agree with that portion of those “inner sense” theories. However, the perception itself is also consciousness even without higher-order representation.


          5. Thanks for clarifying.

            I’m leery of higher order perception. I wonder why a perception of a perception would be conscious while the first order one isn’t. And the fact that the full first order perception seems to require participation from the parietal and temporal lobes makes it seem dubious that all of that is somehow reproduced in the prefrontal cortex. Higher order concepts have always made more sense to me. They provide actual added functionality, and I think they can provide that inner sense feeling.


          6. “while the first order one isn’t”

            “dubious that all of that is somehow reproduced in the prefrontal cortex”

            I agree. Basically in every case, we have neurons getting inputs from neurons and trying to make some sense of it (or however we choose to describe it). In the first-order case, the primary input neurons are sensory neurons. The fact that higher-order perceptions are several levels removed from sensory input does account for the more abstract feeling of the higher-order perceptions. “Inner sense” does capture for me the fact that neurons are listening to neurons in all cases, but misses the idea that abstract thoughts are also “representational” of external as well as internal reality.

            It certainly doesn’t seem practical that full visual images, for example, would be reproduced in the prefrontal cortex. I would think whatever information comes to the PFC must be more like a smart hash value of the visual image – quasi-random but with a reduced subset of the original processed in the visual cortex. It might be similar to a memory, or possibly even what is stored if it is converted to a memory.


          7. I tend to think what’s in the PFC are potential action plans related to the first order representations. Given that relevant plans change depending on the first order representations, I can even see a lot of covariance between them, which might even look like evidence for representations of representations. Although if I’m right, we’d expect the correlation to be far less than for a repeat representation.


            Certainly some of the outputs of some PFC clusters are going to be triggers for motor actions.

            But you can hopefully see how all of this could work with fragmented consciousness.

            Take the number determination research where they presented dots to people for a brief time period and looked at the accuracy of answers.

            At a 10,000-foot level:

            1- Full image in visual cortex (possibly composite itself) of three dots.
            2- Number determination 4 or less cluster receives input from visual cortex and decides 3 is the number.
            3- Word memory cluster receives input from number determination and arrives at word “three.”
            4- Speech cluster receives input from word memory and initiates action to report three as the number of dots.

            From a subjective standpoint, there is the experience of seeing three dots, then having the concept of three, then the word “three” comes to mind, which we report. It happens so quickly it seems seamless, yet each part is performed by independent clusters with different conscious content. Attention may be priming the sequence of events, possibly down to the level of the visual cortex.
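            The four steps above can be sketched as a toy message-passing pipeline, with each cluster as an independent stage that only sees the previous stage’s output. All function names and mappings here are my illustrative inventions, not part of the cited research:

```python
# Toy sketch of a fragmented, cluster-by-cluster pipeline: each stage
# is independent and receives only the previous stage's output.

def visual_cortex(stimulus):
    # 1- Full image in visual cortex: here, just the raw dots.
    return {"dots": stimulus}

def number_cluster(image):
    # 2- Small-number (4 or less) determination cluster.
    n = len(image["dots"])
    return n if n <= 4 else None  # a separate system would handle > 4

def word_memory(number):
    # 3- Word memory cluster maps the quantity to a word.
    words = {1: "one", 2: "two", 3: "three", 4: "four"}
    return words.get(number)

def speech_cluster(word):
    # 4- Speech cluster initiates the verbal report.
    return f"I see {word} dots"

signal = ["*", "*", "*"]  # three dots, briefly presented
for stage in (visual_cortex, number_cluster, word_memory, speech_cluster):
    signal = stage(signal)  # each cluster works only on the last output
print(signal)  # -> "I see three dots"
```

The point of the sketch is that no stage has access to the whole: the speech stage never sees the dots, only the word handed to it.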

            The number research is interesting from another standpoint. They believe two separate systems are involved, with 4 or less determination vs > 4 determination. Four happens to be the limit of number determination in some insects like bees. So, the 4 or less cluster probably either has been around a long time in evolution or it was independently invented somewhere in the primate line. The 4 or less determination must be relatively easy but with big survival value, since even insects can do it.

            But two different clusters for number determination illustrates how the “functions” of clusters may be quite unlike what we would expect. We likely have a hodgepodge of “functions” that work seamlessly together but are quite unlike anything we would design from scratch. Additional functionality is just grafted on to existing functionality as brains grow in size and adapt to their environments. That argues in favor of a fragmentation of function and consciousness.


  4. It’s a small-brainer: indeterminacy is possible. It’s not a no-brainer, because that would be a case of determinate absence of consciousness 😉

    Almost all of this part is right:

    such transitions, when looked at closely, aren’t as sharp as they appear. They just go through their gradations in a very tight timeframe.

    The likely exception is “just”. Maybe the fact that borderline cases are extremely rare just is what it means for a distinction to be “sharp”. And being extremely brief is one way of being extremely rare.

    What’s the difference between “kind indeterminacy” and “ambiguity”? For example, “water” can mean specifically liquid H2O, or just any H2O. Is water kind-indeterminate? The relationship between recurrent processing, global broadcast, and higher order thought isn’t as neat and tidy as the water(l)-water(any) relationship, but all 3 often happen concurrently and that’s no accident.


    1. On kind indeterminacy and ambiguity, I think they’re synonymous. Although maybe I should just link you to the relevant (and fairly brief) section of the paper: https://link.springer.com/article/10.1007/s11098-023-02042-1#Sec8

      Right, the question is whether there’s any fact of the matter about whether recurrent processing, global workspace, or higher order thought, or anything like prediction, is consciousness. To some degree it’s a philosophical / definitional issue first, and then an empirical one for that particular concept.


      1. Ah, thanks. I guess I think he’s right that phenomenality (can I use that word?) is a single albeit fuzzy property. Specifically, a functional one. That’s one of my two cheers for functionalism.


        1. I’m closer to Papineau (as cited) on this one in seeing phenomenal concepts as vague. I do think we can come up with a functional version; I used one for years on this blog. But it’s often not clear which version someone is referring to, at least unless we already know a lot about their views.


  5. “If elementary particles are conscious, as well as collections of elementary particles like brains, then is every combination of elementary particles its own unique consciousness?”

    As I recall, Orson Scott Card said something like this in one of the later Enderverse books. Every atom has a “soul,” and then it takes a more powerful soul to pull those atoms together to make a molecule, and then an even more powerful soul to pull molecules together to make other things. And then it took especially powerful souls to hold all the parts of a living organism together, with intelligent life requiring the most powerful of all souls.

    It was sort of like a chain of command: one soul commanding a whole organism, with subordinate souls in charge of individual organs, and so on all the way back down to individual atoms. And in the story world of the books, they even found a way to show this soul hierarchy thing is scientifically true.

    It’s an idea that stuck with me as “that would be neat” when I read it, though it’s not a thing I’ve ever taken seriously outside the context of Card’s book series.


    1. That does sound interesting, and like something Card might come up with.

      I can’t remember if I read the Ender series all the way to the end. Was that what they discovered when developing the faster-than-light capabilities? I remember it being very metaphysical, but don’t recall the details. I do remember it being very strange, so that when they tested it, a new version of Ender’s long-dead brother and sister popped into existence, but more based on his mental models of them than on the originals.

      Per Wikipedia, all the above was in Xenocide, so maybe what you’re describing comes out in the last book, Children of the Mind? I don’t believe I ever read it. (Or at least the synopsis doesn’t ring any bells.)


      1. I did read Children of the Mind, so maybe that’s where I remember it from, though I thought something was said about it in Xenocide as well. Anyway, yeah, it had something to do with that FTL system where they step “outside” the universe.


  6. It is certainly a paper worth reading, but I am not convinced. Schwitzgebel reads Carruthers as saying that global workspace theories would support the graded nature of consciousness. Rather, he stresses that phenomenal consciousness is all or nothing: “so long as one retains some degree of intransitive creature consciousness one will have some phenomenally conscious mental states. And no matter how impoverished their contents, it will be determinately like something to be in them”. (The problem of animal consciousness, 2018)
    According to Carruthers, the all-or-nothing nature of phenomenal consciousness is coherent with a global-workspace account of consciousness. Global broadcasting in humans appears to be an all-or-nothing phenomenon, meaning that either activation levels in the neural populations in question remain below threshold, in which case no global broadcasting occurs, or those activation levels hit threshold, and full global broadcasting results.
    Schwitzgebel argues that there is no evidence to suggest “that falling asleep or waking […] is always a sharp transition between conscious and nonconscious, where a wide range of cognitive and neurophysiological properties change suddenly and in synchrony”. He suggests “that in these examples the whole organism is in a borderline state of consciousness”. But this seems to me to be more about creature consciousness than phenomenal consciousness.
    I think that these transitions are unstable states that are not experienced as temporally extended. My suggestion is that the temporal structure of phenomenal consciousness is not continuous in the strict sense, but rather a sequence of instantaneous snapshots in discrete time steps. States that are more transitory than the time intervals are not perceived.


    1. Yeah, I’m not sure what Schwitzgebel was thinking with that citation, particularly since he does earlier cite Carruthers for his views on the inconceivability of phenomenal consciousness being graded.

      Although what exactly Carruthers means by “phenomenal consciousness” is something I wasn’t clear on, since he also considers himself a “qualia irrealist”. He uses the term “nonconceptual content” a lot in his 2019 book, a division he claims is common in the philosophical literature. But I’m not sure how we draw a line between conceptual and non-conceptual content. The examples of nonconceptual content seem conceptual to me, just at a lower level. It seems like it’s concepts all the way down.

      But I do know Carruthers considers global workspace all or nothing. In an exchange I had with him on the Brains Blog, he made a strong distinction between it and Dennett’s fame-in-the-brain view, which he considers wrong. However, from what I’ve seen, the part of global workspace that’s been most vulnerable to empirical pressure has been the all-or-nothing aspect. It’s on much stronger ground when it’s talking about content causally dominating the cortex, but weaker when discussing how it gets there, such as the global ignition mechanism.

      And Stanislas Dehaene himself doesn’t even concern himself with phenomenal consciousness. He makes clear in his book that he’s working to explain conscious access (his name for access consciousness). But he probably sees phenomenal consciousness as qualia.

      But as I noted in the post, it’s a lot easier to see consciousness from the outside (as in creature consciousness) as graded than it is from the inside, but that’s where the limitations of introspection become an issue.


    2. “My suggestion is that the temporal structure of phenomenal consciousness is not continuous in the strict sense, but rather a sequence of instantaneous snapshots in discrete time steps”

      I’m beginning to think that way too. Somebody, I believe, described it as serial. Although I don’t think it is strictly serial, I think it arises in asynchronous discrete firings all over the brain, so sometimes two or more could be occurring at nearly the same time, possibly coordinated by whole-brain oscillations; whereas others would be serial. The underlying unconscious processes are parallel.

