Consciousness science undetermined

An interesting paper by Matthias Michel on the underdetermined nature of theories of consciousness.

Consciousness scientists have not reached consensus on two of the most central questions in their field: first, on whether consciousness overflows reportability; second, on the physical basis of consciousness. I review the scientific literature of the 19th century to provide evidence that disagreement on these questions has been a feature of the scientific study of consciousness for a long time. Based on this historical review, I hypothesize that a unifying explanation of disagreement on these questions, up to this day, is that scientific theories of consciousness are underdetermined by the evidence, namely, that they can be preserved “come what may” in front of (seemingly) disconfirming evidence. Consciousness scientists may have to find a way of solving the persistent underdetermination of theories of consciousness to make further progress.

Michel looks at scientific thought on consciousness in the 19th century.  Interestingly, many of the same debates we have today raged back then too, with people arguing about definitions of consciousness, eschewing metaphysical issues in favor of cognitively accessible ones, and debating whether consciousness resides in the cortex, the thalamus, the midbrain, or somewhere else.

Apparently some scientists in the 19th century thought consciousness might reside in the spinal cord.  Experiments on animals, such as surgically decapitated frogs whose bodies were kept alive, showed those bodies still capable of complex motor responses, such as attempting to rub acid off a thigh with a foot, inviting many to speculate that they retained a “feeling consciousness”.  Judging from the history of spinal cord injuries, scientists back then probably didn’t have many, if any, live patients with severed spinal cords as a data point.

Another debate that goes back to that period is whether consciousness “overflows” reportability.  In other words, are there aspects of consciousness that can’t be self-reported?  We saw an example of this idea recently with Ned Block’s contention that phenomenal consciousness holds detailed images that can’t be accessed for self-report.

Michel’s main thesis concerns how difficult it is to assess many theories of consciousness.  Many seem to carry on even after seemingly being falsified.  An example is IIT (Integrated Information Theory) after it was shown (by Scott Aaronson and others) to indicate consciousness in systems that give no indication of being conscious.  Giulio Tononi simply bit the bullet and asserted that those systems were in fact conscious.

The issue, Michel points out, is standards of detection, such as self-report or appropriate behavior.  If someone has a theory of consciousness, and it indicates a particular system is conscious, but that system fails a detection test, a proponent of that theory can often simply discount that method of detection.

Thus Block discounts self-report as a valid detection standard when it fails to capture the contents he claims are in phenomenal consciousness, and Tononi discounts, well, apparently all methods of detection, when he asserts that a set of inactive logic gates is conscious.

That’s not to say that some detection methods shouldn’t be challenged.  Some scientists assert that fish cannot be conscious because they lack a cortex.  But no invertebrates have a forebrain, much less a cortex, yet many display complex behaviors indicating some level of consciousness.  So ruling out consciousness based on the absence of a particular structure, just because that structure is implicated in mammalian consciousness, doesn’t seem justified.

It seems to me that, for healthy humans, self-report should be the gold standard of detection.  Once we know that a particular activity (behavioral or brain-scan activity) is associated with self-report in humans, we can then test for similar activity in injured humans or non-human animals.  If someone can’t cite a chain of evidence back to self-report, we should be skeptical.

But the detection issues seem like a consequence of a deeper issue, the lack of consensus on what consciousness is.  This isn’t so much about a lack of understanding of what it is, as much as trouble even agreeing what we’re talking about, except in the most vague manners in which it can be expressed.

This is why I sometimes wonder if science isn’t better off focusing on specific capabilities, such as perception, memory, metacognition, etc., and leaving consciousness itself to the philosophers.  But maybe it’s enough as long as scientific theories are clear about which type of consciousness, or which aspects of it, they’re addressing.

34 thoughts on “Consciousness science undetermined”

  1. LOL! Some of what Michel is saying is exactly what I meant when I commented:

    (I wonder if lack of progress with a Theory of Mind, in part, drives some of these fringe ideas, both from frustration with the lack of progress and from the idea that nothing says an oddball idea is wrong, so the door is open. […] So much is unknown with consciousness that it’s a wide open door.)

    I was thinking of IIT when I wrote that. 😀 I quite agree with Aaronson.

    “This isn’t so much about a lack of understanding of what it is, as much as trouble even agreeing what we’re talking about, except in the most vague manners in which it can be expressed.”

    Which I’ve come to think is telling us something about the problem. There’s a comparison here with how I think the difficulty of finding a theory of quantum gravity says something about either GR or QFT.


    1. Just an FYI: Here’s a concession I finally received from a physicist on Goff’s website after a lively exchange. He was being a bully and relentlessly attacking Phillip so I couldn’t help myself…

      Steven Evans says
      July 8, 2019
      Nobody is claiming GR is the “true nature of reality”. In fact, physicists know that the concept of time in quantum theory and that in GR are incompatible, thus the search for a quantum-gravity theory. However, GR did predict and does describe subtle phenomena confirmed by observation e.g. a clock in Earth orbit runs faster than a clock on the Earth’s surface due to the difference in the strength of gravity. (Notice that he was very careful to say a “clock” runs faster and not time itself?)
      Panpsychism on the other hand tells us nothing.

      Lee Roetcisoender says
      July 8, 2019
      I do not and have never challenged the predictive power of GR; it is a very useful tool. Nevertheless, the simple point is this, Steven: panpsychism is not incompatible with physics; in fact, panpsychism is capable of contributing explanatory power which can add new information and understanding to our current model. The phenomenon of gravity itself is the greatest example of panpsychism’s explanatory prowess. My model concisely and succinctly identifies the cause of gravity, and that is something our current model does not do, and it is an achievement that research in quantum theory will never be able to attain. A clear and concise understanding of the actual cause of gravity, one that can be scientifically verified, will open the floodgates for the discipline of physics, which in turn will resolve the mysteries which currently plague us. Some of those mysteries include, but are not limited to, black holes, dark matter and dark energy.
      For the record, neither Phillip Goff nor any of the other champions of panpsychism have a working model, and it is unlikely that they ever will. Nevertheless, this cowboy does. And if and when that bus ever decides to leave town, every swinging dick is going to want to ride. As a lone wolf myself, humility and gratitude have never been two of my stronger suits. My book is the culmination of over forty years of research, and until my work gets published, my discoveries will remain confidential.
      Thank you for your time Steven,
      Lee Roetcisoender


    2. I think this paper has something important to say, which is why I posted about it. Theories like IIT or Block’s conception are problematic. They impress a lot more people outside of neuroscience than in it. But it would be wrong to take this and conclude that all cognitive science is stuck. There is steady progress being made.

      The problem between GR and QFT is empirically driven. We have plenty of evidence for both theories, which are both well defined and precise. The problem is that the regimes where we could break the ties between them or explore their limits are unobservable (at least currently).

      I think the problem with consciousness is different in that it’s a poorly defined mess.


      1. Indeed!

        “We have plenty of evidence for both theories, which are both well defined and precise.”

        I think you might be overstating our understanding of physics in the problem regime. Our inability to test any theories is a problem, but there is also a lack of understanding of how to put GR and QFT together. They are incompatible theories mathematically.

        I think the study of consciousness has these theory problems, plus there is the problem of definitions, as you say. If anything, the study of consciousness is in a worse position than those physics issues due to that.

        We don’t really find anyone in physics fretting over a “hard problem.”


          1. “but there is also a lack of understanding of how to put GR and QFT together. They are incompatible theories mathematically.”

          Definitely. But they’re reputedly the most heavily tested theories in science. We know one or both will eventually need revision. But whatever they’re getting wrong, it’s going to be extremely nuanced, or only show up in extreme conditions (like the singularity of a black hole). This, along with whatever’s going on in quantum mechanics itself, is what I see as a truly hard problem.


          1. Consider the similarities with QFT. The current pictures are subject to interpretation as to what it all means. Both deal with regimes currently out of reach technologically, possibly regimes effectively out of reach of any technology. Both may include some misunderstanding about what’s really going on.


          2. Actually that comparison might have some important parallels. Arguably, our angst about quantum physics is that we keep trying to find some sort of classical metaphor to understand what’s “really” happening, when it might just be that any attempt to apply classical thinking is hopelessly misguided. Our angst about subjective experience arising from physics might suffer from similar issues.

            Hard Problem theories might live in the same realm as QM interpretations: metaphysical speculation to save appearances.


  2. Seems to me a lack of clear thinking generates some of the philosophical muddles around consciousness. The sort of things that are sometimes missed:
    – Consciousness is subjective, so it only needs to exist to the subject in the same sense that anything else exists to that subject.
    – Existence is relative between entities, i.e. they exist to each other to the extent that they can interact.
    – There is a conscious entity A and what it is conscious of B, and a relation between the two such that A is conscious of B. Then we can analyse what form A, B and the relation must take.
    – The nature of consciousness means that B includes a representation of the conscious entity A in relation to the world W external to A
    – Representation of B incorporates pain and pleasure and is ‘thick’ in time (represents past and future states); qualia result.
    ….then develop from there!


    1. Peter, I can track most of the above, but not all of it, so, questions:
      What do you see as the nature of B? Can it be abstract patterns? Can I be conscious of the absence of unicorns? If B is “the absence of unicorns”, how does that interact with A?

      *


      1. JamesOfSeattle (apologies if you see multiple replies, replying doesn’t seem to work from my home PC).

        B represents what we can be conscious of and is held physically in the brain.

        A attends to, reads and writes the content of B.

        B also controls and parameterises the real-time ‘wiring up’ of the brain that turns sensory inputs to motor outputs subconsciously, so it is a representation that is enacted.

        A avoids infinite regress because it uses this same subconscious processing; it just inputs content from B (instead of from senses) and outputs to B (instead of to motor neurons).

        There is no problem with B representing abstract stuff (like the content of this reply!)

        Website link is to my e-book that has pictures of this.


        1. Peter, it looks like the Spam folder was eating your previous replies. If that ever happens again and you don’t feel like retyping, just drop me an email (About page) or ping me on Twitter, and I’ll fish it out.


        2. Peter, I’m trying to get a handle on how you think B actually works. Is B a physical thing that represents an abstraction? Can you physically draw a line between A and B?

          *


            JamesOfSeattle, yes, B is a physical thing representing an abstraction. In practice I expect it is the firing pattern of certain neurons at a particular point in the cognitive cycle (of a few hundred ms, being the time necessary for potentially the whole brain to participate). A is a neural process that translates B now into changes to B at the next cognitive cycle. Consciousness results from the representation B and the process A acting over at least one cognitive cycle, so it is both a representation and the process that updates that representation, taking a finite time to do so.

            The starting point for me is that consciousness is implementable, and a good way in is to separate out the thing that is conscious (A) from the content of consciousness (B), and figure out what the latter (B) needs to include given the nature of consciousness… then to check that back against what neuroscience seems to be indicating.
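
            A toy rendering of that loop, purely to make the structure concrete (the names and the update rule here are illustrative, not a claim about the actual neural implementation):

                # B: the physical representation (a firing pattern, say); A: the process that maps
                # B at one cognitive cycle into B at the next. On this picture, consciousness is
                # the representation B plus the process A acting over at least one cycle.
                def A(b):
                    # toy update: derive a new item of content from whatever B currently holds
                    return b + ["reaction-to:" + b[-1]]

                B = ["percept:red-light"]      # toy initial content of consciousness
                for cycle in range(3):         # three cognitive cycles of a few hundred ms each
                    B = A(B)                   # A reads B and writes the next B
                print(B)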


        3. Peter, our understandings seem very close, so I want to tease out any possible differences.

          1. You mention a cycle. Do you think the brain has timing cycles like a computer? (I don’t see that as necessary.) I think there is synchronization, and my personal conjecture is that synchronization is the mechanism of attention, and that’s how things get into B. See below.

          2. Does A necessarily have to act on (i.e., change) B? Could A simply be responding to B while other (subconscious) processes determine what gets into B?

          Just so ya know, my working hypothesis is that the base unit of consciousness (a psychule) is a process of the form:
          Input (x1,x2,…xn) —> [mechanism] —> Output (y1,y2,…ym)
          where Input is a symbolic sign vehicle (a representation), the mechanism is generated for the purpose of interpreting the sign, and the Output is a valuable response relative to the meaning of the symbol. Output could include changes to the mechanism (state change) and/or changes to the Input (B), but those changes are not necessarily required.

          So, using your terms: B —> [A] —> C.
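
          A minimal sketch of that form, just to make it concrete (the class names and the toy mechanism below are hypothetical, not part of any published model):

              from dataclasses import dataclass, field
              from typing import Callable, List

              @dataclass
              class SignVehicle:
                  """The Input: a symbolic sign vehicle standing in for something else."""
                  symbols: List[str]

              @dataclass
              class Psychule:
                  """One base unit: Input (sign) -> [mechanism] -> Output (response)."""
                  interpret: Callable[[SignVehicle], List[str]]  # mechanism built to interpret the sign
                  state: dict = field(default_factory=dict)      # optional internal state

                  def run(self, sign: SignVehicle) -> List[str]:
                      output = self.interpret(sign)           # response relative to the sign's meaning
                      self.state["last_sign"] = sign.symbols  # optional state change (not required)
                      return output

              # toy usage: a mechanism that responds to a pain-like sign with a withdrawal response
              toy = Psychule(interpret=lambda s: ["withdraw"] if "nociceptive" in s.symbols else ["ignore"])
              print(toy.run(SignVehicle(symbols=["nociceptive"])))   # ['withdraw']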

          I don’t necessarily assign “consciousness” to A, but instead I assign it to the system which includes A.

          FWIW, for the “autobiographical self”, I currently locate B in the thalamus and A as multiple areas in cortex, but I have no evidence other than that the wiring seems right. As for the neural structure of B, you might want to look up Eliasmith’s Semantic Pointer Architecture.

          *


          1. JamesOfSeattle,
            1. Yes, the brain has timing cycles, which are determined by the fastest response time of a neuron and the number of layers of neurons participating in each operation. The frequencies of different aspects of processing determine brainwave frequencies (theta, gamma, etc.). When you come to implement the architecture you see that, in very round numbers, gamma (40 Hz) is the fastest that a couple of neuron layers can run an inhibit/detect-new cycle, and you can do this about 8 times in a theta cycle (5 Hz, i.e. a period of 200 ms), which is also about the time it takes for a flow there and back across the brain if it has about 10 layers of neurons. Therefore consciousness steps through a cognitive cycle of a few hundred ms, which is the fastest you can step conscious thought, and around 8 things fit in working memory (the gamma:theta ratio; see the rough arithmetic sketched after this list). There are interesting correlations too with the maximum intelligible speed of speaking and music, and also with saccades of the eyes, or the clicks of a chicken’s head!
            2. A has to both read B and write to B, otherwise you would have something like ‘locked-in syndrome’ of your conscious thought. The sense-like awareness (read) aspect of consciousness tends to get priority in discussions, but the motor-like deciding (write) aspect is equally important. Putting read and write together, ‘control’ or ‘putting into relation’ is actually a better model than separate read, represent, write, although there is an equivalence. By the way, that’s talking only of the conscious processing of B by A. B is also used and updated in real time subconsciously as we sense and act. That’s why we often battle to keep conscious control of a challenging scenario, as subconscious processing gets there first.
            3. A couple of things I would want to introduce to your A>B>C model. Firstly, valence (pleasure or pain) is key, so we wire ourselves up (e.g. wire up reaching towards the visual location of the chocolate) to maximise valence. Also attention is key (kind of equivalent to variable naming in software); and in consciousness what we read and write are complete chunks or wiring options of A>B>C for reason V (optimise valence). Valence is key because otherwise there’s no reason to do anything at all, or anything in particular.
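
            A rough back-of-the-envelope version of that arithmetic, using the round numbers above (illustrative only, not measurements):

                gamma_hz = 40.0                              # rough gamma rate: fastest inhibit/detect-new cycle
                theta_hz = 5.0                               # rough theta rate, taken as the cognitive cycle
                theta_period_ms = 1000.0 / theta_hz          # ~200 ms per conscious step
                slots_per_cycle = gamma_hz / theta_hz        # ~8 gamma cycles per theta cycle (working-memory span)
                print(theta_period_ms, slots_per_cycle)      # 200.0 8.0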


          2. JamesOfSeattle Tried to reply but spam eater keeps on eating what I have written. Happy to discuss further if you mail me pjm678 at hotmail dot com


        4. 1. Wooosh! [the sound of much flying over my head]

          2. People with locked-in syndrome are still considered conscious. But I agree that activity from A can have effects on B.

          3. I think attention is just a cognitive technique, mostly one that acts via suppression of competitors which would otherwise appear in B. It’s a key component of human consciousness, but is not necessary for “consciousness”. I think valence is just a description of a feedback mechanism. Positive feedback is “good, pleasure”, negative feedback is “bad, pain, suffering”. It’s not something in and of itself. For the most part, it is sequelae, follow-up processes.

          *


          1. JamesOfSeattle
            1. In short the brain needs to manage its timing cycles carefully because the fastest a neuron can fire is really quite slow (100 times a second) compared with the rate at which we need to control our behaviour.
            2. Yes, I agree you could be considered conscious if just aware of senses, but if you were just a watcher unable to decide or assert anything, something significant would be missing from your conscious existence.
            3. I see very strong roles for attention and valence, quite fundamental to understanding consciousness. In the case of attention, this is because it enables us to refer to things. I think this might relate to the nature of meaning, and of syntax versus semantics (which I think are part of your theory). Certainly I would say attention is the twist in the loop of consciousness that makes it unusual; and it is much more than just filtering or spotlighting.
            Regarding valence, it is quite fundamental to consciousness that we can determine the best course of action for the organism as a whole. Therefore, at the moment at which we choose what will maximise valence, our whole-organism choices are made that determine our future.


      2. SelfAwarePatterns, yes I read your blog post and agree with almost all of it. The place where I would slightly diverge is that I think it is possible to clearly define information architecture features that are necessary for a thing to consider itself conscious. Then we can look for those features (i.e. do a ‘white box’ rather than ‘black box’ analysis of the entity) to determine whether it is conscious and the sort of things its architecture enables it to be conscious of.

        That still leaves the problem that if I claim I have built something conscious using such an information architecture, you have to buy my architectural arguments; there’s no other reliable way to convince someone it is conscious.


        1. Peter,
          I agree that there’s an objective information architecture for any specific version of consciousness. The trick is getting consensus on anything more specific than ambiguous phrases like “subjective experience”, “phenomenality”, “something it is like”, etc. It’s a lot easier to talk about architectures for exteroception, interoception, imaginative simulations, metacognition, etc.

          But your comment about information architecture indicates we’re mostly on the same page in this area!


  3. I think you are right to say that specific capabilities are the better subjects of investigation.
    If you are finally trying to discover the neurologic correlates of self-report, then you have to rely on self-report for both the basis of your theory and the observations which verify it.
    That is a big problem if you think that our reported experiences are transparent to us.
    Each report is then presumably infallible, and you have to account for every single one, somehow.
    For example, if you are testing the theory that ‘Pain = C-fiber activation’, one of your subjects may report a ‘squeegie feeling’ rather than pain, when his C-fibers are activated. You are then stuck with explaining what that is, because otherwise your theory is just a theory of some pain, with an unlimited expanse of uncharted territory lying beyond, and your thesis, ‘Pain = C-fiber activation’ is arguably disproven.
    But, you have an out. You can simply backfill into the expanse.
    The best example of that option is Descartes’ defense of dualism against neurologic correlates.
    When faced with alcohol or Phineas Gage, he could simply counter that the mind was like a musician faced with a broken instrument in those cases. It was unable to demonstrate its integrity, because its activities are opaque to independent outside observation in principle, and its capacity for self-report is superficially impaired.


    1. I think we have to regard self-report as data, but not necessarily reliable data. The trick is to amass enough of it, correlated with other data, in order to reach reliable conclusions. No stimulus or behavior will always rise to the level of being reportable in every test subject. And some fraction of test subjects will report things that aren’t there. Volume smooths it out into statistical patterns.

      But you’re right. You can always find a way to save a theory. But it comes at a cost. Sometimes the cost is minor, such as doubting a particular detection method. Other times it requires large alterations to our understanding of reality, such as Tononi saying any sufficiently integrated system is conscious. In the case of defending Cartesian dualism, the amount of neuroscience that has to be denied is large and steadily increasing.


      1. Well, as you have said, data (observation) is theory-laden.
        In this case, the theory coloring the data (the experiential reports associated with C-fiber activation) is really the hard problem.
        So, you won’t eliminate the hard problem with reliable reports. There will always be that no-man’s land where multiple realizations lurk and paranoid delusions are not easily distinguished from revelations 🙂
        Someone will always be able to question the veracity of an artificial consciousness’ self report – “Yes it says it has a pain, but is it just manifesting that verbal behavior on a statistical norm or is it really self-reporting a subject-making experience, you know, the kind with flavor.”
        Then you are back to a Turing test, albeit one that the construct may win hands down.
        I have never been sure whether or not you were OK with that.
        By the way, substance dualism fails on a metaphysical basis – the separation of substances is not sustainable (a musician requires an instrument) – rather than on the strength of competing theories under monism. Those leave the path open to endless objections, however incredible.
        I think the hard problem also requires a metaphysical solution.


        1. “There will always be that no-man’s land where multiple realizations lurk and paranoid delusions are not easily distinguished from revelations ”

          That’s true. But I think with enough data, we can push it into “problem of induction” territory, a philosophical problem we just work around.

          “Then you are back to a Turing test, albeit one that the construct may win hands down.
          I have never been sure whether or not you were OK with that.”

          I think we have no choice but to be. As I’ve noted many times, consciousness lies in the eye of the beholder. I think that was the chief insight of Turing’s test.

          “I think the hard problem also requires a metaphysical solution.”

          You’re probably right. A metaphysical solution for a metaphysical problem. Of course, do metaphysical problems ever really get solved? At least other than, like atomism, being successfully converted into physical ones?


  4. All discussions of consciousness are ‘underdetermined’ – our abilities to conceptualize will probably never allow us to fully understand ourselves, even if there were enough information available to allow for such a conception. Language forces us to reduce things to manageable concepts. As an example of the complexities, consider C. elegans. Hermaphrodites have 302 neurons, the male 385. The connectomes – the myriads of junctions of each neuron of the entire organism – of both were recently published, presenting us for the first time with the opportunity to study the basis for their rather complex behaviors: searching for food, sex, memory, even ‘dancing’. H. sapiens has about 100 billion neurons with trillions of connections, and that is only a small part of who and what we are. Thus, if C. elegans exceeds our understanding, what hope is there that we will understand ourselves?

    At this point in time it seems that we will never recapture the events that brought us here:
    “Everything is the way it is because it got that way.” (me informally quoting Daniel Dennett quoting D’Arcy Thompson) We have little insight into what we are, and even less into how we got here. I fully support our efforts to try to find out, even though the goal may be unattainable. We will certainly learn a lot along the way, allowing us to eliminate some of our many mistaken ideas.

    In the meantime, we are entitled to inspired guessing: our universe is an intelligent system of intelligent systems that continually evolves. Time is real. We are now for the first time contemplating the whole in an organized scientific fashion.


    1. I think we can look at this at two levels. In terms of physics, chemistry, biology, neuroscience, etc, I definitely think we’ll be able to learn and understand ourselves. We already know far more than anyone before the modern era could have conceived possible. Some of it may seem hopelessly daunting right now, but much of what seemed hopelessly daunting to people 400 years ago is commonplace today.

      But there is also a metaphysical level where we might never achieve a full understanding. I fear the people concerned about the hard problem may never see it solved to their satisfaction, at least in any authoritative manner.

      But we’ll never know so much that we don’t have a knowledge / unknown border. We can only understand things in terms of their lower level constituents, constituents that we may not yet understand. If we ever do understand those constituents, they will be in terms that we won’t understand, at least at first. Eventually we may hit brute facts of reality that we may never understand. What is energy? What is spacetime? Even if we find the answer, we’ll have to start over with the next layer below them. Turtles all the way down!


  5. I have held the position for some time now that we do not know enough to conclude anything with regard to consciousness … yet. Of course, these things have been discussed philosophically for ages, but we are just now in possession of tools that might give us some answers to some of these questions. Rushing to judgment is as likely to move us forward as to have us run off of a cliff.


    1. I think it’s okay to form theories, as long as we’re willing to test and revise them on new evidence. The theories we have will almost certainly not be the final ones (if it’s even possible to ever talk about “final” theories in science), but that doesn’t mean they might not be useful. I don’t think GWT, HOT, Attention Schema, or any other theory is the one and final answer, but they all seem predictive to some degree. And I doubt there will ever be only one answer anyway.


  6. Yes, consciousness science is seriously underdetermined, for now. But, well, tough problems are tough. So this is news?

    Mike, you seem to believe in a hard and fast line between verbal questions and empirical questions, and you also seem to think we can already tell that some specific questions about consciousness are verbal questions. I don’t think either one of those things is true (even if we rewrite the last bit as “very-largely-verbal questions”).


    1. Paul, I’m not sure what you mean by “verbal questions”. But if you mean reasoning vs empiricism, then I always see it as a mix. There is no empiricism without reasoning. All observation is theory-laden.

      But every step in reasoning is subject to error, errors which can accumulate. Empirical observations act as a reality check to ensure we haven’t drifted too far. Reasoning that goes on too long without the reality check becomes increasingly less grounded speculation.

      Or do you mean something else by “verbal question”?


      1. You got it: reasoning/empiricism, analytic/synthetic, verbal/empirical; all closely related. OK, good to know. (I was emphasizing the “there is no reasoning without empiricism” side of the coin.)


