Hard criteria for theories of consciousness?

(Warning: consciousness theory weeds.)

A new paper in the journal Cognitive Neuroscience, “Hard criteria for empirical theories of consciousness,” takes a shot at proposing criteria for assessing scientific theories of consciousness.  The authors make clear at the beginning that they’re aiming their criteria at empirical theories, rather than metaphysical ones.  So they make no attempt to provide criteria for straight metaphysical theories like panpsychism, property dualism, idealism, or similar notions.

The first set of criteria involves explaining what they call “paradigm cases of consciousness and the unconscious alternative”.  That means addressing all the standard empirical data, such as the results of visual masking perception tests or binocular rivalry experiments.  It’s not enough to address just the conscious part of these cases; the theory must handle the unconscious portions as well.  For example, when masking prevents a person from consciously perceiving an image, the theory must account for what happens to the masked stimulus.

Most major theories do address these cases, but a notable exception is Penrose’s Orchestrated Objective Reduction theory (Orch-OR).  Orch-OR really isn’t widely accepted in the scientific community, but it’s included here as an example of a theory that doesn’t address them.

(One point to note about this criterion.  The authors assume these cases provide evidence for consciousness.  But it’s worth noting that what is actually being measured is subject report.  Even in no-report paradigms, it amounts to delayed report, or to behavior or activity previously associated with report.  Many don’t want to acknowledge it, but the scientific study of consciousness is ultimately the study of report.)

The second criterion is the unfolding argument.  I covered this a while back.  Causal structure theories, such as Integrated Information Theory (IIT) or Recurrent Processing Theory (RPT), equate consciousness with a particular causal structure, such as recurrent processing.  But the same input-output behavior can be produced by an “unfolded”, purely feed-forward version of the neural network in question, making this stipulation of the theories untestable.
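
To make the unfolding point concrete, here’s a minimal sketch of my own (not from the paper; the sizes and weights are arbitrary illustrations): a toy recurrent network, and a feed-forward network built by copying its weights into separate layers, produce identical outputs, so no input-output experiment can tell recurrence from mere depth.

```python
# Hypothetical toy, assuming numpy: a recurrent net and its "unfolded"
# feed-forward twin are behaviorally indistinguishable.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input weights
W_rec = rng.normal(size=(4, 4))  # recurrent weights
W_out = rng.normal(size=(2, 4))  # readout weights
T = 5                            # time steps / layers

def recurrent_net(xs):
    """Recurrent: the state h feeds back into the same four units each step."""
    h = np.zeros(4)
    for t in range(T):
        h = np.tanh(W_in @ xs[t] + W_rec @ h)
    return W_out @ h

# "Unfold" it: T independent feed-forward layers, each holding its own
# copy of the weights. Activity only ever flows forward, never back.
layers = [(W_in.copy(), W_rec.copy()) for _ in range(T)]

def unfolded_net(xs):
    h = np.zeros(4)
    for t, (Wi, Wf) in enumerate(layers):
        h = np.tanh(Wi @ xs[t] + Wf @ h)  # h arrives from the previous layer only
    return W_out @ h

xs = rng.normal(size=(T, 3))
assert np.allclose(recurrent_net(xs), unfolded_net(xs))  # identical behavior
```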

The third criterion covers the small and large network arguments.  Many theories, if taken literally, can be implemented in trivially small networks of, say, a dozen neurons.  Some theories, like IIT, explicitly accept this, but others, such as Global Workspace Theory (GWT), Higher Order Thought Theories (HOTT), and Predictive Processing Theory (PPT), don’t.  Accepting the small network situation makes a theory panpsychist in nature.  The authors argue that theories subject to the small network argument that don’t accept panpsychism are effectively incomplete.
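
To see how literally the small network claim can be taken, here’s a deliberately silly toy of my own (not from the paper or from any GWT proponent), sketching the bare “workspace” mechanism in a 12-unit network:

```python
# Hypothetical 12-neuron "global workspace": three 4-unit specialist
# modules compete for access, and the winner's content is broadcast
# back to every module. Read literally, nothing in this skeleton says
# why a network this small shouldn't count.
import numpy as np

rng = np.random.default_rng(42)
module_weights = [rng.normal(size=(4, 3)) for _ in range(3)]  # 3 modules x 4 neurons

def tiny_workspace(stimulus):
    modules = [W @ stimulus for W in module_weights]  # each module processes the input
    salience = [np.abs(m).sum() for m in modules]     # each module's bid for access
    winner = int(np.argmax(salience))                 # competition for the workspace
    broadcast = modules[winner]                       # winning content goes "global"...
    return [m + broadcast for m in modules]           # ...modulating every module

states = tiny_workspace(np.array([1.0, -0.5, 0.2]))
```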

I’m not sure this is a fair criticism of GWT, HOTT, or PPT.  These theories aren’t meant to be standalone, but to supplement neuroscience models.  IIT, on the other hand, does claim to stand alone, being prepared to label a rock with high phi as conscious.  But the others are meant to be interpreted as explaining why brain-like systems are conscious.  Judging them outside that context seems strawmannish to me.

A close corollary of the small network argument is the large network argument.  A large network is composed of many small networks.  If each of the small ones is conscious, the large network is left with a combination problem to explain.  (Similar to the well-known issue for panpsychism.)

The last criterion is the “other systems argument”.  What does the theory say about the consciousness of systems other than awake humans?  The theory should be generalizable to other systems, or provide a strong argument for why only humans can be conscious.  Some of their tags here don’t fit with what the proponents actually say: they flag Graziano’s Attention Schema Theory as not adequately addressing this, yet in his writing, Graziano explicitly speculates about AI systems having their own attention schemas.

I actually thought the unfolding argument was a special case of the other systems argument.  I’m not sure why the authors kept them separate.

None of the theories emerge from these criteria unscathed, as this table from the paper shows:

[Table from the paper, color-coding how each theory fares against each criterion.]

The conclusion section makes this point (ToC=theory of consciousness):

Maybe the plethora of ToCs simply reflects the fact that we have too few experimental constraints. It is possible that with more data and a more detailed view of the subprocesses of consciousness, the mystery will evaporate, similarly to what happened with the discussion about the ‘nature’ of life. Nowadays biologists understand what life is, but there is no ‘theory of life’ (Machery, 2012). It is the entirety of subprocesses such as homeostasis, reproduction, etc., that differentiates life from non-life.

As someone who has done his share of reading in both the consciousness and neuroscience literature, this has been my suspicion for some time: we will never have just one theory of consciousness, as though consciousness were a single objective thing in the brain.  Similar to life, we’ll eventually have an understanding of the complex array of processes and capabilities that make up our phenomenal experience, and that provide the intuition that other systems share it, but there won’t be just one theory for the whole thing.

That’s not to say many of these theories can’t provide insights into aspects of consciousness, what the authors call “subprocesses”, but we shouldn’t expect any one of these theories to be the whole story.  By themselves, they will always predict either too few or too many systems as conscious.

Unless maybe I’ve overlooked something?

57 thoughts on “Hard criteria for theories of consciousness?”

  1. What makes life a little different from consciousness is that the boundary between life and not-life doesn’t seem as fuzzy as the boundary between conscious (system) and not-conscious (system). More strikingly, we generally don’t think of machines as life, and we certainly don’t think of software as life (except in a metaphoric sense). We’re pretty clear that a numerical simulation of life isn’t life.

    A question I have is, from a category view, is consciousness a bigger category than life? (Suggested by the seemingly more clear-cut boundary between life and not-life compared to consciousness.) How much are they alike as categories, and to what extent is the comparison meaningful? Can we approach a more clear-cut definition of consciousness? Is the line between organic and machine meaningful here? What about the line between physical implementation and numerical simulation?

    If numerical simulations can be conscious, then it seems consciousness is the bigger category. And yet consciousness is a subset of life… 🤔

    1. Life does have its own fuzzy boundaries with people arguing whether viruses, viroids, or prions count as living. I actually once shared an article by someone claiming that life doesn’t exist.

      Why is defining life so frustratingly difficult? Why have scientists and philosophers failed for centuries to find a specific physical property or set of properties that clearly separates the living from the inanimate? Because such a property does not exist. Life is a concept that we invented. On the most fundamental level, all matter that exists is an arrangement of atoms and their constituent particles. These arrangements fall onto an immense spectrum of complexity, from a single hydrogen atom to something as intricate as a brain. In trying to define life, we have drawn a line at an arbitrary level of complexity and declared that everything above that border is alive and everything below it is not. In truth, this division does not exist outside the mind. There is no threshold at which a collection of atoms suddenly becomes alive, no categorical distinction between the living and inanimate, no Frankensteinian spark. We have failed to define life because there was never anything to define in the first place.

      https://selfawarepatterns.com/2013/12/04/why-life-does-not-really-exist-brainwaves-scientific-american-blog-network/

      I didn’t buy into that idea for the same reason I don’t buy into consciousness illusionism, but like illusionists, he makes a good point. It seems to me that eventually the distinction won’t be between life and non-living systems, but between evolved and engineered ones, with hybrids in between.

      On numerical simulations, I think in principle it is possible for a virtual entity to be alive. Obviously from our perspective it wouldn’t be physically alive, but from the perspective of virtual prey, virtual predators are predators.

      Whether consciousness is broader than life depends on which definition we’re using. I think naturally occurring consciousness is an aspect of certain types of complex life, just as naturally occurring pumps, respirators, and motor systems are aspects of life. But we can build technological versions of pumps, respirators, and motors. So we’ve broadened them beyond life. (Although it’s interesting that a typical dog seems to think cars, lawn mowers, and other contraptions are alive, at least when they’re on and in motion.)

      Nervous systems can be viewed as naturally occurring information processing systems. And we’ve already demonstrated that at least the lower level functionality can be reproduced in technological systems. The question is whether we’ll encounter something that stops us from going all the way to a system that reliably triggers our intuition of a fellow consciousness. Obviously with my deflationary functionalist view of consciousness, I strongly doubt we will.

      1. “Life does have its own fuzzy boundaries with people arguing whether viruses, viroids, or prions count as living.”

        Yes, but consider what a small subset of all life that is. Viruses and viroids are in the same taxonomic branch, and prions aren’t all that different from viruses.

        “It seems to me that eventually the distinction won’t be between life and non-living systems, but between evolved and engineered ones, with hybrids in between.”

        Maybe. I suspect we’ll always draw a line between organic (i.e. living) systems and machine ones. I think there are some fundamental differences between organic systems and metal-plastic ones.

        “Obviously from our perspective it wouldn’t be physically alive,”

        Obviously. 🙂

        “But we can build technological versions of pumps, respirators, and motors.”

        Which are physical replacements for their physical counterparts. We’ve long agreed a “Positronic” brain seems likely to work. (The divisive issue is whether numerical simulations will. We can create excellent numerical simulations of pumps and motors, but they don’t pump or drive anything.)

        “Although it’s interesting that a typical dog seems to think cars, lawn mowers, and other contraptions are alive,”

        I’m not sure that’s true. They’re certainly alarmed at first. When Sam was a pup and we were out for a walk, she refused to go past a house where a guy was out mowing his lawn with a push power mower. But after some exposure, it was never a problem. (I trained her to be fine with thunder and fireworks.) Likewise, her first car ride as an aware, active pup freaked her out so much I had to pull over and let her out. But she came to love going for a ride, so I think that initial panic was motion-based.

        Scent is for dogs what sight is for us, and machines don’t smell like life, so I’m not sure what dogs make of them. As your quote points out, humans categorize (and I disagree with the quote because “life” is just as valid a category for us to define as “bricks” or “galaxies”).

        “The question is whether we’ll encounter something that stops us from going all the way to a system that reliably triggers our intuition of a fellow consciousness.”

        Well, again, we agree about “Positronic” brains. Per your “deflationary functionalist view” the question almost is rather why wouldn’t they work. But a numerical simulation of such a system is a whole other kettle of fish.

        1. I don’t know that I would consider viruses to be a small subset of life. They may be in terms of raw biomass, but I read something recently that there are something like 10^31 viruses on Earth. They appear to be everywhere life is, may have evolved from the same pre-cellular replicators that cells evolved from, and are a major evolutionary force through horizontal gene transfer and modification of the DNA of infected cells.

          On drawing a line between organisms and machines, I guess it depends on what you think life is. I think it’s just an evolved system. Life began at the molecular level and spent billions of years refining its molecular machinery before complex life arose bottom up. We shouldn’t be surprised that it’s far more sophisticated than the systems we’re building top down. But as we become more accomplished with nanotechnology, I expect that to change.

          Good point about dogs. They do tend to become more comfortable around machines over time, once they realize they’re not a threat. But all the barking at the initial encounter sure seems like they’re reacting to it as though it were something they might be able to chase away, that is, an agent they might be able to scare.

          Yeah, the numerical simulation is the old disagreement. For me, a key fact is that any simulation has to run on some kind of physical substrate, and I think we then need to assess the substrate + simulation together as an overall system. Physically, it’s still a different system from the original, but now the question is how much that difference matters. It may well matter a lot in terms of performance and efficiency.

          1. “I don’t know that I would consider viruses to be a small subset of life.”

            Viruses in terms of their taxonomic classification. Compare with bacteria, which also have many forms, or other single-cell life, or microscopic multi-cellular life. Viruses, as a class, are a small subset of the total taxonomy of life. More to the point, out of that large taxonomy, viruses and prions are the only boundary where people really ask the question, “Is this life?” We’re pretty clear on the matter otherwise.

            “I guess it depends on what you think life is. I think it’s just an evolved system.”

            When we talk about “life” I think we mean much more than an evolved system. It has myriad specific necessary properties: organic, entropy-converting, self-replicating (with genetic mixing and possibility of mutation), self-growing, often self-learning, self-fueling. A mountain might be said to be an evolved system, but it’s not life.

            “But as we become more accomplished with nanotechnology, I expect that to change.”

            We’ll get better, for sure. There are some fundamental limits regarding nano-machines, such as leverage and energy. Surface tension and friction become huge factors at that scale. A lot of what nature does is at the chemical level. It may be the only way to do what nature does is chemical.

            “But all the barking at the initial encounter sure seems like they’re reacting to it…”

            That’s about as much as we can really say for sure: they’re reacting to it. They have a limited toolkit for reacting to the unknown, and barking is a primary one. Dogs will bark at anything they don’t understand.

            “I think we then need to assess the substrate + simulation together as an overall system.”

            As you go on to say, it’s a different system. Which is putting it mildly, since it’s a completely different system that works according to completely different principles. At best, such systems have some partial isomorphy in their results, but there isn’t any in their function.

            As you say, it’s a question of how much that difference matters. Is consciousness in the results or the function?

            (For me, the fact that the substrate can be any computational system is also problematic, in suggesting the consciousness lies more in the software than in the substrate. It means billions of monks with abacuses, and billions of acolytes carrying messages between the monks, could compute a conscious mind. Albeit one whose thought process was glacial.)

            “It may well matter a lot in terms of performance and efficiency.”

            It depends on how detailed the model has to be. If, for example, neurons and synapses need to be modeled down to the chemical level, it may be effectively impossible computationally. Or just hugely expensive, as supercomputer time is now. It’s again the advantage nature has in doing Her computation at the atomic level.

          2. The taxonomy strikes me as about what we know. As I understand it, we’ve studied a few thousand viral species, but there are estimated to be millions (at least). In terms of boundaries, there are also disputes about when life begins (although that’s really more about personhood) and when it ends, questions which strike me as ultimately more philosophical than scientific.

            When I said life was an evolved system, I didn’t mean that to be all-inclusive of everything it does. I was focusing on the central thing that separates it from possible future technological systems. (Although I guess numerous design prototypes could be considered a type of evolution; if so, it’s guided evolution.)

            On nanotechnology, there shouldn’t be any fundamental limits for technology that life isn’t also subject to. The laws of physics apply to both. Proteins are essentially nanomachines, so technology should eventually be able to do at least anything they can do. And there’s no particular reason chemistry can’t be used in machines. We’re basically already doing it with gene editing technologies, although that’s really just modifying existing biological processes at this point.

            I disagree that the simulation necessarily wouldn’t be isomorphic with the original system’s function. That’s really the whole point. It’s functionality at a certain level of organization, hopefully the level relevant to the output we want from the system. And as a functionalist, I definitely think consciousness is in the functionality, and functionality is multi-realizable.

            I think we once had the conversation where I was onboard with the idea that, at least in principle, billions of monks could create a conscious system. Or at least the idea isn’t obviously false to me. Although as you note, it would take a long time to produce a single thought. (Although far less time than Searle’s Chinese Room.)

            On how far down things have to be modeled, it makes a difference whether we’re talking about a native AI or an uploaded mind. The native AI I’m pretty sure wouldn’t have to be modeled down to that level to produce a system that many would see as conscious. The uploaded mind would likely have to be modeled down to the molecular level in some areas (likely synapses) but in others we could probably get by with coarser modeling. A thorough understanding of the mind would let us know where we could and couldn’t cut corners, and where we could outright replace large portions of functionality.

          3. I’m pretty sure the diversity applies to the bacteria as well. But we’re getting lost in the weeds here. My point was that the division between life and not-life to me seems considerably more distinct than does our notion of what is conscious and what is not — largely because consciousness is so poorly defined.

            If you disagree, you disagree. Is it because you think life is as poorly defined as consciousness, or that consciousness is just as well defined as life? (Your virus argument suggests the former.)

            “I guess numerous design prototypes could be considered a type of evolution, but if so, it’s guided evolution.”

            The very definition of “Intelligent Design”! 😀

            “On nanotechnology, there shouldn’t be any fundamental limits for technology that life isn’t also subject to. The laws of physics apply to both.”

            Agreed. I just said there are fundamental limits on how things work at the chemical level and doing that may be our only option for achieving the same effect.

            “I disagree that the simulation necessarily wouldn’t be isomorphic with the original system’s function.”

            I truly don’t understand how. A running numerical simulation functions completely differently than the physical system it models. It’s not a matter of levels of organization — there is no level of numerical organization that is isomorphic with any level of physical organization.

            I acknowledged there was “partial isomorphy in their results” — which requires interpreting the numeric output in a physical manner. But up to that point, surely you don’t believe the systems themselves are isomorphic? What is the isomorphism between crunching numbers and physical processes?

            “I definitely think consciousness is in the functionality, and functionality is multi-realizable.”

            Not always. Only certain physical materials in a specific configuration can produce laser light. If consciousness is like laser light — a physical process that arises from a specific configuration of physical materials — then it may not be found in a numeric simulation.

            (I’ve got a post coming up about exactly this, so maybe we can get down in the weeds there.)

            “On how far down things have to be modeled, it makes a difference whether we’re talking about a native AI or an uploaded mind.”

            Do you base that on the premise that low-level modeling isn’t actually necessary for consciousness? Isn’t the more reasonable assumption that nature — which casually creates at the atomic level — would implement something as complex as a conscious mind using low-level effects? Plants depend on photosynthesis; surely brains are more complicated.

            (Arguments there is no evidence for such are empty, firstly because lack of evidence isn’t evidence of lack, and secondly because we have no idea what causes subjective experience, so the field is certainly open for low-level effects we haven’t recognized.)

            “A thorough understanding of the mind would let us know where we could and couldn’t cut corners,”

            Absolutely!

          4. I think these days consciousness definitely has the bigger definitional issues. Mostly because it has a lot more active philosophy and theorizing happening with it. But historically life had a lot of that too, with a lot of talk of animal spirits and vitalism. Over time, as the mechanisms increasingly became better understood (along with basic physics like electricity, chemistry, etc), that talk diminished. Or maybe it’s more accurate to say it retreated to the mind, currently the last refuge of those hoping there’s still something magical about us.

            If a simulation functions “completely differently” from what it’s simulating, then it wouldn’t have the ability to accurately simulate it, not even to some level of approximation. To the extent that it does accurately do so, there has to be some functional isomorphism. That’s really the whole point of a simulation, to isolate the relevant dynamics and reproduce them.
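
            To illustrate with a deliberately mundane example of my own (an RC circuit, nothing to do with brains, all values arbitrary): the simulation’s single state variable maps one-to-one onto the circuit’s voltage at the level we care about, even though the substrate, floating point operations rather than electrons, is completely different. That level-specific mapping is the kind of functional isomorphism I mean.

            ```python
            # Sketch with arbitrary illustrative values: a 1 kOhm resistor and
            # 1 uF capacitor, charged to 5 V and left to discharge.
            import math

            R, C, V0 = 1_000.0, 1e-6, 5.0
            dt, t_end = 1e-5, 5e-3          # 10 us steps, 5 ms total

            v, t = V0, 0.0
            while t < t_end:
                v += dt * (-v / (R * C))    # Euler step of the circuit law dV/dt = -V/RC
                t += dt

            # The simulated variable tracks the physical voltage: ~0.033 V either way.
            print(f"simulated: {v:.4f} V   physical law: {V0 * math.exp(-t_end / (R * C)):.4f} V")
            ```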

            On the laser light analogy, I think I’ve broached this before, but it matters what the downstream effects of the laser are. Is it a communication laser? Or is it cutting something? Stopping at the laser itself is failing to ask Dennett’s hard question: And then what happens? When we do, we may find that the laser is just one of many possible solutions for a functional outcome, which I think is at the heart of multi-realizability.

            My point about AI is that I think most of the functional dynamics happen at the neural circuitry and system dynamics level. Saying it’s “more reasonable” for consciousness to be based on atomic level dynamics is a supposition I don’t see any evidence for. For example, while fully understanding the human heart may involve studying things down to the molecular level, the function of a blood pump can be reproduced without needing to recreate all of those lower level dynamics.

            If arguments against notions based on lack of evidence are empty, then we’re free to engage in whatever loose speculation we want, to bring in whatever type of story makes us feel good, as long as it remains possible within the current gaps in knowledge, even if it goes against the trends in the evidence. I’m a skeptic. I think lack of evidence is epistemically important. And history hasn’t been kind to the overwhelming majority of loose speculation.

          5. “But historically life had a lot of that too,”

            Your point being that presumably consciousness will follow that same arc. It’s a likely assumption, but it’s just that: an assumption.

            I know you loathe the idea that humans could be “magical”, but our subjective experience is certainly “mysterious” (at least for now), and it may turn out there is something scientifically “ineffable” about consciousness.

            Per Gödel, there is something ineffable about mathematics. Per Turing, there is something ineffable about computing. Do you think human consciousness is more complex, or less complex, than mathematics or computing? Especially given your commitment to strong computationalism, you should recognize the potential for the ineffable in brain computation.

            “If a simulation functions ‘completely differently’ from what it’s simulating, then it wouldn’t have the ability to accurately simulate it, not even to some level of approximation.”

            You don’t seem to understand what I’m saying. A system has a result — which you’re calling its function. That system also works in a certain way (i.e. functions) to accomplish that result.

            Obviously System A, and a different System B, can accomplish the same result (with caveats). As you would put it, they have the same function.

            But how those systems accomplish that result — how those systems function — is completely different. I don’t see how you can deny that.

            An isomorphism involves two things having the same shape. A human brain and software+hardware do not have anything at all like the same shape. This again, is factually undeniable.

            The sameness you keep pointing to involves mapping the numbers that are the result of the simulation to something physical. A weather sim maps its numbers to actual physical weather. A laser sim maps its numbers to physical lasers. A mind sim maps its numbers to physical output. If that sim is accurate (and works), the assumption is the entire system would be conscious.

            But the internal function and architecture, the “shape” of a brain versus software+hardware is, as I’ve said, completely different.

            “On the laser light analogy, I think I’ve broached this before, but it matters what the downstream effects of the laser are.”

            You have, and I don’t understand why this time, either. Laser light is not intended, itself, to be an analog of mind. All the example points out is that certain physical phenomena only arise under specific physical conditions.

            Those phenomena can be numerically simulated, usually to arbitrary precision, but the simulation cannot (itself) produce the same physical results. It can only describe them. The same applies to a weather simulation. In fact it applies to any simulation of a physical system.

            The only time it doesn’t apply — the only time a simulation accomplishes the exact same thing as what it simulates — is when simulating an abstract system. A simulation of a calculator produces the same thing (output numbers) as the calculator.

            “…a supposition I don’t see any evidence for. […] If arguments against notions based on lack of evidence are empty, then we’re free to engage in whatever loose speculation we want,”

            That’s seriously twisting what I said. Dude, come on. It’s a well-established scientific principle (and probably an argument you’ve made in the past) that a lack of evidence isn’t evidence of lack.

            To me, the current brain+mind mysteries are a bit like quantum physics. We know quantum physics has to be incomplete. Given our lack of understanding in both fields, all any of us are doing is making assumptions.

          6. My biggest issue with conclusions of magic, particularly with people rushing to that conclusion, is it’s giving up, surrendering to the notion that we can never understand something. Given the history of people reaching that conclusion, and later proven wrong, I think it’s a losing strategy. The problem is that there are a lot of people who resent progress in areas like this. So they take any glimmer of mystery, difficulty, or limitation as an excuse to declare the scientists wrong and the whole endeavor hopelessly misguided, doomed to failure. Much better to just come around to their view of magic. I’ll admit I find that attitude maddening.

            I think the way a lot of people talk about Turing’s halting problem and Gödel’s incompleteness theorems is in that genre. I did a post a while back on Gödel’s theorem, which ended with this quote from Turing:

            …I would say that fair play must be given to the machine. Instead of it giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques… In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

            https://selfawarepatterns.com/2015/12/28/godels-incompleteness-theorems-dont-rule-out-artificial-intelligence/

            My point doesn’t require that the laser be an analog of the mind, merely that as a physical phenomenon, its effects and overall role in a broader framework matter. It’s kind of like if we built a computer out of mechanical switches. The switches are a certain physicality. If we just stop there, nothing can replace it. We couldn’t just plug a chip with transistors into that system without some sort of interfacing bridge. But it’s the switch’s overall role that matters, and whether that role can be implemented with differing physics.

            I think we magnify the mysteries of the mind because it’s us, and who or what we currently think is like us. But unlike quantum physics, a lot of progress is currently being made.

          7. “My biggest issue with conclusions of magic,…”

            I don’t disagree with most of that paragraph, but I don’t see how it applies here. I’m not claiming “magic” or rushing to any conclusions. The idea that I “resent progress” is absurd (and slightly offensive). I’m not declaring “scientists wrong” or “doomed to failure.” None of these objections apply here.

            It is scientific fact that certain systems have in-principle ineffable properties. Mathematics (which is just logic and therefore very simple in principle) has ineffable properties. (Computation is just mathematics, and Turing’s proof and Gödel’s use the same argument involving Cantor’s diagonal.) There is also Heisenberg’s Uncertainty, which makes the physical world slightly ineffable at the edges.
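
            (To make the shared diagonal concrete, here’s a sketch of Turing’s construction in Python. The `halts` stub is the hypothetical total decider the proof rules out, so this is a proof sketch rather than runnable machinery; Gödel’s argument diagonalizes similarly over provability instead of halting.)

            ```python
            def halts(program_source: str, program_input: str) -> bool:
                """Hypothetical decider: True iff the program halts on the input.
                The construction below shows no real implementation can exist."""
                raise NotImplementedError

            def paradox(source: str) -> None:
                # Ask the supposed decider about this very program, fed to itself...
                if halts(source, source):
                    while True:   # ...told "it halts"? Then loop forever.
                        pass
                # ...told "it loops"? Then halt immediately. Either answer is wrong.
            ```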

            I stress the in-principle nature of these ineffable properties. Our understanding of reality says very strongly these are not mysteries we can ever hope to solve.

            The argument I am making doesn’t involve any “magic” or special pleading about “hard problems.” It merely points to systems with these in-principle limits and asks if the incredibly complex system we call the human brain might also have some ineffable properties.

            Pointing out there is no understanding of the brain that provides an immediate counter-argument — that is, there are no facts contrary — is just an argument to keep an open mind. It’s just pointing out there are no counter-facts. It’s not an appeal to putative “magic”.

            “My point doesn’t require that the laser be an analog of the mind, merely that as a physical phenomenon, its effects and overall role in a broader framework matter.”

            They matter in that particular framework, but the only point here is that there are physical phenomena that only arise from specific physical configurations. (Surely you can’t deny that?)

            Laser light is just a concrete instance of a general proposition: Simulated X isn’t X. The “broader framework” is irrelevant to that premise.

            No matter how many numbers we crunch, we can’t emit photons. A simulation of an overloaded resistor doesn’t heat anything up, let alone destroy the resistor. A simulated explosion doesn’t blow anything up. Etc. The argument is general; the laser light itself has nothing to do with it.

            “I think we magnify the mysteries of the mind because it’s us, and who or what we currently think is like us.”

            I’m not magnifying anything, but you have to admit: currently there are mysteries.

          8. My comments on magic after “a lot of people” were more general, aimed at people like Philip Goff, Bernardo Kastrup, or even new age spiritualists, religious apologists, and their ilk. (Sorry, could have made that more clear.) Science-minded people may throw up their hands in frustration and wonder if knowledge gaps can be closed, but people like that seem to live in those gaps.

            If by “ineffable” you mean unknowable, I think it’s more productive to think of the limitations you mention as blind spots. Unknowable things can’t be compensated for. Blind spots can. For example, Gödel puts limits on what a system can know about itself, that is, what it can directly know about itself. If outside systems are brought into the loop, those blind spots can be compensated for. (For example, a mirror enabling you to look at your own face.)

            Does the human mind have blind spots? I think the answer there is obviously yes, far larger ones than anything Gödel might imply. Are there things about it unknowable from all vantage points or techniques? I don’t currently see a reason to suppose that. But I’m what Chalmers calls a “Type A materialist.”

            Of course, if we consider the universe as a whole, where there is no “outside the system”, then the blind spots of what the universe can know about itself may become truly unknowable things, which may be an issue for fundamental physics or cosmology. But for any composite system within the universe, such as the human brain, I think we can bring in enough outside capabilities to overcome any blind spot.

            On mysteries, I think the actual mysteries are the ones along the lines of what Chalmers labels “easy problems”. Not that they’re actually easy, but they are solvable.

          9. I think the takeaway from Gödel is what it says about math and how it ruined Hilbert’s program forever. What it says about math isn’t a “blind spot” that can be compensated for (except in very specific conditions).

            Turing’s result, which is a subset of Gödel’s, can’t be compensated for, nor can the Uncertainty associated with Heisenberg.

            “But I’m what Chalmers calls a ‘Type A materialist.'”

            Even Type A materialists are subject to the fundamental limits I’m talking about here.

            More to the point in terms of this discussion, there remain mysteries, and while it’s reasonable to assume we’ll plumb their depths, it is currently an assumption that might yet prove false.

  2. I think I will hold significant discussion until you have read/commented on two other recent works:

    1. Wanja Wiese’s “The science of consciousness does not need another theory, it needs a minimal unifying model” (https://academic.oup.com/nc/article/2020/1/niaa013/5870169)

    2. Kanai’s “Information Closure Theory of Consciousness” (https://www.frontiersin.org/article/10.3389/fpsyg.2020.01504/full)

    The first discusses Kanai et al.’s work from 2019 describing the role of information in consciousness (Kanai R , Chang A , Yu Y et al. Information generation as a functional basis of consciousness. Neurosci Conscious 2019;2019:niz016, doi:10.1093/nc/niz016.). Wiese’s point is that Kanai’s theory is not so much a theory of consciousness as a model of the minimum requirement of a conscious process, which you might also refer to as a psychule (ahem).

    The second is the introduction of Kanai’s full blown theory, which just came out like … today? It seems to meet the criteria described above, but I agree with Wiese that it describes a MUM (minimal unifying model) and so is compatible, more or less, with most of the standard theories.

    *
    [psychule!]

    1. I’ll try to get to them sometime soon. Unfortunately, my reading list is getting away from me.

      I do need to read up on exactly what is meant by “minimal unifying model”. As noted in the post, I think many of these theories successfully model aspects of the overall system we label “consciousness”, but none of them captures the full concept. (To the extent consciousness can be said to be an overall consistent and coherent concept.) From my first pass at the information generation theory, it fits well within that genre.

  3. “Many don’t want to acknowledge it, but the scientific study of consciousness is ultimately the study of report.” That’s a fascinating point, especially since it’s denied by illusionists such as Frankish. They reject the idea that there’s an essentially private inner life. But if that’s correct, why is the scientific study of consciousness ultimately the study of report?

    1. I don’t know that Frankish would actually disagree with the report point. It’s actually fully compatible with illusionism.

      Why is the study of consciousness ultimately the study of report? Well, consider how we know whether someone is conscious. We know because a person can report on their conscious experience. Many will say that we can use behavior or brain activity, but that’s only true to the extent such behavior or brain activity has previously been correlated with someone reporting their conscious experience (and consistently not correlated with unconscious behavior). No matter who or what we’re assessing consciousness in, there must be a chain of evidence leading back to report. If there isn’t, then a subjective guess has been introduced which undermines our conclusions.

  4. You run a kitchen, and you’ve experimented with two cleansers, Spiff’s Spiffy Cleanser and Mark’s Remarkable Cleanser. Pots that are cleaned with either one look clean, but when you cook a stew in pots cleaned with Mark’s, it tastes funny. Spiff’s pots are fine. Along comes a chemist with a ToC: Theory of Cleanliness. She says the difference is rancid fats which are left behind on Mark’s pots. She puts a Mark’s pot under the microscope and shows you a vast quantity of rancid fat molecules.

    But when you put Spiff’s pot under the microscope, you see one rancid fat molecule in this crevice, and another over on this wall! Oh no, is the chemist’s ToC refuted?

    Of course not. You know that some pots are very dirty, and that other pots are *comparatively* lacking in dirt. But you don’t go into the investigation absolutely sure that Spiffy pots are totally dirt-free.

    The authors do have a fair “accusation” about small networks vis-a-vis GWT, HOTT, and PPT. But it’s an accusation that brings no guilt. It’s also a misnomer to label widely-distributed psychism as panpsychism.

    Referring again to what we can take as a starting point before serious scientific investigation begins, there is no “problem” of a combination of small networks into a large one. The large one is the only thing we set out to explain. However the small networks combine into it is utterly irrelevant to the success of the theory, as long as the explanation works at the large-network scale.

    1. I’m not sure if I’m catching the message you’re trying to convey about Mark’s vs Spiffy’s cleanser, except perhaps an admonition about the unfolding argument? That maybe low level mechanics make an undetectable difference? If so, perhaps, but the analogy depends on the microscope eventually detecting the actual difference, identifying an actual explanation, which hasn’t happened yet for structural theories. (Apologies if I utterly missed your point.)

      There’s sometimes a tendency to conclude that if a theory can’t delineate when consciousness begins, it must be panpsychic. I agree that’s jumping to a conclusion. It ignores the emergentist perspective, a point I didn’t think to make in the post. Just because we can’t clearly identify when a hill becomes a mountain doesn’t mean there isn’t a difference between most hills and most mountains.

      I have to admit I haven’t spent a lot of time researching or thinking about the combination problem. I’ve never been seriously tempted by panpsychism, so it’s never been an issue for me. I think most panpsychists view consciousness as a sort of field anyway, and field excitations can be cumulative.

      1. The point of Spiffy’s cleanser was to dismiss the Small-Networks “problem”. We don’t know that the things we call unconscious are absolutely lacking in even the slightest degree of consciousness, any more than we know that un-dirty things are absolutely lacking in dirt. In fact, it turns out that clean pots do contain microscopic amounts of dirt.

        As for the combination “problem”, it’s also not really a problem, in the sense of being evidence against IIT, or against panpsychism. In the case of panpsychism, though, the very fact that combination is not a problem – because emergence is clearly possible – undermines a crucial reason offered for panpsychism.

        1. Ah, I did totally miss it. Thanks for clarifying!

          On whether unconscious things have slight degrees of consciousness, as a functionalist, it seems to me a bit like asking whether small portions of my laptop have slight degrees of Windows, or small portions of my iPhone have slight degrees of phoneness. Maybe you can have mini-workspaces, but if so, it’s not going to be workspaces with sensory images, adaptive affects, or motor action plans in them. In other words, it matters what’s being workspaced, integrated, predicted, higher ordered, or whatever, at least to me.

  5. It seems like the science of consciousness is still struggling to understand what it is studying.

    I think that addressing what distinguishes the conscious from the unconscious is particularly interesting. If consciousness is just computation, what makes one computation conscious and another not? If it is just brain circuits firing, what makes one group of circuits conscious and another not?

    Despite what the paper says, I’m not clear that any current theories address this well. GWT might say content is conscious if it gets broadcast, but what sorts of computations or circuit firings constitute broadcasting? The “broadcasting” seems like a higher level description of something that is missing the specification of its lower level parts.

      1. That theory isn’t doing much for me. I’m not seeing any predictions about how the particular qualities of information organization it argues constitute consciousness relate to actual brains. Almost any theory can divide neural processing into X and Y and contend that X is conscious, but unless it can be related back to some neurophysical, hopefully observable and measurable, aspect of the brain and nervous system, it seems like hand waving to me.

    1. Most of the theories do address the conscious / unconscious divide. GWT’s is that for content to be conscious, it must be broadcast throughout the system; in other words, it must have causal effects throughout the system. If the effects are localized in only one or a few specialty systems, then it remains unconscious. If you read Baars or Dehaene at length, they go into detail on the way it’s expected to work, although there are many variants of GWT.

      HOTT’s is that a higher order representation has to be formed. In that theory, first order representations by themselves aren’t conscious.

      IIT addresses it with phi. I have a lot of issues with that theory, but it does at least take a stab at the conscious / unconscious divide. With RPT, recurrent processing is the divide. (The authors give RPT green, but there is reportedly evidence for unconscious recurrent processing.)

      PPT doesn’t really address it, but honestly I think listing it as a theory of consciousness is misleading. It’s really a theory of neural processing, both conscious and unconscious.

      I do think it’s an issue with many theories that simply say consciousness = X, where X is some exotic thing, but then don’t explain why X is conscious. In my mind, these aren’t explanations of consciousness, but merely theories about where consciousness is. They don’t get at the fundamental question. Orch-OR gets singled out in the paper, but I can think of many similar theories that commit the same fallacy. Sorry, but from what I’ve seen, EM theories fall in this category.

      1. But none of those are really explanations.

        For example, the broadcasting in GWT, what exactly is that in terms of the brain and neural circuits? Without some level of detail down to the brain level, it is just hand waving. It is missing the neurological correlates for the global access and the broadcasting. Various correlates have been proposed, but it is incomplete without them.

        1. Dehaene and Changeux go into a lot of detail in their papers. And in his book, Dehaene describes a simulation they developed and have been tuning over the years, although I’m sure it’s got a lot of simplifying assumptions (no one has a neuron by neuron accounting yet).

          You just reminded me of the most recent review paper on GNW I’ve been meaning to read: https://www.cell.com/neuron/pdf/S0896-6273(20)30052-0.pdf

          Together with all the empirical data, it’s definitely beyond hand waving. That’s not to say there isn’t still room for the theory to be wrong, particularly on many of the details. I think something like GWT is probably the reality, but some aspects, like GNW’s global ignition, might be more about attention than consciousness itself. I still wonder if Dennett’s variant of GWT, the multiple drafts model, doesn’t have some important insights.

          1. That particular link looks to be paywalled.

            I found this with a comment about Dehaene’s view.

            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2771980/

            “Dehaene correlated the original global workspace with neurons, so much so that their model replaces this global workspace phrase with that of global workplace neurons. Again, there is no specific neuroanatomical location or designation of these Workplace neurons; it is a fleeting entity like that of the global workspace memory capacity in Baars’ concept. In spite of its nonspecific location, Dehaene proceeds with the fact that the global workplace neurons show a long-distance connectivity. This long distance connectivity is, again, not a clear phenomenon, but his theory as well as his works subsequently highlight the importance of synchronicity in this connectivity phenomenon.”

            I think in the end I’m objecting to the lack of specificity and the fleeting nature of key concepts when we try to map them to real brains.

            I’m leaning toward a much more simplified approach to the whole problem, which could include EM fields but may not actually need them. It is simply that oscillating neurons generate consciousness. The firings themselves that compose the oscillations have conscious and unconscious aspects, but the coordinated activity itself acts as a system to select the conscious components. Rules for how are TBD. It may not be that all oscillating neurons generate consciousness; for example, very slow oscillations may not, but faster patterns that reflect some degree of networking and coordination do. This would include even random neurons put together in a Petri dish if they become networked. BTW, oscillations have even been found in insect brains, associated with olfactory learning. The oscillations themselves do not indicate an algorithmic or computational model, although such models may be useful in approximating how it works.
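
            As a toy illustration of the kind of coordination I mean (my own sketch of the standard Kuramoto model, purely illustrative, not a consciousness claim): phase oscillators with scattered natural frequencies fall into synchrony once the coupling between them is strong enough.

            ```python
            # Kuramoto toy: 12 "neurons" as coupled phase oscillators. All the
            # values (coupling K, frequencies, step size) are arbitrary.
            import numpy as np

            rng = np.random.default_rng(1)
            N, K, dt, steps = 12, 2.0, 0.01, 5000
            omega = rng.normal(1.0, 0.3, N)        # natural firing frequencies
            theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

            for _ in range(steps):
                # each oscillator is pulled toward the phases of all the others
                pull = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
                theta += dt * (omega + pull)

            r = abs(np.exp(1j * theta).mean())     # r = 1: full synchrony; r ~ 0: incoherence
            print(f"synchrony r = {r:.2f}")        # typically near 1 at this coupling
            ```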

          2. Just sent you a copy of the review paper. It goes into a lot of detail. Obviously there isn’t a full accounting yet, but it’s far more developed than what is implied in that quote.

            It seems like oscillating neurons would include a lot of unconscious processing. I’ve seen some speculation that gamma band oscillations may be significant indicators, but even those seem prone to happening for unconscious processing. Oscillations with synchronized frequencies across disparate regions probably do indicate binding, and wide scale binding seems like it might be a good indicator, and it’s what we would expect with GWT type models.

            My overall feel is that simple explanations will always be insufficient, although there may turn out to be relatively simple indicators of consciousness in a brain.

          3. As I wrote:

            “The firings themselves that compose the oscillations have conscious and unconscious aspects but the coordinated activity itself acts as a system to select the conscious components.”

          4. Yeah, me too. 🙂

            However, I did notice that, regarding the thing I’m complaining about in the other theories, I wrote: “Rules for how TBD.”

            But certainly in EM field theories we might actually expect most of the contribution to conscious experience to be hidden from the experience because the final wave-forms would result from the addition and subtraction of component waves. Most of the component contributions would get washed out and the final result would be a relatively simplified version of a complicated set of interactions that remains unconscious.

          5. Definitely there’s a lot of TBD in theories of consciousness. My experience is that the narratives of theories are often more compelling when read at length. Often they have good explanations for their TBD-ness, but it’s there nonetheless.

  6. I also like the scan of the person with strongly reduced brain volume. Definitely a challenge for any theory except maybe the one that places consciousness in the brain stem.

    Despite the limitations of the gamma-band oscillations theory, it might be that any group of networked neurons that oscillate together (which may be the evidence of the networking) has coordinated activity that produces some degree of consciousness. That might even include random neurons thrown together in a Petri dish. That would be a pretty expansive view of it.

    1. Yeah, I’m suspicious of that scan. It gets cited a lot, but it’s actually a brief letter from a physician, not a peer reviewed study. No one else has reviewed how the results were obtained, and I know of no corroborating evidence from other studies.

  7. A last few comments before I let this go.

    In their other systems argument (III.4) they say:

    “This question is particularly difficult given the strong multiple realizability of many phenomena. Multiple realization has been a fruitful topic in philosophy (Fodor, 1974; Kripke, 1972; Putnam, 1967, 1988). Classically, multiple realizability has been used in metaphysical debates such as identity-theories vs. functionalism, or in the context of theory reduction. Here, we use it in a different way: ToCs need to offer a gauge to determine whether or not a given system is conscious. It is well known that any function can be implemented by different physical systems (Bechtel & Mundale, 1999; Fodor, 1974). A word processing system such as Microsoft Word can be run on many operating systems such as Unix, MacOS or Windows.”

    I’m not really sure this statement is even correct but, to the extent it might be, it certainly is a judgment based on unstated assumptions. For one thing, there are numerous things which cannot be realized in other phenomena. For example, the properties of water cannot be realized in hydrogen sulfide (H2S) or nitric oxide (NO) or any combination of elements other than hydrogen and oxygen. Magnetism and electricity, mass and energy, may be interchangeable, but they do not have the same properties. Secondly, unstated is the assumption that anything that is functionally equivalent is the same. An electric car and a gasoline car might outwardly do some of the same functions, but they are not the same in all aspects, which becomes apparent if you drive the electric car to a filling station and try to recharge with gasoline. Functional equivalence is always achieved by selecting some features of the phenomena and ignoring others; otherwise, the different phenomena would be the same. So there would need to be a high degree of specification in this criterion about what functions constitute consciousness for it to be useful. That specification itself, I think, could be quite a subject of contention, so I would question how useful it is.

    Finally, there’s Class IVa: “Identify processes with biological phenomena”.

    Ultimately I think this should be a criterion rather than a class of theory, or maybe a criterion in addition to a class of theory. Certainly a theory that identifies consciousness with biological phenomena might seem incomplete, but that is only because of the assumption in criterion 4 that consciousness can be realized outside of biological phenomena. Any theory which argues that consciousness can be realized outside of biological phenomena must still be able to show how it is realized in biological phenomena. This is where almost all of the theories fall down. Some theories are so abstract it is hard to even see how they can be related to biological phenomena. Others, such as GWT, could be related to biological phenomena, but there is nothing in the theory itself that immediately compels a relation with particular biological phenomena, as can be seen from the diversity of opinion among various factions about where the global workspace resides and how the broadcasting works.

    1. I think you’re right that multi-realizability always involves a subset of what the original system does. In terms of consciousness, the question is whether the subset that produces it can be isolated and produced by another system. More specifically, it’s whether the subset that produces behavior (output) that triggers our intuition that there is a fellow consciousness there can be isolated and reproduced.

      Of course, many people will insist that only a living thing will trigger that intuition for them. Others are much more liberal and willing to see it in more places. So it comes down to whether the system can produce behavior that the majority of us will agree triggers our intuition of consciousness.

      Consciousness is in the eye of the beholder.

        1. If consciousness is in the eye of the beholder, then this criterion really does make no sense. I also can’t see how you can use majority rule, since the majority can frequently be wrong. The majority can also be dependent upon who you ask, how you ask, and when you ask. It can change over time. It can be easily influenced by prior opinion.

        However, if you try to propose some more objective criteria, you will likely be forced to find that thermostats or self-driving cars are conscious.

        So I would say throw out this criterion and add a criterion that requires all theories to demonstrate how they are realized in biological organisms, with predictions that can potentially be proved or disproved through observations and experiments with living organisms.

        1. Well, it could be seen as a criterion for meeting the consensus intuition of consciousness. I think the first set of criteria is the strongest, and equivalent to the one you laid out, particularly since the only systems the consensus intuition currently agrees on are biological ones, with the widest consensus only on mentally complete humans.

          But the other criteria are basically saying, if a theory asserts that only a certain structure or type of system (such as biological) is conscious, then it should have to justify that statement. I think that’s a valid requirement. Otherwise people are just sneaking in ideology.

          Now, a theory can take the more epistemically cautious approach and say it’s only a theory of consciousness in humans, mammals, or vertebrates, without taking any stance on whether consciousness could exist elsewhere, and so evade that criticism, although it’s then admitting the theory isn’t complete yet. Since no one theory is really going to be a complete accounting anyway, that’s a valid move.

          1. I think I mostly agree. I was thinking, however, that most people would likely find anything with a head and two eyes that seemed to track them to be conscious, even if whatever it was had serious deficiencies in other respects; whereas a super-intelligent and articulate machine without a head and eyes would likely be thought of as just a machine.

            I think if we ever get a theory that gets wide agreement, it will be fairly obvious whether it only works for biological organisms or has applicability beyond them. What irritates me primarily are the information-based theories that never address exactly how whatever they are proposing actually works with real brains.

          2. Take the two papers James linked to. I don’t know whether you’ve had a chance to look at them, but I do not see any clear predictions about something that could be measured or observed in a brain, by experiment or otherwise, that would support any of the arguments. The five implications in Kanai’s theory don’t even mention brains or neurons. The arguments are abstract and mathematical. That’s fine as far as it goes, but unless the theory comes back to earth with some references to brains, neurons, and something that can be observed or tested, I can’t take it seriously as a theory of consciousness. It may be an interesting theory for information science. It may be useful for something, but as far as explaining consciousness goes, I can’t see it as anything more than reality-free conjecturing.

          3. Thanks. I haven’t had a chance to really parse those papers yet, so can’t comment on them. I was wondering what might be missing in that review paper I sent you, or in The Evolution of the Sensitive Soul.

            But I agree that tying it back to neuroscience is important. It’s one of the reasons I dislike IIT. It ostensibly ties back to neuroscience, but not in a way where I can really see the connecting dots. And despite its name, it isn’t an information theory.

            The computational ones, like GWT, HOTT, or AST, all seem fairly grounded in neuroscience. And RPT seems very grounded in it, although it probably counts more as a structural theory than an informational one.

          4. I can’t speak for HOTT or AST, but there seem to be multiple opinions about how GWT actually works and how it relates to the brain. If the theory can be used to predict multiple conflicting things, then it is still lacking a compelling argument. If the global workspace has no specific location and the broadcasting isn’t clear, then the theory isn’t that predictive.

          5. I still need to look at the paper some more, but I’ve pretty much concluded that broadcasting per se has almost nothing to do with consciousness – that both conscious and unconscious content is “broadcast”, if by the term we mean transmitted over long distances in the brain.

          6. To me, talk about where the workspace is, is a misunderstanding of the theory. The entire thalamocortical system is the workspace. There is a question about which regions might serve as central communication hubs, and establishing that empirically is important for the theory, but a variety of answers are possible without falsifying GWT. Baars originally thought it might be in the reticular formation / midbrain / thalamus core, where we all begin when we first start reading about this stuff. But recent work seems to be driving a consensus around the frontoparietal network and some surrounding regions.

            My own sense, just from reading neuroscience, is that consciousness is fundamentally about ongoing communication between the sensory, reflexive, and planning regions of the brain, with the contents of consciousness being what is currently dominating that conversation. In that sense, I think GWT, or something like it, is on the right track. But it might be closer to Dennett’s multiple drafts variant.

            Strictly speaking, I don’t expect those theories to be right, but I expect them to be less wrong than many of the alternatives.

          7. If the entire thalamocortical system is the workspace, then basically we are saying the workspace is the brain. There isn’t anything explanatory about saying consciousness resides in the brain. I think almost everybody can agree to that.

            I think it is oscillating neurons that generate consciousness – not simple communication, but a coordinated process that is largely self-generated by the brain itself.

          8. James, I’m pretty sure you know there’s a lot more to GWT than that.

            Oscillation seems like an indicator of processing, and synchronized frequencies across several regions do seem like an indicator of consciousness. But the idea that oscillation, in and of itself, generates consciousness would, to me, generate more questions than answers. Of course, if the evidence pointed there, we’d have to deal with it. But I don’t see that it does, at least not currently.
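
            Just to make “synchronized frequencies across several regions” concrete, here’s a minimal sketch, on synthetic signals, of one standard way synchrony between two regions gets quantified – the phase-locking value. The signals and numbers are made up for illustration, not a claim about any real recording.

            ```python
            # Minimal sketch (synthetic signals): the phase-locking value (PLV),
            # one common way synchrony between two regions is quantified.
            import numpy as np
            from scipy.signal import hilbert

            rng = np.random.default_rng(0)
            t = np.linspace(0, 2, 1000)        # 2 s sampled at 500 Hz
            base = np.sin(2 * np.pi * 40 * t)  # shared 40 Hz (gamma-band) rhythm

            region_a = base + 0.3 * rng.standard_normal(t.size)
            region_b = base + 0.3 * rng.standard_normal(t.size)  # shares the rhythm
            region_c = rng.standard_normal(t.size)               # unrelated noise

            def plv(x, y):
                """|mean of exp(i*dphi)|: ~1 = phase-locked, ~0 = unrelated."""
                dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
                return np.abs(np.mean(np.exp(1j * dphi)))

            print(plv(region_a, region_b))  # high: consistent phase relation
            print(plv(region_a, region_c))  # low: no consistent phase relation
            ```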

          9. It was your statement that the entire thalamocortical system is the workspace.

            If we are just talking about widespread connectivity in the brain, then I can’t see that the concepts of either the global workspace or broadcasting bring anything to the table that isn’t explained by the connectivity itself. And if unconscious content is also part of the widespread connectivity, that would seem to invalidate the theory. Is there any evidence offered anywhere that long-distance connectivity only involves conscious content?

          10. The best succinct description of this I’ve seen comes from Lamme’s recurrent processing theory, even though he’s arguing against GWT.

            Stage 1: Superficial feedforward processing: visual signals are processed locally within the visual system.
            Stage 2: Deep feedforward processing: visual signals have travelled further forward in the processing hierarchy where they can influence action.
            Stage 3: Superficial recurrent processing: information has traveled back into earlier visual areas, leading to local, recurrent processing.
            Stage 4: Widespread recurrent processing: information activates widespread areas (and as such is consistent with global workspace access).

            https://plato.stanford.edu/entries/consciousness-neuroscience/#NeurTheoCons

            Stage 2 fits your question about unconscious content being widely transmitted.

            Stage 4 is the one GWT posits as conscious, with all the other stages being unconscious. (Lamme argues that Stage 3 is conscious.) I see Stage 4 as equivalent to ongoing interactive communication between the brain regions.

            It’s also worth noting that what’s happening in Stage 4 is that a very small subset of the neural circuits is firing, with all the rest being massively inhibited. This means that the specific neurons in the workspace for one conscious episode may not be in the workspace for the next one. It’s a continually shifting thing. Which is why thinking of the workspace as one particular location, or even particular neurons, is not what the theory proposes. But equating it with just the whole brain isn’t right either. It’s a description of the brain-wide dynamics that lead to phenomenal experience.
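
            As a purely illustrative toy sketch (emphatically not a GWT implementation), that shifting-coalition picture can be rendered as a winner-take-all competition: whichever coalition has the strongest input ignites, everything else is suppressed, and the winner changes from episode to episode.

            ```python
            # Toy winner-take-all "ignition" (illustrative only, not GWT itself):
            # one small coalition fires, the rest are suppressed, and the
            # winning subset shifts from one episode to the next.
            import numpy as np

            rng = np.random.default_rng(1)
            n_coalitions = 8

            for episode in range(3):
                drive = rng.random(n_coalitions)  # bottom-up evidence per coalition
                winner = int(np.argmax(drive))    # strongest coalition ignites
                activity = np.zeros(n_coalitions)
                activity[winner] = 1.0            # sustained firing / "broadcast"
                # all other coalitions stay at 0 (massively inhibited)
                print(f"episode {episode}: workspace coalition = {winner}, "
                      f"active fraction = {activity.mean():.2f}")
            ```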

            If you’re aware of evidence for Stage 4 that is unconscious, then it would falsify GWT. If so, I’d be very interested in a link! But even if GWT per se is falsified, I still think the answer is interactive communication between brain regions, albeit perhaps with differing dynamics.

          11. It’s the task of GWT to prove that the elusive GW actually exists.

            Inhibition is critical to the non-linear, complex behavior of oscillating networks.

          12. BTW, this guy (György Buzsáki) is really interesting.

            https://buzsakilab.com/wp/

            I don’t think he would agree with me on quite a bit, but his book Rhythms of the Brain is one of the best I’ve seen for a comprehensive view of how neurons and neural circuits work. I originally ordered it because I was interested in biofeedback, but found a lot of really good stuff in it. He doesn’t pretend to offer a theory of consciousness and only mentions the topic in passing. His publications page has his books (I’ve ordered the more recent one) and a large number of papers with links.
