The unfolding argument: why Integrated Information Theory is not scientific

There’s an interesting new paper in Consciousness and Cognition on why causal structure theories such as IIT (integrated information theory) and RPT (recurrent processing theory) aren’t scientific:

How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has been accumulated, and many theories have been proposed. Certain theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher order to lower order representations, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Information Integration Theory (IIT) and Recurrent Processing Theory (RPT), identify causal structure with consciousness. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.

To be clear, the main assertion of IIT and RPT is that it isn’t the functionality of these neural networks that makes them conscious, but their specific causal structures. According to these theories, something about those specific recurrent feedback structures generates subjective experience.

The main point the authors make is that any output that can be produced by a recurrent feedback network can also be produced by an “unfolded” feedforward network with ϕ (phi), the metric IIT uses to supposedly measure the amount of consciousness present, equal to zero. In addition, any output that can be produced by a feedforward network can also be produced by a “folded” feedback network organized to have arbitrarily high levels of ϕ. They discuss these assertions in detail in the appendices to the paper.
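
To make the unfolding idea concrete, here’s a toy sketch I put together (my own illustration, not from the paper): a tiny recurrent network and its feedforward unrolling over three time steps. The unrolled version has no feedback edges, just a fresh copy of the weights at each layer, yet it produces the identical output:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))           # recurrent (feedback) weights
U = rng.normal(size=(4, 3))           # input weights
x1, x2, x3 = rng.normal(size=(3, 3))  # three time steps of input

def recurrent(x1, x2, x3):
    h = np.zeros(4)
    for x in (x1, x2, x3):            # feedback: h is fed back into itself
        h = np.tanh(W @ h + U @ x)
    return h

def unfolded(x1, x2, x3):
    # Feedforward: one distinct layer per time step, each with its own
    # copy of the weights; activation flows strictly forward.
    h0 = np.zeros(4)
    h1 = np.tanh(W @ h0 + U @ x1)     # layer 1
    h2 = np.tanh(W @ h1 + U @ x2)     # layer 2
    h3 = np.tanh(W @ h2 + U @ x3)     # layer 3
    return h3

assert np.allclose(recurrent(x1, x2, x3), unfolded(x1, x2, x3))
```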

Of course, the proponents of IIT and RPT can always claim that even though the outputs (spike trains, behavior, etc.) are identical, the feedforward networks are not conscious. But the problem is that this leaves consciousness as purely epiphenomenal, a condition that has no causal influence. A lot of people do accept this notion of consciousness, but as the paper authors note, it leaves these theories completely outside of any scientific ability to validate or falsify them. It makes them unscientific.

Conscious-seeming behavior can, in principle, be produced by networks that these theories would say are not conscious. This brings back our old friend (nemesis?), the philosophical zombie (the behavioral variant), along with all the problems with that concept.

Why then do we see recurrent feedback networks in the brain? Efficiency. If you’ve ever done any computer programming, you’ll know that program loops can save a lot of memory and code. Recurrent feedback loops have a similar role. They enable a lot more processing to take place than could otherwise happen with the available substrate. It’s always possible, at least in principle, to “unfold” the network into a larger one and accomplish the same result.

A neural network unfolded
Image credit: François Deloche via Wikipedia
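
To make the programming analogy concrete, here’s a trivial sketch (mine, purely illustrative): a loop and its manually unrolled equivalent compute exactly the same result; unrolling just trades compact, reusable code for duplicated “substrate”:

```python
def powers_loop(x, n):
    result = 1
    for _ in range(n):       # the same multiply step, reused n times
        result *= x
    return result

def powers_unrolled_4(x):
    # The loop "unfolded" for the fixed case n = 4: more code,
    # no loop, same answer.
    result = 1
    result *= x
    result *= x
    result *= x
    result *= x
    return result

assert powers_loop(3, 4) == powers_unrolled_4(3)  # both 81
```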

But just as in computer programming, this comes at a cost in complexity and performance. Which is probably why the neural networks in the cerebellum are predominantly feedforward. For what that part of the brain needs to do, speed and reliability are the priorities. But for much of what happens in the cortex, the additional computational capacity is well worth the trade-off.

But this does set up a conundrum.  It means that consciousness is more likely to be associated with the recurrent feedback regions in the brain.  Not because recurrence is equivalent to consciousness, but because cognition requires a lot of processing in a tight space.  That means ϕ could still end up being a usable measure of whether a particular brain is currently conscious, but not necessarily for telling whether other systems are conscious, a nuanced distinction that I fear the proponents of IIT will ignore.

This paper gets at a nagging issue I’ve long had with IIT.  It takes an aspect of how neural systems are organized and posits that that aspect, integration, in and of itself, somehow produces subjective experience.  How exactly?  Essentially the answer seems to be magic.  It’s more a theory aimed at explaining the ghost in the machine than the functionality.  And as I’ve indicated before, there’s no evidence, at least not yet, for any ghost, whether generated or channeled by the brain.  There’s just the brain and what it does.

Unless of course I’m missing something?

The paper does make clear that functionalist theories of consciousness, such as GWT (global workspace theory), HOTT (higher order thought theory), or PPT (predictive processing theory) are unaffected by the unfolding argument.

h/t Neuroskeptic

51 thoughts on “The unfolding argument: why Integrated Information Theory is not scientific”

  1. I suspect that any evolution of consciousness would involve a great deal of sloppiness/inelegance. Many of these theories are looking for the elegant solution to the problem, whereas evolutionary constructs seem to be slapdash, as anything that works, even a little bit, is supported.


    1. Any halfway decent programmer would have deleted all that unused junk code in our DNA eons ago!

      ((Although, what if that junk code turns out to be The Programmer’s code comments, and we just haven’t learned to read it yet? What if someday it turns out those sections say things like, “Don’t modify this part!” or “I don’t remember why I coded it this way, but it works.” or the famous Unix comment, “You are not expected to understand this.”))


      1. Given how much epigenetic functionality has been found in “junk DNA”, the term doesn’t seem descriptive anymore. Of course, we know at least some of it is junk, such as old viruses that inserted themselves long ago.


      2. Or other well-known techniques for making code unmaintainable, from https://www.doc.ic.ac.uk/~susan/475/unmain.html:

        Lie in the comments. You don’t have to actively lie, just fail to keep comments up to date with the code.

        Pepper the code with comments like /* add 1 to i */; however, never document wooly stuff like the overall purpose of the package or method.

        If, for example, you were writing an airline reservation system, make sure there are at least 25 places in the code that need to be modified if you were to add another airline. Never document where they are. People who come after you have no business modifying your code without thoroughly understanding every line of it.

        In the name of efficiency, use cut/paste/clone/modify. This works much faster than using many small reusable modules.


        1. Heh, you jest, but I was often given “ball of mud” code like that and asked if I could make it work “better” or make changes (or in one case, make it work at all).

          “What does it do?”

          “Well,… something to do with our key accounts, and we have these new procedures it has to know about… can you make it work?”

          “!!!”


          1. Anyone who has been a developer for more than a year or two eventually ends up working on a complete mess of a code base.

            Frequently it begins with code poorly written from the start that then gets maintained for years by bad programmers. Nobody wants to refactor it. There’s probably no budget to do it, and nobody wants to risk breaking it.

            So by the time you get it you are in the predicament that something needs to be done quickly and so the best you can do is tinker with it at the edges and hope nothing breaks.


          2. Just imagine how much one sees in 33 years. 😮

            I made myself fairly unpopular over the years for refusing to do quick fixes and, in some cases, insisting on a complete redesign. Which always took longer than they wanted.

            But when, years later, you find out one of your designs is the centerpiece of the factory guest tour… it was all worth it.


          3. I’ve got the 33 years beat by 6 years.

            I did have the experience of someone telling me in the early 2000s that a system I designed in the 1980s didn’t need to have any Y2K work done because it was designed to handle century dates except… for a small part that some contractors came in and added years after the initial design. I was also pleased and a little surprised the system was still up and running.


          4. I’ve found it almost disconcerting to find out people are still using an application I wrote years ago! (But I have ancient apps I wrote that I still use, so [shrug].) It does give us a flavor for the joy those who build roads, bridges, and buildings get when they see the products of their labor being used.

            We had a Y2K thing at The Company, too. We spent months converting all the “Y”s to “K”s in every bit of documentation we had. Got it done just in time, too, but management still wasn’t happy for some reason… 😮


        2. Oh, and I worked with a guy once. An engineer, not a programmer, but he wrote code to support his own prototype designs.

          Most obfuscated crap I ever saw, and it was deliberate in his case. He saw it as a form of job security. He wasn’t the only one I ever met who thought like that, but he was the worst. Deliberately opaque code. [shudder]


    2. I think the scientists behind GWT, HOTT, and PPT are more sophisticated than you give them credit for. But certainly there is some conceptual simplification in all of these theories. We primates from the savanna have to try to understand things in any way we can. The alternative is to simply be happy with ignorance, waiting for some perfect understanding that will never come.


  2. Of course, the proponents of IIT and RPT can always claim that even though the outputs (spike trains, behavior, etc.) are identical, the feedforward networks are not conscious. But the problem is that this leaves consciousness as purely epiphenomenal, a condition that has no causal influence.

    Wrong. (I’ve made this point before, and I’ll keep making it as it keeps coming up, i.e. forever, evidently.) There is causal influence aplenty. We can look under the hood and see the difference. That’s causal influence: the structure of the network influences what is seen by the observer who looks under the hood.

    What the authors are really saying (unless they’re complete idiots) is that the network structure has no influence on the important things. And how did they decide which things are important? Ah, there’s the rub!


    1. So, what do you mean by “look under the hood”? Do you mean brain scans? Dissections? If so, what about those things would tell you about subjective experience? (At least, tell you about it independently of behavior such as self-reports?)


      1. Yes, brain scans, dissections – anything is fair game, if you want to know what something is and what it does. You wouldn’t give your girlfriend a shiny yellow metal ring made of copper, silver, and bismuth, and tell her it was gold, would you? Sometimes it matters what’s under the surface.


          1. Could two systems exist such that we would be truly confused about the matter (modulo panpsychism or definitional issues)? The question might be entirely moot in any real sense.

            Unless we’re talking about a comatose system, one of those systems is lying when it reports consciousness. That seems a pretty weird and artificial scenario outside philosophical conundrums. Maybe it’s something we don’t have to worry about?

            Or, as you’ve asserted, to act conscious is to be conscious.


          2. I think it could eventually be an issue for AI systems, where multiple implementations and strategies might produce similar conscious-seeming outputs.

            On acting conscious, if you can identify another primary way to tell, I’d be very interested. (“Primary” in the sense that it’s not derived from properties of systems we’ve already decided are conscious, such as looking for structures like a cerebral cortex.)


          3. It’s certainly a really good question.

            One heuristic for me is: Consciousness attests to itself. (I think you mentioned someone saying it had to do with the reaction when kicked. Same thing.) If we’re talking human-level, I’d expect to see music, humor, imagination, art, curiosity, etc. Traits I consider hallmarks of human-level consciousness.

            Think of it as a Rich Turing Test over time. I suspect that, with something as complex as consciousness, you’d have to get to know it. You’d have to see what it thought over time.

            For me, I think ultimately it boils down to understanding what consciousness really is. Then it’ll be like supersonic movement — we’ll know it when we see it.

            Someone raised an interesting point the other day: There’s a generational thing. In the same way older generations thought actual gender mattered a lot (and now we don’t so much), future generations may think actual consciousness is a silly distinction. Looks like, acts like, talks like, is good enough, why worry?

            The idea is that future generations will see thinking machines as normal parts of the background. That’s already happening. (Or happened.)


          4. I like the term “Rich Turing Test”. The biggest problem with the traditional Turing test (based on remarks Turing made in his paper) is how weak it is, with the threshold being the ability to fool 30% of interrogators after five minutes, a test a mindless chatbot could conceivably succeed at. (Although as far as I know, none have yet, at least without shenanigans.) But fooling 50% after an hour of conversation would be much more convincing.

            The generational thing is interesting. That’s similar to stuff I’ve read saying that consciousness is a cultural concept. I think that’s only partially true. The modern concept of consciousness seems like a way to implicitly (and typically unwittingly) discuss an immaterial soul. But dualism appears to be a deep and universal intuition, with even hunter gatherer cultures having ghost concepts. That’s why much of the ancient Greek writing can be interpreted as talking about consciousness, even though they were writing about the soul (psyche).

            But definitely ideas change over generations. Read material from 90 years ago, and the idea of animals as thinking, feeling beings will be rare. I suspect that when the word “sentience” was introduced in classic SF, it was because most writers thought only intelligent creatures could feel.


          5. Mike, this is the point I tried to make, inelegantly, in my other reply. And sorry for forcing this into my paradigm, but that’s how I make sense of it: input—>[mechanism]—>output. If you have two mechanisms with the same function, and you suspect one is conscious and the other not, you have to look “under the hood”, i.e., break the mechanisms down into sub-mechanisms. You need to find the sub-mechanism[s] where the outputs are different, and that difference makes a difference for the question of consciousness.

            Now IIT will say you can do that. When you look at the sub-mechanisms, one will have a conscious-type structure (feedback) and one will not (feedforward). The sub-mechanisms in the former will have a measurable value (Phi > 0) whereas the sub-mechanisms in the latter will have a different value (Phi = 0). Note: this value depends not on any single sub-mechanism but on the structure of sub-mechanisms and the inputs and outputs between them. The problem with IIT is that while the Phi value seems correlated with consciousness, there’s not a good explanation of why it is causally associated with consciousness. I could just as easily say consciousness is carbon atoms doing what they do. All of the processes in the brain use carbon atoms, so the correlation is good. Sorry, silicon-based robots.

            The article in question is saying, in essence, I think, that if you only have one mechanism, and you cannot look inside that mechanism, then there is no way to tell in IIT whether that system is conscious, because there is no way to tell if the mechanism is feedforward or feedback without looking inside. And they are right. But it turns out that there is a variation on IIT, yes, my variation, in which the Phi of a feedforward system is not zero, but instead is the very minimum of integration and consciousness, a psychule. (This variation has not been rigorously worked out, so don’t ask, unless you want to get into that.) In that case, you possibly can determine consciousness based simply on input and output. This is where Mike and I agree. The nature of the consciousness may be significantly different, but that nature cannot be determined without looking inside the mechanism.

            *


          6. Thanks for the explanation James!

            The authors don’t say we can’t determine whether a system has feedback or feedforward networks. We can. Their point is that you can always get the same output by unfolding a feedback network into a feedforward one. (The proof of this is mathematical, which I won’t pretend to understand, but it feels intuitively correct to me.) The issue is that in the published version of IIT, feedback is a crucial attribute.
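
            (For what it’s worth, here’s a quick toy sketch of that structural check, my own illustration rather than anything from the paper: a network is feedforward exactly when its graph of connections has no cycles; any cycle is a feedback loop.)

            ```python
            def has_feedback(adjacency):
                """Detect a cycle in a directed graph given as {node: [targets]}."""
                WHITE, GRAY, BLACK = 0, 1, 2
                color = {n: WHITE for n in adjacency}

                def visit(n):
                    color[n] = GRAY                    # node is on the current path
                    for m in adjacency[n]:
                        if color[m] == GRAY:           # back edge: a feedback loop
                            return True
                        if color[m] == WHITE and visit(m):
                            return True
                    color[n] = BLACK                   # fully explored, no cycle here
                    return False

                return any(visit(n) for n in adjacency if color[n] == WHITE)

            # A feedforward chain vs. the same chain with a feedback edge h2 -> h1.
            feedforward = {"in": ["h1"], "h1": ["h2"], "h2": ["out"], "out": []}
            recurrent = {"in": ["h1"], "h1": ["h2"], "h2": ["h1", "out"], "out": []}
            assert not has_feedback(feedforward)
            assert has_feedback(recurrent)
            ```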

            The issue I can see with your version using psychules (assuming I understand it, which I may well not), is that it doesn’t seem like IIT anymore. Psychules seem like such a general description of reality that the only place I could see your version of Phi perhaps being zero is in a non-rotating black hole or maybe in an absolute vacuum. And it seems something like a star might have profoundly high psychule-Phi without showing any signs of consciousness.

            But the main question remains. How do we test that any version of Phi accurately represents the amount of consciousness present? How do we really know that it’s telling us where consciousness actually is?


            “But the main question remains. How do we test that any version of Phi accurately represents the amount of consciousness present? How do we really know that it’s telling us where consciousness actually is?”

            First, I don’t think that’s the main question. And I’m going to compare it to the question of food. The main question is not how much food is there. It’s not even how many different food items are on the menu (which I think is what Phi measures). The main question is: is that food or not. (“Chocolate covered grasshoppers? People eat that?”) And asking where consciousness is is like walking into the restaurant and asking where the blue cheese cheeseburger is right now. The question should be: what different kinds of meals can you make?

            As for psychules, the key is the symbolic sign (as defined in semiotics). So if you have two mechanisms, an analog thermostat and a digital thermostat, with the same inputs and outputs, you have to look at the sub-mechanisms. The one that uses symbolic signs has psychule(s). Though it’s better to say that the one with symbolic signs is capable of psychule(s).

            *


          8. But in the case of food, don’t we have a clear test on whether it is or isn’t food? If we can eat it and it gives us sustenance, it’s food. There may be some things that don’t sustain us but also don’t harm us. That could be optional food. Then there are things that would harm or kill us if we attempt to consume them, such as rocks, glass, or paint. The main thing is we have a way to test the proposition.

            Or are you using food as an analogy for psychules and/or symbols, with the meals being consciousness and menus being the types of consciousness possible? If so, I might see where you’re coming from, since what is or isn’t a meal would be an amorphous and shifting thing, which sounds a bit like consciousness to me. But then we’ve reached my position, that there’s no fact of the matter on what is a meal, just as there isn’t for what is or isn’t consciousness; there’s only what we’re prepared to accept as these things.


          9. The point is that first you should decide what counts as food, and then apply that to any particular thing. And then you talk about varieties of food.

            So are these food?
            Grass?
            Salt?
            Pebbles? (Chickens, and presumably other birds, need to ingest stuff like this to help grind up other food)
            Oyster shells? (See chickens, source of calcium)

            “But then we’ve reached my position, that there’s no fact of the matter on what is a meal, just as there isn’t for what is or isn’t consciousness; there’s only what we’re prepared to accept as these things.”

            But there is a fact of the matter if you have established a definition. So if you say food must give you both energy and nutrients, then yes grass, no salt, no pebbles. If you say food must give either energy or nutrients, then yes grass, yes salt, no pebbles. If you say food must give energy or nutrients or aid digestion, … yes pebbles.
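
            (To make that concrete, here’s a toy sketch of the point that once a definition is fixed, there is a fact of the matter. The items and properties are illustrative stand-ins for the examples above.)

            ```python
            # Illustrative only: fix a definition, and membership becomes a fact.
            def is_food(item, definition):
                energy, nutrients, aids_digestion = item
                if definition == "energy and nutrients":
                    return energy and nutrients
                if definition == "energy or nutrients":
                    return energy or nutrients
                return energy or nutrients or aids_digestion

            grass = (True, True, False)      # energy and nutrients
            salt = (False, True, False)      # nutrients only
            pebbles = (False, False, True)   # aids digestion only

            assert is_food(grass, "energy and nutrients")     # yes grass
            assert not is_food(salt, "energy and nutrients")  # no salt
            assert is_food(salt, "energy or nutrients")       # yes salt
            assert is_food(pebbles, "or aids digestion")      # yes pebbles
            ```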

            So my definition requires symbols. What’s your definition again? 🙂

            *


          10. Definitely if we establish and agree on a definition, then there is a fact of the matter. (At least provided our definition is itself coherent.) The difficulty with consciousness is getting everyone to agree on a definition, at least beyond things like “subjective experience”, which itself needs to be defined.

            What’s my definition of consciousness? I’ve given up on a simple concise one. At this point I just describe the hierarchy, each level of which someone points to and declares “consciousness”!
            1. Reflexes
            2. Perception
            3. Attention
            4. Imagination / sentience
            5. Recursive metacognition

            These are like increasingly sophisticated versions of meals. Meals are based on food. Consciousness, in my view, is built on information processing.


          11. How do you test that assertion? The same way you test the assertion that an engagement ring is gold, and not an amalgam of other metals. You look under the hood. Of course, if you don’t already know that the defining feature of gold is that it has atomic number 79, you have a lot of work ahead of you before you can tell fool’s gold from the real thing.

            In that case, you line up a whole lot of putative examples of gold, and investigate them looking for a core property or set of properties that explains why these things are yellow, malleable, non-tarnishing, etc. Note that this explanatory core cannot simply be the list of casually observable properties themselves; it should be the thing in the real world that actually explains these. The thing in the real world, not some future possible thing that might also someday produce the features that we associate with gold.

            Or maybe it’s not gold you’re interested in, it’s jade. You might find that there are two types of mineral involved, jadeite and nephrite. Well that’s OK; you can still explain the features of the examples, and you can disambiguate as necessary.

            Now apply the same quest for explanatory power to consciousness. You might have to disambiguate intentionality vs phenomenal consciousness (and maybe more sub-types). And then you need to explain each one, focusing on central examples, like mammals and birds for phenomenal consciousness. Might integrated information be what explains the features that we associate with phenomenal consciousness? Probably not, but if “phi” fails to explain, it won’t be for the reasons the authors identify.


    2. Paul, I think you’re getting at an important point when you say the authors are saying the network structure has no influence on the important things, i.e., the difference that makes a difference. In my understanding, the authors are saying that for a theory to be scientific you have to get to a point where you can no longer look under the hood. You have to get to a point where you make a measurement, and one result says “conscious” and a different result says “not conscious”, regardless of what is under the hood. If you logically cannot get to that point, then the theory is not scientifically testable.

      *


      1. James,
        I think the authors’ actual contention is that we ultimately have no way to measure consciousness aside from what the system produces. If we have system A and system B, both producing the same output, but with radically different internal mechanisms, and I tell you that one is conscious and the other isn’t, how will you go about scientifically ascertaining which is which?

        Of course, if the system’s internal structure is similar to others who do exhibit conscious behavior, then we might infer that the system in question is conscious, but to get to that point, we had to start with another system that was exhibiting conscious behavior.

        I think you and I agree that if both systems are producing conscious behavior, they’re both conscious, albeit with possibly different implementation strategies. (Although I think you’re more liberal than I am on what conscious behavior amounts to.) But that’s purportedly not the contention of causal structure theories like IIT and RPT.


  3. “[IIT] takes an aspect of how neural systems are organized and posits that that aspect, integration, in and of itself, somehow produces subjective experience. How exactly?”

    How, indeed. It’s cargo cultism, is what it is. Build something that looks like the thing and it will be the thing. (Certain forms of magic canonically work that way. Voodoo dolls, for instance.)

    “And as I’ve indicated before, there’s no evidence, at least not yet, for any ghost, whether generated or channeled by the brain. There’s just the brain and what it does.”

    I agree with the caveat that (the appearance of) our phenomenal experience (for many of us — or at least me) is the ghost. The “autobiographical” self. The *I* that is me.

    Maybe nothing more than what it feels like to be a 60-year-old instance of a neural net evolved over millions of years, but the whole point is that it does feel like something, and that feeling is the ghost.


    1. A never-ending debate which will, I suspect, remain just that. Nothing I have thus far read explains to me what it is to be me. But there again, what is electromagnetism, or any rule or force of nature? Perhaps the ghost is, as many have thought, a law of nature, a rule of physics in its own right.


    2. Using the point I just mentioned on another thread, when I say “ghost in the machine”, I don’t mean the informational ghost in the sense that Microsoft Windows is a ghost in my laptop, but in Gilbert Ryle’s original sense as an entity either generated or channeled by the brain, something separate and apart from it, encompassing both Cartesian and naturalistic forms of dualism, either spiritual or a physical field of some kind.


      1. Ah, right, and The Concept of Mind does predate Ghost in the Shell by 40 years. So do we have four kinds of ghost, then?

        1. The Ghost of Windows (mindless).
        2. The Ghost in the Shell (autobiographical mind).
        3. The Ghost in the Machine (dualism).
        4. The Ghost of the Soul (theism).

        I’m not sure where to put the Ghost of Christmas [time period].

        We’ll have to name our ghosts. Machine Ghost and Shell Ghost? m-Ghost and s-Ghost? 😀

        Who ya gonna call?


          1. We’re just self-aware versions of an O/S? That’s pretty relentless.

            (Of course, the age-old problem remains: How does that account for my (having the illusion of) being self-aware while Windows 10 and iOS aren’t and don’t, as far as we know, have any illusions one way or the other. You lump together what I see as Yin and Yang.)


  4. Your current post invokes IIT, which got me thinking…

    What if IIT is correct, at least about a network being a requirement? And what if, given a known conscious network, it’s unfolded such that it provides the same outputs, but it turns out not to be conscious?

    The unfolding argument assumes system states are sufficient, and any network, recurrent or feed-forward, that generates the same states necessarily has the same overall effect. But what if recursion turns out to be necessary in some way we haven’t quite realized?

    It’s again the argument that system configuration may matter.


    1. The thing to remember is that the unfolding argument pertains to whether IIT is scientific, in the sense of being testable. An IITer can always say the unfolded system is a zombie. The problem is how do we scientifically establish that an apparently conscious system is actually a zombie?


      1. The whole “unscientific” thing, for me, is a non-starter. IIT is the idea that connected networks are sufficient for consciousness. (Do they also claim necessary or do they admit to other possible physical configurations?) That idea can be tested eventually, so it’s plenty scientific enough for me. Until then, everything else is just hand-waving by people with opinions.

        What I’m getting at here is the presumption that folded and unfolded networks are identical at the scale involved. I know we can demonstrate a mapping for small networks, but I’m not so sure it scales when billions of analog nodes are involved.

        How this shows up (I suspect) is in small-scale nuances — resonances — in an analog recursive system that would be extremely hard to recreate in a feedforward system. It’s possible chaotic effects could rule it out. It might be analogous to the difficulty of unbreaking an egg. (Something I actually believe to be impossible even in principle. How do you fuse shell fragments?)

        In any event, I’m questioning whether the unfolding argument is valid due to scaling. But I’m just a hand-waver with an opinion. 🙂


        1. “Do they also claim necessary or do they admit to other possible physical configurations?”

          Standard IIT holds that it is necessary, and that’s arguably what makes it vulnerable to this type of argument. We shouldn’t wave it away with accusations of hand waving. 😉

          IIT’s best defense is to back off of claims to universality. Some IITers do, but the main proponents seem pretty convinced they’ve got the one and only solution.


          1. True, I am more than sympathetic to the idea that a physical network is necessary, but I think the behavior of the nodes (neurons) and their connections (synapses) is more where the meat is than in the mere connectivity. (I reject the notion that any complex network is conscious.)

            A thing that occurs to me about feed-forward versions is their size. Say there are 1000 mental states per second. That requires 1000 distinct levels of the unrolled network per second of thought — each level presumably involving most, if not all, of the neurons.

            The obvious advantage of a recursive network is that sequences of mental states reuse the same nodes. Such a network can run indefinitely without requiring an infinite number of nodes.

            There is also the interesting situation that input and output neurons need to repeat many times along the unfolded network.

            That, combined with possible scaling issues, makes me a bit skeptical the unfolding argument works, and even if it does, I don’t see how it makes IIT “unscientific” — that part, as I said, is a non-starter in my book. I don’t think it demonstrates anything of the kind.
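
            (A quick back-of-envelope sketch of the size point, with purely illustrative figures:)

            ```python
            # Purely illustrative numbers: unrolling duplicates the whole
            # network once per mental state, while a recurrent net reuses it.
            neurons = 86_000_000_000          # rough human brain neuron count
            states_per_second = 1_000         # the hypothetical rate above
            seconds = 60                      # one minute of thought

            recurrent_nodes = neurons                               # reused each state
            unrolled_nodes = neurons * states_per_second * seconds  # one layer per state
            print(f"recurrent: {recurrent_nodes:.2e} nodes")        # 8.60e+10
            print(f"unrolled:  {unrolled_nodes:.2e} nodes")         # 5.16e+15
            ```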


      2. It appears that our comments here don’t make this post appear in Conversations in the Reader. Not sure if that’s deliberate (post too old?) or just another Reader bug.

