A response to the unfolding argument: a defense of Integrated Information Theory

Back in May, I shared a paper that made a blistering attack on the integrated information theory (IIT) of consciousness.  A major point of IIT is that a specific causal structure is necessary to generate phenomenal experience, namely a feedback or recurrent neural network, that is, a neural network with structural loops.  To be clear, IIT asserts that this causal structure must be physical.  Implementing it in a software neural network wouldn’t be sufficient.

However, the unfolding paper mathematically demonstrated that any output that can be produced by a recurrent network can also be produced by an equivalent “unfolded” feed-forward network.
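
To make the unfolding construction concrete, here’s a minimal sketch (my own toy example with made-up sizes and random weights; the actual paper gives a formal proof, not code).  A recurrent network run for T steps is reproduced exactly by a feed-forward network T layers deep, each layer a copy of the recurrent step:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_h = 3, 2, 4                # time steps, input size, hidden size
W_in = rng.normal(size=(n_h, n_in))   # input-to-hidden weights
W_rec = rng.normal(size=(n_h, n_h))   # recurrent (loop) weights

def recurrent(xs):
    """Recurrent net: one set of weights; the hidden state loops back each step."""
    h = np.zeros(n_h)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unfolded(xs):
    """Unfolded feed-forward net: T separate layers, no loops.
    Each layer holds its own copy of the weights, so the causal structure
    is purely feed-forward, yet the computation is identical."""
    layers = [(W_in.copy(), W_rec.copy()) for _ in range(T)]
    h = np.zeros(n_h)
    for (Wi, Wr), x in zip(layers, xs):
        h = np.tanh(Wi @ x + Wr @ h)
    return h

xs = rng.normal(size=(T, n_in))
assert np.allclose(recurrent(xs), unfolded(xs))  # identical input-output behavior
```

The assertion passes because the unfolded net computes the very same function.  The only difference is whether the loop exists as a physical structure or as repetition across layers, which is precisely the distinction IIT says matters for consciousness.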

If so, there might be no observable differences in behavior between a system IIT predicts is conscious and a computationally identical but unfolded one that IIT predicts is not conscious.  In other words, IIT would be labeling the feed-forward system a philosophical zombie (a behavioral one).  Zombies are untestable, making this aspect of IIT untestable, leading the authors to label IIT as unscientific.

Now, a group of IIT researchers have produced a response paper.  One of the response authors shared what appears to be a draft version.  (Which means this link might go dead at some point.  I’ll try to remember to update it when the actual paper gets published.)

The response authors start by labeling the stance of the unfolding paper as one of methodological behaviorism, one they say advocates studying the external behavior of subjects and basing theories entirely on that.  Their stance, on the other hand, depends on “the link – known to each of us by first-hand experience – between conscious experience and behavioral reports,” and builds theories based on conscious experience.

There is some discussion about assessing the consciousness of a human compared to that of a black box system of some kind.  They think it’s fair game to use what is known about the human’s internal structure to privilege the human in this assessment.  This seems like an implicit argument against the Turing test.

The response authors go on to assert that the analysis of the unfolding paper was done within the behaviorist and functionalist mindset.  They take the stated conclusions from that paper and amend their language, in effect adding: “except for consciousness.”

For example:

“(P3): Two systems that have identical input-output functions cannot be distinguished by any experiment that relies on a physical measurement (other than a measurement of brain activity itself or of other internal workings of the system).”

becomes:

P3’ “Two systems that have identical input-output functions (except where conscious experience is concerned) cannot be distinguished by any experiment that purely relies on input-output measurements. We can distinguish these two systems, however, by understanding the internal working of the respective systems, through the systematic investigation of the links between reports about consciousness and physical stimulations/measurements of the internal mechanisms.”

It’s worth noting that their proposed methods still crucially depend on reports, which are just a form of output measurement.  In other words, they’re still doing what the unfolding authors say is the only way to proceed.  They’re just implicitly asserting that the specific details of internal structure will make a difference in those measurements.

But if they turn out to be wrong about that, and the output of some system is the same irrespective of its specific internal structure, then under IIT they’d be forced to regard that system as a zombie, which again brings us to the point the unfolding authors made.  We still only have the output of the system as evidence.  We can only use internal structures as an indicator if they happen to match those of another system whose behavior has already convinced us of its consciousness.

Interestingly, the response authors could have attacked this at a computational level, asserting that another physical causal structure that is identical in terms of inputs and outputs might not be identical in terms of efficiency or performance.  It might be that IIT is wrong in principle but right in terms of effectiveness.  But that would have meant engaging the argument on functional grounds.
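
For a rough sense of what’s at stake here (my own back-of-the-envelope arithmetic, not a figure from either paper): a recurrent network reuses its loop weights at every time step, while the unfolded equivalent needs a fresh layer of weights per step.  So a network with a million recurrent weights, run for a thousand steps, unfolds into something on the order of a billion feed-forward weights.  Identical input-output behavior, wildly different resource requirements.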

Finally, the response authors make the case that the mere fact that IIT makes some untestable predictions is no reason to label it unscientific.  Many scientific theories make such predictions.  The question is whether they make other predictions that actually are testable.  Theories, they point out, should be judged on their testable predictions.

I actually felt like this was the strongest part of their argument.  In retrospect, I think the unfolding authors overstated the case against IIT.  The predictions they discussed are untestable, but not all of IIT’s predictions are.

On the other hand, the response to the specific issues raised didn’t strike me as successful.  And theories should also be judged on whether they’re the simplest explanation for the evidence.

Admittedly, this comes from the perspective of a functionalist who finds little theoretical merit in IIT.  But maybe I’m missing something?

12 thoughts on “A response to the unfolding argument: a defense of Integrated Information Theory”

  1. I think the key here is the role of structure vs. function (which I just saw you hit on in your previous post, and which I don’t think I appreciated until now) and the relation of both to behavior.

    I want to start out by saying the response authors mischaracterize the “behaviorism” associated with the unfolding authors, and with Dennett (whom they single out in their paper). The response authors address Skinner-type behaviorism, which said don’t bother looking inside a subject’s head; just look at the external behavior, including verbal reports. In contrast, Dennett’s behaviorism says you can look inside the head, but then all you see is the behavior of organs like the brain. You can look inside the brain, but then you only see the behavior of substructures like the cortex. You can look inside the cortex, but all you see is the behavior of cells like neurons. You can look inside neurons, but all you see is the behavior of molecules. And so on. Dennett would say that to explain consciousness you can stop digging deeper and just look at the behavior at some level, and the level for explaining consciousness is almost certainly neurons.

    [more soon]


    1. I agree that characterizing the unfolding authors or Dennett as old-school behaviorists is misleading. Dennett’s heterophenomenology doesn’t reject introspective report. It just takes it as data of a certain type (how things seem to the subject) to be used in relation with objective data. It’s similar to what Dehaene and others use: report data as one of many data sources.

      In truth, behaviorism is one of those notions, similar to scientism, that seems to get pulled out when people dislike the limitations of scientific evidence. When I saw it in the paper, I was pretty sure I wasn’t going to be impressed by much of what followed.


  2. My take on this is that you can’t prove something conscious using a black box (input/output) analysis, although things like verbal reports of inner states might give you strong hints that the light is on and someone’s home. You have to take a white box approach that breaks apart the conscious system and explains how the thing that subjectively believes itself to be conscious (which is not the whole organism; it is a distributed process operating in part of the brain) works to convince itself it knows about things, including its own existence.

    For me a lot of the discussion seems to miss the point that we don’t need a standard scientific account of how consciousness exists. We only need to demonstrate that the conscious thing has a mechanism that enables it to decide that some things, including itself, exist, and to act accordingly. That’s what the subjective nature of consciousness means.


    1. On black vs white boxes, my question would be, weren’t humans effectively black boxes throughout most of history? And yet people have been writing about consciousness for millennia. (The Greeks didn’t have a specific word for it, but their writing about vegetative, sensitive, and rational souls amounted to the same thing.) The white box approach seems like a very recent luxury. (And arguably, our boxes are more grayish so far.)

      I think I’m in full agreement with your second paragraph.


      1. “Weren’t humans effectively black boxes throughout most of history?” I’d say no, because each of us is able to introspect and thus turn our black boxes grey to ourselves, and communicate that to others. Although we may not have fully objective (white box) access to how our own consciousness works, practices like meditation do seem to be getting close to enabling the practitioner to suppress or simplify the contents of consciousness in order to better perceive its mechanisms.


        1. Good point. Consciousness wouldn’t even be a subject if we didn’t have access to our own, and healthy humans being anatomically the same, it’s rational to assume their behavior comes with similar internal experiences. But it’s striking how often we’re wrong about other people’s experience, or our own state of mind.

          And when it comes to other systems, particularly ones very different from us, such as cephalopods, what choice do we have? We could rule out consciousness in anything that isn’t anatomically similar to us (many do), but that seems to risk being callous to a lot of feeling beings, and missing opportunities to learn about their version of consciousness.


    2. I wanna push on that second paragraph a bit. Seems like a demonstration of a mechanism that enables [an entity] to decide things exist is (or necessitates) a scientific account, assuming the demonstration shows how the mechanism works.


      1. James, I think there’s a subtlety here that I haven’t quite nailed down, which means that an account of the mechanisms of consciousness is consistent with science, but feels a bit inadequate to us when set against our subjective experience of being conscious.

        Science relies on generating objectively true accounts of the world that others can verify. However, the mechanisms of consciousness mean that each of us senses and discriminates things in individual, quirky ways. While we might get quite close to the same detection and classification categories if we are genetically similar, raised in a similar culture, and speak the same language, we are not replicants, and so each subjective consciousness is different from every other one. How do you bring scientific objectivity to something that is, by its nature, subjective: a one-off, not repeatable?

        I think a study of consciousness may be closer to being an engineering activity rather than science, because it’s about building something that has particular characteristics, including characteristics of its inner experience.

        The fundamental looping, self-referential and self-modifying architecture of consciousness might be expressible in a mathematical notation (a function that can read and write its own functional identity), and that might capture the core distinctive feature of consciousness… but maths isn’t science; it’s more fundamental, since it is able to express many possible sciences!


        1. I don’t think I’m arguing against anything you’re saying, so I may be missing a point, but I’m wondering if you’ve read Ruth Millikan’s Beyond Concepts. In the first part she describes unitrackers, which are mechanisms that track one thing (thus, “uni”). So you might have a unitracker for “cat”. This unitracker receives inputs from various systems, including other unitrackers. Further unitrackers can be created using input from this one, such as “Felix the cat”, or the cat I am looking at right now.

          My point is that these unitrackers can explain “[t]he fundamental looping, self-referential and self-modifying architecture of consciousness”. And we need science to show exactly how this is done in the brain. But like you say, if the unitracker idea pans out, it becomes something of an engineering issue. (And maybe an artificial intelligence engineering path!)
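
          For concreteness, here’s a toy sketch of that composability (my own illustration, not Millikan’s formalism; the names, the Unitracker class, and the majority-vote rule are all made up):

          ```python
          # Toy sketch of the unitracker idea (my own illustration, not
          # Millikan's formalism; names and the threshold rule are made up).
          class Unitracker:
              def __init__(self, name, inputs):
                  self.name = name        # the one thing this tracker tracks
                  self.inputs = inputs    # feeders: raw features or other unitrackers

              def active(self, evidence):
                  # Fires when a majority of its feeders are active (crude rule).
                  votes = [inp.active(evidence) if isinstance(inp, Unitracker)
                           else inp in evidence
                           for inp in self.inputs]
                  return sum(votes) >= (len(votes) + 1) // 2

          # "Felix the cat" is built from the "cat" tracker plus extra features,
          # illustrating how trackers feed into higher-level trackers.
          cat = Unitracker("cat", ["fur", "whiskers", "meow"])
          felix = Unitracker("Felix the cat", [cat, "black-and-white", "cartoon"])
          print(felix.active({"fur", "meow", "cartoon"}))  # True: majority of feeders fire
          ```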


  3. That unfolding argument is a pretty good one, but I already (and still) agree with Scott Aaronson when it comes to IIT, so there’s a sense of beating a dead horse.

    You’d think I’d be more sympathetic to structure arguments, and I do think a network is required, but I think there’s a lot more going on. The unfolding argument seems to show, if IIT or anything like it is true, that structure matters and merely producing the correct outputs may not.

    That part of the argument I’m definitely sympathetic to. 😀


    1. I will say this for IIT, it at least takes a shot at explaining why specific physical structures might be required. It’s something usually absent in those types of propositions, going all the way back to Searle. I suspect a lot of biological naturalists find IIT enticing for that reason.

      The question is whether IIT is trying to explain something like the motion of heavenly bodies, or the Aristotelian spheres.

