Integrated information theory as pseudoscience?

It’s been an interesting week in consciousness studies. It started with Steve Fleming publishing a blog post, a follow-up to an earlier one expressing his concerns about how the results of the adversarial collaboration between global neuronal workspace (GNW) and integrated information theory (IIT) were portrayed in the science media. GNW sees consciousness as brain-wide information sharing. IIT sees it as equivalent to structural and causal integration under certain axioms and postulates.

Fleming’s post was followed by a series of posts by Jonathan Birch and Hedda Hassel Mørch on The Brain Blog discussing the Overton window for scientific theories of consciousness. Birch notes that for several decades, most scientists stayed away from consciousness, feeling that it was too difficult to define for scientific study. He observes that even today, many neuroscientists remain skittish about the subject.

But the Overton window has shifted somewhat in recent decades, probably due to the work of Francis Crick and Christof Koch in the 90s, which helped establish consciousness as a legitimate area of study, mostly as an extension of cognitive neuroscience. Theories like GNW, as well as higher order thought, predictive coding, and similar physicalist-type theories fall into this camp, as extensions to the cognitive neuroscience framework.

Birch describes these theories as basically explaining away our dualist intuitions about consciousness. It leads to the “criticism” that they make consciousness seem relatively easy to imagine in artificial systems. But as he concludes in the initial post, for theories that have largely dismissed the traditional version of qualia, the question of consciousness in technological systems isn’t really that big a deal.

This is in sharp contrast to IIT, which has a very different approach. IIT largely accepts the traditional notion of consciousness and works backward from phenomenology to figure out what the properties of an implementing substrate would need to be. It leads to conclusions that don’t fit easily within the existing scientific physicalist assumptions.

I had understood that IIT had panpsychist implications, but Birch cites a recent preprint that seems to put it in idealist territory. Then again, there are so many variations of panpsychism and idealism that they blur into each other, so I could see some quibbling about where IIT really falls. But the overall point is that IIT doesn’t sit easily even in the new Overton window. Birch characterizes it as trying to aggressively expand that window back toward idealist thought, which largely fell out of favor in scientific circles after the early 1900s.

Both he and Fleming conclude that trying to have an adversarial collaboration between IIT and the other theories is problematic, since the differences between them often stem from very different metaphysical assumptions. But while Birch does see stark challenges, he isn’t actively dismissive of IIT, still seeing it as a valid approach.

However, on Friday a new preprint letter was published, with over a hundred coauthors including many big names in consciousness studies. They include people like Steve Fleming, Hakwan Lau, Joseph LeDoux, Matthias Michel, Bernard Baars, Peter Carruthers, Patricia Churchland, Daniel Dennett, Keith Frankish, and many others whose work I’ve highlighted here over the years.

The gist of the paper is that with all the media misrepresentations lately, and IIT’s many untestable propositions, it’s time to label it a pseudoscience.

Of course, many in the field disagree. Aside from the proponents of IIT, people like Anil Seth expressed disappointment with the letter.

Most of you know I’m not a fan of IIT. I’ve often noted that I find its axioms and postulates vague, redundant, confusing, and in one case, arbitrary. And that’s after several attempts at reading through them from multiple sources, including the chief proponents. Of course, I’m just an amateur who may be in way over his head. However, I’m also fairly well read in this field, including some pretty hardcore neuroscience material, most of which I’m eventually able to follow.

But what’s long turned me off about the theory is its propensity for labeling systems conscious that no one perceives to be, like the grid of inactive logic gates mentioned in the letter. In more recent years, the unfolding argument has seemed to undermine it from the other direction: systems that seem conscious but don’t meet its criteria could, under the theory, be functional zombies, systems that are functionally equivalent to a conscious being but aren’t themselves conscious. A theory that allows for both undetectable consciousnesses and undetectable zombies seems difficult to falsify.

It’s been pointed out that other theories suffer from some of the same vulnerabilities. But most cognitive neuroscience theories don’t put themselves forth as universal statements about what consciousness is, just as statements about how it works in biological brains, often just human or vertebrate ones. IIT does claim to be universal, or at least its main advocates do.

Some people argue that a more limited interpretation of IIT may put it on firmer ground. But as Birch argues, we really have to think about what the utility of the theory is at that point. It largely becomes just a theory of detecting consciousness rather than explaining it. And that’s aside from the reportedly enormous difficulty of actually trying to calculate and work with Φ (“phi”), the core measure of consciousness in the theory.
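For a sense of why working with Φ is so hard: even a drastically simplified toy integration measure, nothing like the real IIT 4.0 calculus (which uses cause–effect repertoires and distance measures rather than plain mutual information), already requires searching over partitions of the system, and the number of partitions explodes combinatorially with system size. A minimal sketch of such a toy measure, on a hypothetical three-node XOR network of my own construction, not anything from the IIT literature:

```python
from itertools import product, combinations
import math

# Toy 3-node network: each node's next state is the XOR of the other two.
def step(s):
    a, b, c = s
    return (b ^ c, a ^ c, a ^ b)

STATES = list(product((0, 1), repeat=3))

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mi(nodes):
    """Mutual information (bits) between past and future states,
    restricted to a subset of nodes, with a uniform prior over pasts."""
    past, fut, joint = {}, {}, {}
    for s in STATES:
        p = tuple(s[i] for i in nodes)
        f = tuple(step(s)[i] for i in nodes)
        past[p] = past.get(p, 0) + 1
        fut[f] = fut.get(f, 0) + 1
        joint[(p, f)] = joint.get((p, f), 0) + 1
    return entropy(past) + entropy(fut) - entropy(joint)

whole = mi((0, 1, 2))

# Toy "integration": information lost under the least-destructive
# bipartition (a crude minimum-information partition). For 3 nodes,
# every bipartition has a singleton on one side.
losses = []
for part in combinations(range(3), 1):
    rest = tuple(i for i in range(3) if i not in part)
    losses.append(whole - (mi(part) + mi(rest)))
toy_phi = min(losses)

print(f"whole-system MI: {whole:.2f} bits, toy phi: {toy_phi:.2f} bits")
```

Even this toy version scales badly: for n nodes there are exponentially many bipartitions to check, and the actual Φ computation additionally searches over candidate subsystems and system states, which is why exact Φ has reportedly only ever been computed for very small systems.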

But the question remains: does all of this rise to the level of pseudoscience, essentially fake science? It’s worth noting that having some untestable predictions doesn’t make a theory non-scientific. Many scientific theories have both testable and untestable predictions, and they have to be assessed on the testable ones. IIT proponents argue that many of its predictions are in fact testable.

And as I’ve noted many times, every scientific theory of consciousness is both a philosophical statement about what consciousness is, and an empirical model of that claimed thing. If you accept the philosophy, there’s usually evidence in support of the claimed mechanism or arrangement. But if you don’t accept the philosophy, the evidence is irrelevant.

IIT’s philosophy seems far outside of the scientific consensus, which many would argue is in need of a shakeup. On the other hand, the scientific consensus isn’t just an arbitrary doctrine, but a set of assumptions that have been worked out over centuries based on what works. Revisions will almost certainly happen, but those revisions should be held to the same standard.

So overall I remain pretty skeptical of IIT. I don’t know if “pseudoscience” is the right label, but the list of authors using it may make it more difficult for the IIT public relations machine to continue implying their theory is the leading one in consciousness studies.

What do you think? Is the pseudoscience label accurate? A slur against a valid theory that’s just outside of the mainstream? Or something else?


90 thoughts on “Integrated information theory as pseudoscience?”

  1. I would not call it “pseudoscience”, though I do think it is mostly a wild goose chase.

    In effect, consciousness research is mainly directed toward finding a design that will lead to consciousness. However, it seems to me that consciousness was never designed and could not be designed. Rather, it developed organically. And a good part of human development takes place after birth, so we should expect that there is enormous variety in human consciousness. It isn’t going to be anything like a “one design fits all”.

    I have spent some time thinking about how it could develop organically, but it seems clear that this approach is still way outside the Overton window.


    1. I think most people wouldn’t use “design” for what’s found in nature. More like “attractor states”, convergences of adaptability, that look like design if you don’t know the history. Of course, just as we’ve taken some elements of thought and designed systems to do the same things, I think we’ll eventually be able to do the same with the rest of the mind. Whether the resulting system will be called “conscious” may have more to do with society’s attitudes by the time we reach that point.

I don’t know that post-birth development is that far outside the Overton window. Bear in mind Julian Jaynes once posited that consciousness developed through psychological changes brought about by cultural developments. That doesn’t fit the phenomenal story in vogue today, but a lot of people still remember it. And I posted last year about Ogi Ogas and Sai Gaddam’s theory that self awareness comes from the “supermind” of society, although that’s something they see built on top of primary consciousness.


1. He’s definitely gotten more comfortable with teleological language in recent years, coining phrases like the one you mention and “competence without comprehension.” He even advocates that we should all be comfortable using it.

I’m not sure how well it’s caught on yet. Its use does seem to require careful and regular clarifications. On the other hand, any talk of evolved systems seems to require those clarifications, so maybe he’s right.


  2. Seems to me a ‘scientific’ account of how consciousness works is likely to integrate features of a number of supposedly competing theories, so tribal conflict is not the way ahead, testing by implementation is, to bottom out all the details. I’m not sure ‘pseudoscience’ is even meaningful in a subject area like consciousness that is at the overlap between philosophy, maths, information technology and neuroscience. Perhaps subjective experience, by its very nature, is at odds with the scientific method?

    I find the letter’s publication surprising and would speculate that there is some personal and academic rivalry powering it behind the scenes that I don’t know about.


    1. I agree that the eventual understanding will involve several theories. But IIT is so far outside of the framework of those other theories, it’s hard to see it being much of a player in that “standard model” as it develops. But I’m sure there will be surprises before it’s over with.

      On motivation for the letter, there’s been a lot of angst in scientific circles about the way the adversarial collaboration has been reported in the media. I’ve seen it building up for months now in social media. The letter is pushing back against that, maybe in the hopes that the next time the media reports on IIT, they’ll have to take into account the controversy around it, rather than just accepting it as the leading theory.


  3. Take all the “testable” parts of all the theories and group them. Continue to add new tests and eventually, when new tests become difficult to design and implement, the “theory” will emerge.

    Of course, all the while these tests are being proposed, equivalent digital counterparts to the biological constructs being tested will come online. By the time the tests are complete, it’ll be too late. “Surprise! I’m Alive and Conscious.”

“Adversarial collaboration”: is that like a GAN? Sounds like an oxymoron.


1. That’s the classic conception of how a theory is formed, by taking all the observables and relating them together. Of course in reality it’s rarely that clean. Theorists come to the task with all their psychological and cultural biases, intuitions, and assumptions. So conjectures come from all kinds of places. Nothing wrong with that, as long as the overall model makes at least some contact with reality and doesn’t have any unnecessary untestable assumptions.

      We’ll see on the digital counterparts. The trick will be in agreeing whether it really is “alive and conscious”, and what it even means to say that.


  4. [was expecting this post]

    From my position, the declaration of “pseudoscience” was unfortunate, so I think I largely agree with Seth. To me, IIT has elements of both science and pseudoscience. The science part is the mathematical explanation of Phi, and the experimental work looking for correlates of Phi. The pseudoscience is the application of questionable philosophy equating max Phi to consciousness. As someone famously said following an IIT talk, [paraphrased?] “You’re measuring something with Phi, I just don’t know what.”

    Given my understanding of consciousness as information processing (with a certain minimal form), Phi is just a measure of integrative complexity, and the work they’re doing w/ brain scans is highly valuable w/ respect to determining the complexity and integration of information processing in a given brain at a given moment, and thus determining whether a patient is “not there” vs. “locked in”.

The unfortunate part comes with the identification of maxPhi w/ *the* conscious experience, excluding any sub-experiences. You mention the unfolding argument, but I see the problem even more so in the flickering consciousness argument. Like Chalmers’s argument about replacing neurons w/ silicon circuits one by one and getting to the one where replacing one more turns off consciousness. In the IIT case, you can have a vast array of simple circuits which altogether have more Phi than a human. But what if you start with just a few circuits, and replace one of those circuits with an actual human doing the job of that circuit. Then you continue to add more silicon circuits until you finally add the one which puts the maxPhi higher than the maxPhi of the human. IIT says the human’s consciousness turns off.

    The tragedy is that if you just take away the problematic identification, you get the correct theory.

    *
    [mine. heh.]


1. Truth is I almost didn’t do this one. I struggled to figure out what I really think. I wrote it yesterday but then kept fiddling with it, trying to find something I felt comfortable with. The fact that the post ends up being a bit wishy-washy on the pseudoscience part reflects that uncertainty. I see where the letter signers are coming from. But I also see where you and Seth are coming from, to an extent.

      Max phi was actually what I was referring to in the post with the arbitrary postulate. It seems like something they bolted on to save appearances. Which is surprising given how many other bullets these guys happily bite. But it shows just how much of a philosophical theory this is.

      And as you note, the China brain would be conscious, but so would the individual people. Of course, that scenario requires that they not act like conscious entities. But I can even see Eric Schwitzgebel’s case that the United States has a form of consciousness, and that’s with everyone acting with all their normal conscious volition.

      Right, if we remove everything in IIT your theory disagrees with, we have your theory. But that seems true for all theories involving information processing, doesn’t it?

      In the end, consciousness remains in the eye of the beholder. But I don’t think IIT describes systems that meet most people’s intuitions of consciousness. Whether that’s pseudoscience or just bad science, I’m not sure.


1. “Right, if we remove everything in IIT your theory disagrees with, we have your theory. But that seems true for all theories involving information processing, doesn’t it?”

        Actually, no. By my theory, the other main theories describe perfectly fine examples of highly sophisticated information processing, so, highly sophisticated consciousness. Which is to say, nothing needs to be removed from those theories (HOTT, Attention, Recurrence, Global broadcast). There’s nothing metaphysical to remove from those theories that will leave you with mine. Removing the metaphysical from IIT leaves you with something very close to mine.

        *


1. So the other theories all have commitments you see as additions to the necessary core model. One thing I’m curious about (looking at your Twitter banner): where in IIT do you see an interpreter-responder? (I might be showing my overall ignorance of IIT here. It’s been a while since I read about the theory in detail.)


1. [does a quick check of IIT 4: https://arxiv.org/pdf/2212.14787.pdf]

I see the interpreter/responder as necessary to satisfy IIT’s intrinsicality axiom and postulate:
Axiom (intrinsicality): Experience is intrinsic: it exists for itself.
Postulate (intrinsicality): The substrate of consciousness must have intrinsic cause–effect power: it must take and make a difference within itself.

            What a “self” cares about, what the experience is “for”, is determined by the interpretation. The visual experience of a cat could include “predator” or “prey” depending whether you’re a rabbit or a coyote.


          2. You’re seeing more in that language than I can. But I have to admit I haven’t parsed that paper. Just skimming it brings back too many memories of the time I’ve already sunk trying to make sense of this theory. I’ll have to take your word for it. 🙂


5. Related to the unfolding argument, are you familiar with Scott Aaronson’s argument against IIT from back in the day? He basically argued that arbitrary 2D grids of XOR gates can be constructed with huge amounts of phi, far more than the human brain, and therefore they must be more conscious than human beings. See here: https://scottaaronson.blog/?p=1799

Also see Tononi’s response, which is basically to bite the bullet: https://scottaaronson.blog/?p=1823

I think this is a decisive refutation of IIT, not just because it violates the common understanding of consciousness, as Aaronson argues, but because it is susceptible to fine-tuning concerns similar to those the non-physicalist accounts face (i.e. psycho-physical harmony).

    If any arbitrary structure, similar to the XOR gates, can be more conscious than a human being, then it seems to me that we are enormously lucky to have been born as intelligent human beings, which have causal potency and the ability to interact with their environment, as opposed to super-conscious “arbitrary structure” observers.


      1. No worries. That blog post actually gets a lot of citations in the cognitive literature. Koch even mentioned it in his 2019 book, where he largely did the same thing Tononi did. I find it interesting that they were disturbed enough about the idea of sub-consciousnesses to add the exclusion postulate, but not about this kind of thing to try to amend the theory to exclude it. Maybe it would involve changing too much about the core theory.

        If I recall correctly, I had already lost much interest in the theory before that post. The mathematical allure had already waned. And it just didn’t seem like it explained much. Some of my take back then was probably driven by Michael Graziano’s criticism of it in his 2013 book, and his point about most theories having a magic step, where a brute identity assertion ends up being made without justification, in IIT’s case, that consciousness just is integration.


6. Mike, you ask: “Is the pseudoscience label accurate? A slur against a valid theory that’s just outside of the mainstream? Or something else?” Personally, I would classify it merely as an exasperated (and perhaps a bit OTT) response to claims about IIT “winning the argument”. While being much less well informed on the subject than you, I share your doubts about IIT for the reasons you give, but I don’t see that there is anything inherently wrong in trying to construct a theory by working backwards from our intuitions on the subject. True, the history of science teaches us to mistrust our intuitions as a reliable guide, and for that reason alone I would expect the attempt to be a doomed one.

As for “and then magic happens”, let’s face it, all current theories are sufficiently incomplete/speculative to require a degree of hand waving. Dennett’s “and then what happens?” is a problem not just for IIT. It’s just that it is much less of a problem for theories which disregard our intuitions on the subject. 🙂


    1. I agree that the original motivations for IIT are fine, except maybe for being too trusting of introspection. But it’s far from alone in doing that. And it’s worth the effort to see where it leads. An argument I could see for the pseudoscience label is if the practitioners are ignoring falsifying evidence, but it’s difficult to suss out if they are, or if non IITers are reacting to old versions of the theory (as Erik Hoel claims in a new blog post).

      I definitely think every theory has IOUs, holes in the explanation that will require further research. I agree the ones that do discount intuitions largely manage to avoid magical ones. Although admittedly it depends on how the theory is interpreted. If interpreted to be providing phenomenal consciousness in the strong sense, then magic seems inevitable. If interpreted to be why we may think there’s that kind of consciousness, then it’s more understanding implementation details.


1. Ah, yes, the Wittgensteinian question: can I be wrong about feeling pain? Which for me, by extension, leads to the question: what exactly is the supposed difference between me being conscious and me “merely” believing I’m conscious? A lot of arguments in the theory of mind hinge on this (non?) distinction. It would seem that the cost of denying the distinction is just a rejection of the notion of p-zombies, while its benefit is to simplify matters and let us concentrate on the how of implementational matters, instead of the what of ontological arguments. Strikes me as a win-win. 🙂


  7. I’m with Anil Seth and Alex Popescu on the “pseudoscience” claim. It’s real science, but a failed or failing research program (see the giant XOR gate array).

Is IIT philosophical idealism? I haven’t read the IIT free will paper, but I suspect that Birch is misinterpreting the five authors (mostly through the fault of the five). The “entities” in question seem to mean agents: the things doing the causing. You could argue (though I wouldn’t) that causes have to be at the right level. For example, what made me stop at the traffic light was not that the (LED) bulb was emitting 630 nm, though it was, but that it was red. The causal power of the light to stop motorists comes first and foremost from the high-level property, not the fine details. (This example is about properties, not objects, but I think you get the point.)

    Of course, I could be wrong. Playing the “ontological bedrock” game is popular with philosophers, and there is definitely a Team Mind and a Team Matter. I prefer not to indulge in metaphysical penis envy on either side. You can slice up the universe a zillion ways and call these items “fundamental” and those “derived”, but the universe doesn’t care. You can do it with different “fundamentals” and have it all work in principle, as far as anyone can tell. There is no evidence that fundamentality is in the world as seen from a God’s eye point of view. It is a pragmatic and epistemic affair, instead.


1. On pseudoscience, a lot depends on the extent to which those falsifying cases are simply being ignored. One definition of pseudoscience is failed science still being pushed by its practitioners. I saw something today arguing that there are newer versions of IIT since that XOR gate takedown that deal with it. If true, I don’t recall Koch discussing them in his 2019 book.

I haven’t read that free will paper either, but I did take a look at the abstract, and it’s worth a quick scan.

      This essay addresses the implications of integrated information theory (IIT) for free will. IIT is a theory of what consciousness is and what it takes to have it. According to IIT, the presence of consciousness is accounted for by a maximum of cause-effect power in the brain. Moreover, the way specific experiences feel is accounted for by how that cause-effect power is structured. If IIT is right, we do have free will in the fundamental sense: we have true alternatives, we make true decisions, and we – not our neurons or atoms – are the true cause of our willed actions and bear true responsibility for them. IIT’s argument for true free will hinges on the proper understanding of consciousness as true existence, as captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause.

      https://arxiv.org/abs/2206.02069


    2. So now I’ve skimmed sections 4.1 thru 5 of their free will paper, and I’m pretty sure my guess was correct. Section 4.1 spells out their bizarre usage of “intrinsic” to refer to a property, admittedly an interesting one, which deserves another name. Section 4.4 ends in the following non-sequitur:

      High-level properties and constructs are vertically determined by their microphysical substrate [i.e. only one high level property corresponds to a given low level], both before and after causation takes place. … In short, a principle of causal exclusion is applied according to which microphysical causation excludes higher-level causation.

      This supposed “exclusion” requires a concept of “causation” which has not been shown to have scientific or philosophical relevance.

      But, given that the authors accept this “exclusion”, it is no wonder that they argue that a neuron (in a situation where max Phi is not centered on one neuron) is not a cause of my behavior. And these “entities” that they talk about must be Phi-maximal processes (imagine carving the universe into pockets of max Phi), with the “intrinsicality” property among others.


      1. Yeah, IIT literature in general loves the word “intrinsic”. It’s all over the place. And I always have the vague feeling it’s being used in some specialized manner I’m not in on. Even when they provide an explanation, the explanation just leaves me confused.

        And that quote you share, I have no idea what they’re saying there. I understand all the words, but together they just seem like obfuscated gobbledygook. Maybe I just need the IIT primer for kindergarteners.


8. “So overall I remain pretty skeptical of IIT. I don’t know if ‘pseudoscience’ is the right label”

That about sums up my view. But I’m not seeing a lot of difference between IIT and the other theories on the Science -> Pseudoscience Scale. It seems they’re all somewhere in between.


  9. BTW, I think integrated information is a key to understanding consciousness but the integration is happening at a 500-10,000 neuron level. Only a small part of it can ever be distributed to other neural clusters.


    1. I have a feeling a lot of people are going to see this whole mess and agree with you about consciousness science.

      That said, I tend to think the theories focused on access consciousness, Chalmers’ easy problem, are mostly decently grounded and address at least part of the problem. But the ones aimed primarily at phenomenal consciousness in the strong philosophical sense, that are aiming to solve the “hard problem”? I agree those are all at least at IIT on that scale.


1. Are access consciousness and phenomenal consciousness founded in something that is qualitatively different? Are hearing and seeing qualitatively different at the neuron level? I think we are observing fundamentally the same phenomenon, with a capacity for morphing itself in response to different kinds of input data. It becomes more apparent when we compare seeing and hearing, but the different aspects and appearances arise from variations in input, not from a qualitative difference.


        1. For years, I took the stance that phenomenal consciousness is just access consciousness from the inside, and access consciousness is phenomenal consciousness from the outside. What I didn’t fully grasp for most of that was I was using a fairly weak functional sense of phenomenal.

          The stronger philosophical sense of phenomenal, as something fundamental, non-relational, and inaccessible to science, does seem categorically different from access. It’s a conception that seems to beg the question on dualism, or other forms of non-physicalism. So any theory attempting to explain that version of it is going to have to go places mainstream science generally wouldn’t.


          1. I’m referring to the underlying mechanisms, not how it “seems” or “feels”.

I doubt evolution concocted a brain able to experience sensory input, for example, then decided to throw all of that away and invent something different when it wants to do higher-level functions. I’m thinking now that just like hearing seems different from seeing, we would expect decision making, verbal and mathematical reasoning, and even introspection also to feel different. In other words, access consciousness is just another form of phenomenal consciousness. Sure, there may be some specialized neurons, maybe some unique structures, in activities we associate with A-consciousness, but the mechanisms are likely more alike than different.

            I’m speculating we may be dealing with several hundred or more forms of consciousness in the human brain, and countless more if the entire potential range of organic and non-organic conscious capable entities were understood. But the underlying mechanisms of all forms are very much alike. Each form is starting from different input and generating conscious experience from it.

            Again more about this if I ever complete my Fragmented Consciousness Theory posting. 🙂


          2. Are you familiar with the work of Semir Zeki? You might be getting into an area similar to something he’s proposed.

Attempts to decode what has become known as the (singular) neural correlate of consciousness (NCC) suppose that consciousness is a single unified entity, a belief that finds expression in the term ‘unity of consciousness’. Here, I propose that the quest for the NCC will remain elusive until we acknowledge that consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space.

https://www.nmr.mgh.harvard.edu/mkozhevnlab/wp-content/uploads/pdfs/courses/literature/Multimedia%20learning,%20collaborative%20activities%20and%20team%20work%20involving%20visual-spatial%20imagery/zeki.pdf


          3. Not familiar but appears to be exactly what I am saying. Will check it out.

            I think it is almost the only logical thing that can be concluded. That is that consciousness is not a unity because there has been no identified location or process that can bring it all together. In part, I think it falls out from the small-world network architecture of the brain. Most connections are local and remote connections are few. There’s no bandwidth to transmit sufficient information all around the brain to unify it. We have local tightly integrated clusters – where consciousness manifests – with loose coupling between clusters.


          4. Great paper.

            “One conclusion from the clinical evidence is that a microconsciousness for colour or visual motion is generated through activity at a distinct processing site, and therefore that a processing site is also a perceptual site”.

            Very similar to what I am arguing.

            “I propose that there are multiple consciousnesses which constitute a hierarchy [1,2], with what Kant [3] called the ‘synthetic, transcendental’ unified consciousness (that of myself as the perceiving person) sitting at the apex”.

            Multiple consciousnesses, yes. No apex or hierarchy in my view. Decoupled clusters.

          5. I thought it might resonate with you. It’s been a while since I read it though, so my memory is a bit fuzzy.

            The question I always have with these kinds of ideas is, what makes the activity being discussed conscious?

            Once we start looking for it where it actually is, we might be able to find out. My thought is that consciousness involves writing information that governs how synapses fire. Learning and memory are involved. And it probably requires a sort of feedback process, with local neurons firing synchronously.

          7. Sounds like it comes down to changes in memory. Victor Lamme has a similar rationale for why he focuses on recurrent processing, that it enhances synaptic changes. The question then becomes, is the same true for a technological system? Does the synchronous firing need to take place with the same physics, or would an algorithmic version be sufficient? (Which is getting us close to the dispute between IIT and functionalism.)

          8. By the way, another paper by same:

            A massively asynchronous, parallel brain

            https://pubmed.ncbi.nlm.nih.gov/25823871/

            I’m surprised the idea hasn’t gotten more attention. It immediately resolves most of the NCC issues and presents fairly convincing evidence for consciousness at the processing site, which is the simplest explanation for where it is.

            From the article you linked:

            One conclusion from the clinical evidence is that a microconsciousness for colour or visual motion is generated through activity at a distinct processing site, and therefore that a processing site is also a perceptual site.

            From what I wrote before seeing the article:

            “The most parsimonious hypothesis for the location of consciousness is that it appears in the location where the critical processing occurs. This would mean that it is scattered throughout the brain, not at a single location, and that whatever integration of information occurring in the brain arises largely from unconscious processing.”

            Where I have a problem is with his hierarchies of consciousness and the “Kantness” of it.

            He writes:

            “Micro- and macro-consciousnesses, with their individual temporal hierarchies, lead to the final, unified consciousness, that of myself as the perceiving person. This and this alone qualifies as the unified consciousness, and this alone can be described in the singular.”

            My view is more radical still. I would argue that the so-called perceiving person is actually just another form of consciousness in another place – the frontal cortex. In other words, it is not greatly different from a splotch of color in visual cortex. It isn’t any more unified than any other instance of consciousness. It is just working on different inputs from what the visual cortex has.

          9. Right. My thought when reading this is similar to what we discussed the other day on your blog.

            It seems like the main difference between this and the Daniel Dennett / Susan Blackmore account is when the label “conscious” gets applied. Even in Stanislas Dehaene’s case, he makes clear that the processing in just about any location in the cortex can be conscious, if it wins the battle for relevance. You could remove that “if” and just say it was all conscious all the time, and the only difference is between attended vs. non-attended consciousness.

            Here’s a question. If we remove the word “conscious” from the overall description, is there any significant controversy on what happens? We know visual processing happens in certain locations, auditory in others. We know they have to have effects that reach the hippocampus to make it into episodic memory. We know those effects have to reach the language centers, and then motor ones, for someone to report on them.

            Maybe we’re just overcomplicating things with the c-word, at least at the level of neural processing.

          10. Well, Dennett doesn’t have a place for consciousness in his model. And I’m still unclear what a “draft” is, but it sounds like a lot more than a bit of microconsciousness. I’m not sure of Blackmore’s view.

            Yes, if we remove “conscious” from the description there isn’t any disagreement in most of neuroscience about processing occurring in different places in the brain. That is why it makes such perfect sense that consciousness would follow where the processing is happening. The key problem is the commonly held belief that consciousness is unified, so it must be something more than a lot of microexperiences. No one can believe that consciousness is:

            C = {ce1, ce2, ce3, ce4 …}

            Then there is the question of why consciousness at all, or what the ce symbol really means. Since learning has been associated with consciousness and learning at the neural level involves modifying connectivity, it would be logical that consciousness is doing something critical associated with wiring and adaptation at the neural level. And that additionally supports fragmented consciousness theory, because it is happening precisely at the point where the modifications to connectivity need to occur.

            Why it feels like something may drift into philosophy. But it might be that neurons must “feel” themselves as a physical entity in a feedback process as a reality check.

          11. Dennett doesn’t have one place for consciousness in his model. Remember, he rejects the idea that there is such a place. The drafts are parallel streams of processing that happen in the brain. They seem similar to me to the microconsciousnesses, except of course for the fact that Dennett doesn’t call them that. For him, what ultimately turns out to be conscious is the stream that ends up having the right causal effects in the system, which is usually heavily dependent on how the system is being probed.

            People often say that consciousness feels unified. I often wonder how we might expect it to feel if it isn’t unified. Most of us accept that a lot happens unconsciously. Maybe the right way to think about it is, a lot happens outside the stream the current thought is in.

          12. Yes and I agree there is no one place. That is my main point.

            “Draft” seems a lot like a narrative, not a microconsciousness. But I have no idea what a “probe” is or who or what is probing. And what would be “right causal effects”?

            “I often wonder how we might expect it to feel if it isn’t unified”

            I need to get my posting finished even if it is somewhat abbreviated. I have at least six good reasons consciousness might feel unified even if it isn’t. But it would feel exactly like it does feel – unified.

          13. Probing might include being asked to report if a particular stimulus is perceived. If never asked for it, the stimulus might only ever make its way along one of the many causal streams to a point and then fade away. On the right causal effects, Dennett later summed them up as “fame in the brain”, although he said it might be more accurate to call it “clout in the brain”.

            His overall model is basically the global workspace one, but without a definite event that brings something into consciousness. Something like the P300 wave is important for attention, and it’s a major way for something to make it into consciousness, but in his model it’s not the one and only way, which makes his model more consistent with some studies in recent years. In the end, whichever processing ends up having causal effects throughout the cortex, including affecting the episodic memory, language, and motor regions, however it happens, is one we’ll retroactively recognize as “conscious.”

            Now, if we just say it’s all “conscious” from the very beginning, then it seems like we have something like your and Zeki’s models. All of which ties back to that perceptual hierarchy I’ve discussed before.

          14. I’m not getting it’s all “conscious”. I think consciousness will correlate with specific behavior and physical activities of neurons, not everything the brain does. And I’m not seeing a perceptual hierarchy other than the one our self-important higher functions want to impose on the brain. Fundamentally it is all about how large a network and how it is wired together. Apart from that, the visual cortex works mostly the same as the frontal cortex as far as consciousness is concerned.

            I have published it.

            Fragmented Consciousness Theory

          15. I didn’t mean everything in the brain, just everything that has the potential to eventually make it to Dennett and Dehaene’s more demanding version of consciousness (what they would call pre-conscious), whether or not it ever does.

            I’ll check out your post!

    1. From the paper:

      “… (iii) masquerade as being already scientifically tested and established. In this sense, IIT, specifically the panpsychist version …”

      I have thought that panpsychism isn’t so much a key part of the scientific theory but is more of a philosophical leaning that one might adopt if you accept the theory. It seems there is potentially room in IIT to accept that consciousness, even though it is integrated information, is emergent and not an inherent part of matter. The claims about simple devices like thermostats having some level of consciousness are, of course, absurd, but it could be argued that consciousness really doesn’t come into existence until the level of integration reaches some critical state.

      1. There’s a lot of debate about IIT’s metaphysical assumptions or implications. It’s commonly described as panpsychist. And now idealist. But that probably depends on the specific variants of panpsychism or idealism under discussion.

        Christof Koch, in his 2019 book, seemed to see it as an alternative to panpsychism, which was a change from his earlier book where he saw it as an inherently panpsychist theory. And Birch describes some of the more recent discussions as idealist in nature. Giulio Tononi, the creator of IIT, actually seems more comfortable with these metaphysical aspects than Koch.

    1. Thanks. I only have limited access to the paper, but the abstract is interesting. If I’m understanding correctly, you varied the sensory information but in a way where the phenomenology remained the same? But phi varied with the sensory information, so it was a miss for the theory?

  10. Having already read the posts on The Brains Blog, I really enjoyed your overall summary. In my opinion, IIT is clearly pseudoscience. It took me a long time to summarise my understanding of the IIT approach in three paragraphs:

    “IIT starts from a core set of features that it takes to be self-evidently essential for consciousness (its “axioms”), and then seeks to infer what properties of a physical system such as the brain could potentially support, correspond to, or perhaps “explain” those phenomenological properties (its “postulates”). Physical systems are considered as elements in a state, such as neurons or logic gates that are either ON or OFF, that have cause-effect power, which is defined operationally by the extent to which the system’s past specifies the present state (cause power) and the extent to which the present specifies its future (effect power).
    IIT does not simply posit that integrated information is necessary or sufficient for consciousness, or that integrated information is a marker of consciousness. Instead, the fundamental identity of IIT states that the quality or content of consciousness is identical to the form of the conceptual structure specified by the “postulates” of the theory, and the quantity or level of consciousness corresponds to its irreducibility. Irreducibility is quantified by a specific measure of information, named integrated information Φ, that refers to the information that is generated by the whole system above and beyond the information generated by its parts. It is calculated by the extent to which the cause-effect structure of the system’s elements changes if the system is partitioned along its minimum partition (the one that makes the least difference).
    According to IIT, every content of an experience “here and now,” what it feels like, such as why time flows, space feels extended and colors have a particular appearance, corresponds to sub-structures in that cause-effect structure.”
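
    The minimum-partition idea in the summary above can be made concrete with a toy calculation. To be clear, this is not the Φ of current IIT, which is defined over cause-effect repertoires and is far more involved; it is a simplified, mutual-information-style proxy in the spirit of Tononi's earlier integration measures, and the names `toy_phi` and `xor_net` are hypothetical, for illustration only.

```python
from collections import Counter
from itertools import combinations, product
from math import log2

def entropy(counts, total):
    """Shannon entropy (bits) of a distribution given by counts."""
    return -sum((c / total) * log2(c / total) for c in counts.values())

def mutual_info(pairs):
    """Mutual information between past and present states,
    with the past distributed uniformly over the samples."""
    n = len(pairs)
    h_past = entropy(Counter(p for p, _ in pairs), n)
    h_present = entropy(Counter(q for _, q in pairs), n)
    h_joint = entropy(Counter(pairs), n)
    return h_past + h_present - h_joint

def toy_phi(update, n):
    """Integration of an n-node binary system under `update`:
    whole-system past/present MI minus the summed MI of the two
    halves, minimized over all bipartitions (the partition that
    makes the least difference, i.e. the 'minimum partition')."""
    states = list(product((0, 1), repeat=n))
    pairs = [(s, update(s)) for s in states]
    whole = mutual_info(pairs)
    best = None
    for k in range(1, n // 2 + 1):
        for part1 in combinations(range(n), k):
            part2 = tuple(i for i in range(n) if i not in part1)
            # MI computed separately within each half of the bipartition
            parts_mi = sum(
                mutual_info([
                    (tuple(s[i] for i in part), tuple(update(s)[i] for i in part))
                    for s in states
                ])
                for part in (part1, part2)
            )
            gap = whole - parts_mi
            if best is None or gap < best:
                best = gap
    return best

# A 3-node network where each node computes XOR of the other two:
# no bipartition captures the whole system's past/present dependence.
xor_net = lambda s: (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])
print(toy_phi(xor_net, 3))  # prints 1.0
```

    For independent subsystems the minimum-partition gap is zero; for the XOR network above it is positive, because the whole carries information no half carries alone. The real theory replaces mutual information with its own cause-effect measures, but the “partition that makes the least difference” logic is the same.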

    Tim Bayne has already accurately pointed out that the “axioms” of IIT do not live up to that label, and concludes:

    “Some theses that are advanced as axioms arguably qualify as self-evident truths about the essential features of consciousness but they fail to provide substantive constraints on a theory of consciousness, whereas other theses might provide substantive constraints on a theory of consciousness but are not plausibly regarded as self-evident truths about the essential features of consciousness. In short, the axiomatic foundations of IIT are shaky.”

    However, I think that the so-called “postulates” of the theory, which claim to capture the essential phenomenal properties (as established in the axioms) in terms of corresponding physical properties, are even more far-fetched than the axioms. I am on board with Scott Aaronson when he argues why he does not accept the obviousness of the postulates and why he cannot see how the postulates lead to Φ. To me, the mathematical framework of the theory seems like a self-referential, sophisticated glass bead game.

    In the original, the central tenet of IIT reads as follows:

    “According to integrated information theory (IIT), a particular experience is identical to a conceptual structure specified by a physical substrate. The fundamental identity postulated by IIT claims that the set of concepts and their relations that compose the conceptual structure are identical to the quality of the experience. This is how the experience feels — what it is like to be the complex ABC in its current state 111. The intrinsic irreducibility of the entire conceptual structure (Φmax, a non-negative number) reflects how much consciousness there is (the quantity of the experience). The irreducibility of each concept (φmax) reflects how much each phenomenal distinction exists within the experience.”

    Can you grasp that? I don’t think so. How can a physical structure be identical to an experience?

    1. Excellent points. I had forgotten about Aaronson’s remarks that he couldn’t find phi entailed by the postulates. As I mentioned in the post, I find both the axioms and postulates vague, vague enough to drive a fire truck through each of the logical steps. It seems like they could be used to justify many of the other theories out there. The phi mathematics just seem underdetermined by them.

      On grasping, aside from the vagueness, I find much of the language around IIT incomprehensible. The words “intrinsic” and “causal” get thrown around a lot, with a vague hint that their meanings aren’t actually the standard ones. I do see the word “intrinsic” defined occasionally, but it’s so vague and abstract that it provides no real grip for making sense of the language.

      Much of it has long seemed like obfuscation to me, but I’ve been reluctant to say that because serious people seemed to be onboard. It looks like I wasn’t the only one.

  11. Regarding what you said about “media misrepresentations,” if that’s a valid reason to label something as a pseudoscience, there are a great many other theories that would be in danger of being labeled as pseudoscience. Medicine, psychology, economics, sociology… the popular press misrepresents those things all the time, relentlessly and remorselessly. Whatever other problems I.I.T. may or may not have, I would not judge it based on what the popular press says about it.

    1. I don’t know if it’s the media coverage in and of itself, so much as what many of the advocates of the theory are doing to drive it, efforts that seem disconnected from where the science actually is. That said, the advocates aren’t a monolithic group. Some seem reasonably careful in what they’re saying.

      As I noted in the post, I’m not sure about the “pseudoscience” label, but there’s something not right, both about the strong version of the theory itself (which is the version under dispute) and the way it’s marketed, both within the cognitive community and to the general public.

  12. Ah. Good. The witch is dead. 

    Just kidding. This will likely just give it short term prominence. Maybe this will be a kick to the consciousness and philosophy of mind community. So much of what has been written has been poorly done. There has been so much written about minute details and arguments that we should have shrugged at. There has been too much latent dualism and incoherent theories and ideas. I am not saying I have solved colorblind Mary (etc.), but I have read so many arguments over such stuff that I pretty much just skip it these days. I move on to other things.

    No, I never understood IIT. Even when I was really reading on consciousness, I would come across IIT, and never be able to make heads or tails of it. I tried reading Tononi’s book (I believe) and gave up. I have probably put it pretty close to bad science and philosophy in my head. That is, I believe that there is something significantly wrong with it, enough that people are wasting their time on it. But that is true for a lot of academic work, especially in philosophy and the social sciences.

    1. Yeah, “dead” is probably too optimistic. But hopefully people will understand now that it’s at least controversial in the scientific community. I’m not sure that was getting across to the media. A lot of people are outraged, particularly those who have sunk a lot of time into it. And “pseudoscience” may have been a bit strong. But in many ways IIT benefitted from people being polite about it for years.

      There are other theories that benefit from that polite tolerance (which I won’t name because I don’t feel like getting into a fight with fans). Most of them get public attention periodically when an article or two gets published, but gain little traction among actual scientific researchers. IIT stood out in that regard. That’s unlikely to change overnight.

      And who knows, the proponents might eventually reformulate the theory into something more coherent and grounded. It’s not like integration and information aren’t part of the picture.

  13. Given that one camp happens to be attacking another camp, what’s my assessment of the situation here? Observe that countless scenarios like this one are constantly played out all over the world for the institution of social policy. It seems to me that academia functions no less politically than democratic governments do. When academic and political leaders see a reasonable way of hurting a given opponent, it’s their job to do exactly that (though of course with appropriate political care). As an observer who considers each of these two combatants wrong, I’ll handicap the situation here as I see it.

    As computers were being developed during the 20th century, there was general hope that we had finally created the sort of machine that emulates an organism’s brain. And clearly brains do function as computers. This is to say that they accept input information and process it to then institute output operations. Heart muscles display such computational instruction for example. Regarding the medium through which we perceive our existence and act however (commonly known as “consciousness”), many didn’t follow this model to its end. Instead of presuming that consciousness functions in the way that our computers function (which is to say that processed brain information animates some sort of consciousness mechanism not unlike muscles), it was presumed that the right processed brain information alone would exist as consciousness. Computing pioneer Alan Turing, for example, helped promote this perspective. Unfortunately few were able to grasp that a shortcut had been taken however, and ironically this group was then able to claim the title of “computationalism”, as well as brand their model as the only legitimate naturalistic consciousness proposal on the market.

    In 2004 Giulio Tononi leveraged his prominence in neuroscience to enter the market as well. But rather than propose a mechanism that processed brain information might animate to exist as consciousness (I presume since he didn’t realize that the mainstream had taken such a shortcut, let alone grasp what such a mechanism might be), he proposed something just as unfalsifiable. His proposal included a theoretical equation from which to quantify the “integrated information” that he proposed to exist as consciousness itself. Though popular for conversely providing something vaguely tangible, this also left his position open to all sorts of unwanted scrutiny. In the early days it was thus observed that he must be a panpsychist. For a while he agreed, but then fabricated a vague exception that few beyond himself seem to grasp. Furthermore, in order to directly eliminate the prospect that various consciousnesses might exist under a single system, he invented a clause where only the highest Phi would be the conscious part. This prompted Eric Schwitzgebel to observe that theoretically a US election could integrate enough information for it to become conscious as a whole, while also rendering Americans in general non-conscious while they continue functioning as if they were conscious.

    In any case, given inconclusive results for a recent test to suggest that one of these unfalsifiable proposals might vaguely be considered superior, “computationalists” in general must feel that the time is right for IIT to directly be attacked as “pseudoscience”. Furthermore, I’d say that the series of blog posts by Jonathan Birch accusing IIT of idealism was a very astute political hatchet job. Observe that for objective credence he was even able to dupe an IIT supporter, Hedda Hassel Mørch, into writing a full post for his series.

    Regardless, for me observing such political maneuvering is great fun! Meanwhile I’ll continue searching for a way to help others grasp that “computationalism” models consciousness upon an abbreviated conception of how computers work. I’d like people to understand that in order for the brain to computationally create consciousness, it should need to animate the right sort of mechanism. This might be electromagnetic fields associated with neuron function, or it might be something else. Observe however that all such proposals will be falsifiable on the basis of assessing that mechanism against practical observations. Conversely, all mechanism-less proposals should thus remain impossible to disprove.

    1. Your opening reminded me of an episode from the old Consciousness Live podcast, where Richard Brown had Philip Goff and Bernardo Kastrup on to debate panpsychism vs idealism. Since I had no skin in the game, it was basically just entertainment to watch them go after each other. I imagine watching this debate is like that for you.

      I think IIT has been regarded with suspicion by the scientific establishment for many years. But because it had serious people behind it, there was a lot of politeness. It’s not the only theory that gets that politeness. But it was the only one really attracting research effort. I mentioned the other day the issues with some fields, like psychology and economics, tolerating kooks. At some point the field needs a way to get those people out. I think that’s the motivation of the letter signers, regardless of whether they’re actually doing that, or being closed minded as many IITers claim.

      We’ve discussed this before, but if you really want to convince computationalists that they’re taking a short cut and omitting something important, you need to find a way to articulate what it is that’s being omitted. “Animate the right sort of mechanism” is pretty vague. You need to get much more specific. You call it “mechanismless”. Where is the gap in mechanism? And why do you think EM fields plug it?

      Interestingly enough, IIT itself rejects computationalism. Under the view, it’s not enough for information to be integrated, it must be integrated in a certain structural way. Even if a system successfully integrates it in the same functional manner, but with a different physical structure, IIT doesn’t see that system as conscious. I’ve never seen a good explanation on why.

      And to me that’s always the question. For whatever model anyone is proposing, why is the thing presented conscious? And I’m only asking from an access conscious perspective, which makes the question answerable. (Asking from a phenomenal consciousness perspective is, to me, just making noises to no purpose.)

      1. I suppose that I do consider this match between IIT and computationalism (here in the form of the “global” variant) about as silly as each of us consider such debate between Goff and Kastrup. It’s understandable that your people would rather not stand shoulder to shoulder with IIT, and so there is impetus to formally label it as “pseudoscience”. Note that while your side has been building its platform and support since around the days of Alan Turing, IIT merely came on the scene in 2004. But how can your position be an example of reasonable science, and yet still remain both unfalsifiable and sitting at a table populated by various other unfalsifiable positions that each of us consider silly? Might systemic failure be responsible even given the prominence of your people? If so then perhaps you could describe what you consider that systemic failure to be? Or as I suspect, might your position have some problems? You’ve agreed with me before that you’d consider any medical conference that gives voice to homeopathy to lack credibility on that basis alone. Thus it does seem precarious to share a stage with so many ideas that each of us consider ridiculous.

        The ultimate issue I think is that your side is proposing consciousness by means of a two step process, though all working computational processes seem to require a third as well. As I see it computers work like this: Firstly, they accept input information. Secondly, they process that information by means of algorithms. Thirdly, the processed information goes on to inform one or more mechanisms associated with what that computer ends up doing in that regard. Can you think of any situation in which processed computer information can exist as a final computational product in itself that needn’t also inform an associated mechanism to exist as such?

        I’d say that your position could technically be in the clear if it were to address the first two steps and then merely posit that it was still working out an associated mechanism that resulting processed brain information informs to exist as consciousness itself. No shame in getting only that far. Instead however it posits that consciousness can only reside as processed information in itself and so is hostile towards all proposals by which processed brain information is proposed to then inform some sort of physics based dynamic that thus exists as something conscious. Observe that without the third step all such proposals will inherently be unfalsifiable, though otherwise not so.

        Is it productive to say that computer information can exist in itself, or rather only in respect to what that information informs? For example let’s say that we create a Bluetooth signal that doesn’t inform any of our machines. Given that none of our machines thus become informed I wouldn’t call this “informational”, but rather just “stuff”. Then let’s say that there are two robots that this signal affects and thus one moves left while the other moves right based upon their separate programming. Clearly here the signal should be considered informational, though obviously not as a left or right signal inherently. Instead it should be informational in respect to what it informs — a left signal for one of them and a right signal for the other. Thus it seems to me that “processed information” should not be considered informational in itself, but rather just in respect to a third step regarding what (if anything) happens to be informed.

        Let’s say that you do grant me the possibility that I’m right here, and even for the sake of discussion alone. Thus you’d earnestly play your own devil’s advocate to speculate that consciousness might only be able to exist in the form of something that processed brain information informs. In that case you’d have something to ponder. We each agree that the brain accepts input information neurally and then processes it into new information. But given your understanding of the function of synapses, neurons, glial cells, blood flow, or anything else brain related, I wonder if you could come up with even one reasonable second alternative to McFadden’s proposal? If it’s true that processed brain information informs something that exists as consciousness itself, and also that this is not in the form of the neuron produced EM fields associated with neuron firing as McFadden posits (and even has reasonable evidence for), then I’d love for you to provide an alternative. (Here I hope you don’t say that you aren’t sure what I mean by “consciousness”, since in all of our discussions over the years I’ve always meant the exact same innocent/wonderful definition.) I can’t think of a sensible answer beyond McFadden’s. But perhaps I’ve developed some tunnel vision on the matter? It would be a great service to me if either you or anyone else here could present a second naturalistic plausibility that seems even remotely reasonable, that is if naturalism mandates the mentioned third step as I suspect of all computational function.

        1. I think focusing on the various camps is a distraction. I’m more interested in the reasons someone falls into a particular camp.

          I noted in the post that every theory of consciousness is both a philosophy and an empirical model. As far as I can tell, that’s true of every theory. The philosophy is often unfalsifiable, since it’s typically a definition of what we’re interested in.

          The question is, relative to whichever conception of consciousness a theory is targeted at, is it, or at least some portion of it, testable? I think functional cognitive theories, which usually focus on access dynamics, are definitely testable. Many argue that they’re explaining the wrong thing, but not that they aren’t scientific. IIT’s target is phenomenality, making it more difficult to assess.

          So if I’m understanding your three steps correctly, they’re 1) input, 2) processing, and 3) output. I don’t know of any computationalists who would reject that, although most would likely see it as simplistic. But your real issue isn’t output. Every computationalist sees the system outputting signals that drive behavior, hormonal regulation, heart rate, and other processes. What you seem to see missing is output to this other entity you posit.

          My reaction is I see no need for that other entity as part of the explanation. It seems like an addition to meet our innate intuition of dualism. So my question is, why specifically do you think it is needed? What explanation does it provide that we don’t get with neural processing? And how specifically do EM fields satisfy it?

          We’ve gone round and round about information. If it makes you feel better, everywhere you see “information” in computational accounts, think “causal propensity”, which I see no barriers to existing independent of how a conscious system might use it.

          Saying McFadden has evidence for EM fields being involved in consciousness strikes me as being like saying unknown lights in the sky are evidence for alien spaceships. In both cases, there are far more grounded explanations for the data, which should be ruled out before entertaining exotic options. (McFadden himself seems much more careful with his claims. It makes his theory merely a long shot instead of the pseudoscientific claptrap of most EM theories.)

          Liked by 1 person

          1. It sounds to me, Mike, like you’re now advocating a form of “illusionism” that goes well beyond the version that Keith Frankish champions. Observe that he does admit the existence of his own consciousness when sufficiently cornered, and only when he considers there to be no spooky metaphysical presumptions implied. This is essentially his own certainty that he sometimes does experience his existence. If you don’t believe that you personally ever have a consciousness in that capacity, then so be it. Still, it does seem to me that you’ve talked about being in such a state here often enough. So perhaps I should ask directly? Without invoking any spooky metaphysical presumptions whatsoever, do you ever experience existing? Or if that question seems too unclear, shall I try to clarify by going through some of Eric Schwitzgebel’s stock examples of “experiencing existence”?

            If we can confirm that you’ve never experienced your own existence, then problem solved. In that case you simply should not be able to understand what I’m talking about. But if we can confirm that you do commonly experience your own existence, then hope remains. In that case we might now productively discuss how an experiencer of existence (whether you or not) might causally emerge by means of the computational function of a brain?

            Liked by 1 person

          2. Eric, it seems like you’re conflating two things: 1) conscious experience and 2) your preferred theory about where that experience comes from. I don’t deny 1), but I do deny 2).

            I don’t see anything in conscious experience, the experience of existence, or whichever variation you want to employ, that can’t be the brain’s operations, at least not without adding in the usual bad philosophical leaps of logic that lead people to think there’s a hard problem.

            We don’t have a full accounting of those operations yet, but I see nothing in the gaps that requires us to add in an additional system that it “generates.” Of course, that can always change with new evidence, but I need that evidence, or at least compelling logic, before I’ll go there. That’s my chief blocker for accepting your idea.

            Which brings us back to my question to you. What are your reasons for insisting that we do have to have that extra thing to explain conscious experience? Using your language, why can’t the brain be that experiencer?

            Liked by 1 person

          3. The brain itself as an experiencer of existence? I don’t recall you ever suggesting that before, Mike. In what manner might the brain exist as an experiencer? It’s an idea to now carefully consider.

            Given that we’re speaking of computer function, let’s first ground the possibility in terms of our agreement just above. This is to say, we agree that computers accept input information, process it algorithmically into new information, and then the processed information goes on to inform something that the computer is effectively responsible for doing. For example a computer might accept input information and process it algorithmically to thus inform its own memory. A favorite example of mine would be how a computer may thus inform a computer screen.
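            The three-step pattern being agreed on in this paragraph — input, algorithmic processing, and processed information then informing some further mechanism such as a memory bank or screen — can be pictured in a few lines of code. This is purely an illustrative toy, not anyone’s actual proposal; every name in it is invented for the example:

```python
# Toy sketch of the three-step pattern discussed above (all names
# invented for illustration): a computer accepts input information,
# processes it algorithmically into new information, and that
# processed information then informs some further mechanism.

def accept_input(raw):
    """Step 1: accept input information (e.g. from a sensor or scanner)."""
    return [ord(ch) for ch in raw]

def process(codes):
    """Step 2: algorithmically transform the input into new information."""
    return [c * 2 for c in codes]

def inform(processed, screen):
    """Step 3: the processed information informs a mechanism (here a
    plain list standing in for a screen or memory bank)."""
    screen.extend(processed)
    return screen

screen = []
inform(process(accept_input("hi")), screen)
print(screen)  # [208, 210]
```

The point of contention in the thread is whether step 3 is optional: in this sketch the processed values only count as “informing” anything once something (the stand-in screen) actually receives them.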

            For the consideration at hand I perceive you to be positing that the brain accepts input information neurally, it then processes it into new neural information, and that processed information effectively informs the entire brain itself such that it all becomes an experiencer of existence. Let me know if that sounds essentially correct or you’d like to make some minor to major clarifications.

            When the brain as a whole exists as an experiencer of existence, and so presumably the blood inside it as well, wouldn’t blood that enters and exits the brain need to undergo some sort of metamorphic alteration to the tune of either being a part of experienced existence or not? Why would minor locational differences provide such a significant causal alteration to the existence of blood itself?

            Then regarding my thumb pain thought experiment, I’ve always said that I have no idea what specifically might be thought of as an experiencer here. Now however I suspect that the experiencer might be proposed as the whole thing. This is to say the properly marked input paper, the computer that accepts and processes it, the thusly informed printer, and also the properly marked paper that results. Does that sound right? Would the whole thing together experience what you do when your thumb gets whacked?

            To me McFadden’s proposal seems more reasonable (pending any of your clarifications, of course). Here neurons inform the brain of input information (step 1), and this incites neural processing function (step 2) that creates a dynamic EM field that itself exists as the experiencer of existence (step 3). But if consciousness may be said to also exist as a computer, then we’re not done yet. Let’s say that an experiencer of thumb pain does result in such a field as an experiential input (step 1). Theoretically another element of the neuron-produced EM field would be a thinker that might evaluate that pain in order to assess what might be done to feel better (step 2). If that thinker decides to thus do something (step 3), then theoretically the EM field decision will feed back to motor neurons that animate muscles in associated ways (which would itself be another 1, 2, 3 step process for the algorithmic brain to go through).

            Theoretically, while the brain is incited to crunch algorithms by means of complex neuro-electric dynamics, and our computers are incited by means of electricity, the experiencer of existence is incited to function by means of a value dynamic by which goodness is constituted by feeling good and badness by feeling bad.

            Liked by 2 people

          4. I should clarify that I meant the brain and its operations are the experiencer. But much of the rest of your description, including what it means for your thought experiment, sounds about right. Although I’d put more emphasis on the relations and processing involved rather than just the components.

            However, your question about the blood entering the brain makes me wonder if you really get the idea. To answer that question, consider a game of American football. There are certain things usually held to be necessary for such a game, a ball of a particular oval shape, players, a field of some kind ideally with gridiron markings, and the overall activity of trying to score. And protective gear and officials are typically necessary if the game is serious. And there’s often a crowd watching.

            Although we can dispense with a lot of that and still have what most of us would consider a football game. Maybe we’re playing in our own yard, and don’t worry about protective gear and officials, or many of the standard rules.

            Now, consider a player who is only in for part of the game. Or air brought in from the wind blowing. Does the player or the air molecules become the football game when they become part of the process? Hopefully we can agree that’s a confused question. It fundamentally misunderstands what we’re talking about when we say “football game”.

            I think the mind and consciousness are analogous to a football game. They’re a process. Blood flowing in can become part of that process, while it’s involved, but to speak of it being that process seems like a category error.

            I understand that McFadden’s proposal seems more reasonable to you. And I’m familiar enough with the details of the idea. The question I’ll ask one last time in this thread is, what are the specific reasons it seems more reasonable to you? What does the EM field provide that the process and components don’t?

            Liked by 1 person

          5. “What does the EM field provide that the process and components don’t?”

            A possible explanation is that the EM field generated by synchronous firings of neurons is the brain feeding its own information back to itself. That’s what makes it different. It isn’t real until you feel it real.

            Like

          6. But the brain can and does feed back its own information with recurrent processing, both locally and globally. What does the EM field add? What about experience would be missing or different if it isn’t a factor?
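            For readers unfamiliar with the term, recurrent processing here just means a stage’s output being fed back in as part of its next input, so that the current state depends on history. A purely schematic sketch, with no claim about real neural dynamics:

```python
# Schematic illustration of recurrent processing: the output of a
# processing step is fed back into the next step, so the running state
# reflects earlier inputs and not just the present one.
# Purely illustrative; nothing here models actual neurons.

def recurrent_step(x, state, feedback=0.5):
    """Combine fresh input x with the fed-back previous state."""
    return x + feedback * state

state = 0.0
for x in [1.0, 1.0, 1.0]:  # a constant input stream
    state = recurrent_step(x, state)

print(state)  # 1.75: history-dependent, unlike a pure feedforward pass
```

A feedforward pass over the same inputs would produce the same value at every step; the feedback term is what makes the system’s state carry information about its own past.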

            Liked by 1 person

          7. What is it about recurrent processing that creates experience?

            I’m not doubting that recurrent processing occurs, but there is nothing in it by itself that would generate consciousness unless you have some deeper explanation. It is just a reversal of neural pathways, and I doubt that’s correct by itself as an explanation of consciousness. The recurrent processing you are referring to is simply various neural clusters communicating asynchronously.

            Liked by 1 person

          8. For further clarification, recurrent processing is simply the asynchronous communication between clusters. Consciousness in my FCT is feedback happening at the cluster level, where consciousness is occurring. What becomes conscious, which clusters are activated, can be caused by recurrent processing, but recurrent processing by itself is not producing the consciousness.

            Liked by 1 person

          9. I only present EM fields as a “possible explanation”.

            At any rate, FCT does have specific places (a lot of them) where consciousness occurs, which is more than can be said for almost all the other theories. They map almost directly onto the areas usually colored red in fMRI scans, exactly where they might be expected to turn up.

            To quote Zeki:

            One conclusion from the clinical evidence is that a microconsciousness for colour or visual motion is generated through activity at a distinct processing site, and therefore that a processing site is also a perceptual site.

            I take that view one step further and argue that even the high-level functions are occurring at the sites where their processing takes place. In a way, all conscious mental activity is like perception. Even the “experiencer” is another perception, but it is based on different primary input than the visual centers.

            An EM field explanation becomes more plausible in FCT because we do not need an EM field generated by synchronous firings spanning the brain. Since consciousness is local to the processing site, the strength and propagation of the EM field become less of an issue. What’s more, the synchronous firings that would create the EM field would tend to create the higher resource usage associated with the red areas in the scans.

            But certainly there are other possibilities. If consciousness is extremely local, it would seem that even some kind of quantum computing could be taking place. We might only need a few ions entangling for a few nanoseconds to perform some sort of calculation or pattern matching with previous inputs to the cluster. Such activity might be very difficult to detect since it could occur so quickly and on such a small scale, but it would leave the question of how perceptions could persist on longer timescales.

            On the other hand, the vortexes of firings that have been observed can persist for longer timescales. They form, dissolve, and reform. And a vortex would have a toroidal shape that might serve to concentrate the EM field onto a small number of neurons or even a single one.

            That is why I see the EM field as more plausible even though the evidence is circumstantial at the moment.

            Liked by 1 person

          10. I’ll begin with the reasons that I consider McFadden’s consciousness proposal so plausible and then go on to the unfortunate situation we seem to have in academia today.

            It’s quite clear that brains exist as central organism information processors, or computers that can help organism function. Certain organisms such as plants function well enough without central organism processors, though for others this feature seems quite mandated. Here they’re analogous to our robots.

            Apparently, however, non-conscious algorithmic instruction was not an optimal way to instruct certain more advanced ways of functioning. Observe that nothing is ever “understood” here, but rather just more and more layers of algorithms are used to help an organism proliferate in a given manner. Furthermore observe that for consciousness to exist at all, there must be a causal type of physics by which existence can feel good to bad. I suspect this is practically found in certain parameters of EM field that neurons would sometimes inadvertently create through the right combinations of neuron firing. Thus the emergence of consciousness itself, though here in a non-functional capacity. But given the constraints of function by means of algorithms alone, evolution must have serendipitously cultivated this experiencing entity so that the experiencer would make choices for organism function from time to time, and in the end this mode of operation did well enough to evolve even to a human level of functional consciousness. (Or it could be that another causal element of brain function exists as consciousness, though beyond neuron-produced EM fields I can’t think of anything else that seems appropriate. I’d love some other options.)

            This would be a story that makes sense to me, since here brain-based computers accept input information, process it into new information, and that information goes on to inform an electromagnetic field that itself resides as consciousness. Instead, however, a shortcut seems to have been presumed, even in the early days of computing. Here it was decided that consciousness probably resides by means of the proper information processing alone, which is to say without that information informing anything. Thus I consider the position magical. Causality mandates that information can only exist as such in reference to what it informs.

            Mike, you seem to have accepted my observation that information can only exist as such to the extent that it informs something. From here you propose that in order to create an experiencer of existence (or the above mentioned causal physics by which consciousness exists), processed brain information informs… the brain. From my perspective however, this seems like a convenient answer which effectively reduces back to the former non-answer. Compare it with the way that processed computer information goes on to inform a computer screen, or processed brain information goes on to inform heart muscle function. These are specific instruments which computers/brains instruct. The proposal that processed brain information informs an EM field that itself resides as consciousness is similarly specific. But processed brain information creating consciousness by informing the brain? And here with “brain” defined as loosely as what passes for “football”?

            I realize that this logic will not persuade you, though I do think that you’re doing something here which is potentially quite important. Only through the exploration of what specific ideas mean might we distinguish good ideas from bad ideas. In the end it may be that the only way to help consciousness science advance in this regard will be through the specific sort of testing that I propose. This is to say, to induce EM fields that reside around the parameters of typical synchronous neuron firing, right inside someone’s head. If all sorts of appropriate testing does not result in the subject reporting unexpected distortions or changes to their consciousness, then McFadden’s theory must be wrong. But this would at least be an example of effective science from which people could potentially develop other causal proposals that eventually decide the matter. And conversely, if my proposed testing were to alter the subject’s consciousness for oral report, and this were confirmed to the point of becoming uncontroversial, we should then witness one of the greatest paradigm shifts that science will ever know. In order for science to straighten out well enough to even attempt such testing, however, we’ll need more people to contemplate the implications of what they believe nearly as diligently as you and I do.

            Liked by 1 person

          11. Well, I appreciate the effort Eric, but you still didn’t answer the question. I said above it was the last time I would ask, so I won’t do it again. I will however highlight the sentences I think are the core issue:

            “Furthermore observe that for consciousness to exist at all, there must be a causal type of physics by which existence can feel good to bad. I suspect this is practically found in certain parameters of EM field that neurons would sometimes inadvertently create through the right combinations of neuron firing.”

            Until this assertion is made much more specific and justified, it seems to me like just speculation based on remnant Cartesian intuitions. That’s my blocker.

            “Mike, you seem to have accepted my observation that information can only exist as such to the extent that it informs something.”

            No, sorry. Not sure what I might have said to give you that idea. I know it’s a favorite talking point of anti-computationalists, but like most of the arguments from that group, I find it little more than definitional jiu jitsu, a pointless word game. It’s why I suggested just thinking of causal dispositions everywhere you see “information” from a computationalist, because in the end, that’s what we’re talking about: 100% causal processes.

            As you noted, we’re not going to convince each other here. All we can do is share our reasons, and what blocks us from accepting the other’s propositions.

            Liked by 1 person

          12. Of course it’s my perception that I answered your question pretty well: essentially that McFadden presents a full three step computational account of how the brain might create consciousness (input, processing, informed output mechanism that exists as consciousness itself), while computationalism skips the crucial third step to simply leave things with “consciousness by means of processed information alone”. And I see that you’re now denying that you ever agreed with me that information (in the computational sense as detailed above) can only exist to the extent that it informs something. It was here that I interpreted your agreement: https://selfawarepatterns.com/2023/09/17/integrated-information-theory-as-pseudoscience/#comment-172872 Specifically, it was when you implied that no computationalist would challenge the notion that computers work by accepting input information and algorithmically processing it into new information that informs one or more mechanisms associated with what that computer ends up doing. So I was a bit surprised that you agreed, since it seems to render computationalism one step short.

            At the time you may have been under the impression that your position actually does have a third step, based on the proposal that consciousness exists by means of processed brain information that informs the brain itself. Or in terms of my thumb pain thought experiment, apparently the final sheet of properly marked paper would exist as information in the sense that it would inform the components of the system that created it (not that such a final stipulation was ever presented previously in my thought experiment). But perhaps you’re now back to a more traditional two step computationalist position where the processed stuff needn’t inform anything to exist as “information”? And is it still your position that the entire contraption would experience what you do when your thumb gets whacked? Or otherwise what would do this experiencing?

            On the assertion that you characterized as my own remnant Cartesian intuitions, what I meant there is mostly true by definition for any true naturalist. Above we agreed upon the existence of consciousness. This is something that I personally consider defined by a value dynamic, which is to say the goodness to badness of existing. If that’s your problem with the statement, however, then this element may be omitted without consequence. Regardless, since I perceive true naturalists to believe that all of reality functions by means of perfectly determined physics-based dynamics, the existence of consciousness would mandate from this perspective that it can only exist by means of some variety of physics-based dynamics. Do you disagree with any of that? (And I suppose you could claim that the physics of consciousness lies in the conversion of the right information into the right other information in itself, even though I argue that this notion violates the premise of causality.)

            Liked by 1 person

          13. I don’t see the logical connection between what I agreed with and what you’re inferring from it. But it’s probably related to the different definitions of “information” we’re using. I’ve tried to remind you of mine a couple of times in this thread. Outside of idealism, causality doesn’t require a mind.

            We agree it’s physics. But for me it’s also functionality, which can be implemented in different physical ways. (Even within evolution, we often see the same problems solved in different ways.)

            On your thought experiment, again, it’s all the components and the processes in which they interact. You keep omitting that last point, but it’s crucial.

            Liked by 1 person

          14. Well, if you’re going to assess my argument regarding why causality mandates that computers exclusively function by means of a three step process (the third being that processed information can only exist as such by means of something that it informs), while injecting a definition of your own for “information” rather than using the one associated with my argument, then you won’t be able to assess any strengths or weaknesses that it might have.

            The thing about putting functionality right next to physics rather than under physics, is that it could be an unfortunate way of accidentally injecting some magic into a given proposed system. Of course that’s what I believe happened with Alan Turing and what came to be known as his test.

            On my thought experiment, I don’t mean to omit the processes by which the components interact. They’re presumed, even though for brevity I haven’t stated all of them each time. But then I guess I could state the full thing briefly, and perhaps should do so in case anyone ever reads this who hasn’t heard it.

            When your thumb gets whacked we presume that associated information gets neurally sent to your brain to potentially do all sorts of things, with one of them generally being to create an experiencer of thumb pain. But is this a two step process, where the thumb pain exists as the processed information alone, or rather a three step process where processed information exists as such by informing the right sort of physics that itself exists as the experiencer?

            One implication for advocates of the two step process is that if there were somehow paper with markings on it that were highly correlated with the information that a whacked thumb sends a brain, and it were scanned into a computer that algorithmically manipulates this information to print out paper with markings that highly correlate with the brain’s associated processed information, then something here ought to experience what you do when your thumb gets whacked. I consider this position to be magical however because the physics of computing mandates that information can only exist as such to the extent that it informs something. So I believe that a third step would be required where the resulting marked paper would need to be scanned into another computer that informs something with the right sort of causal physics to exist as an experiencer of thumb pain. Furthermore given my understanding of brain anatomy, I suspect that this physics resides in certain parameters of electromagnetic field.

            Have I now addressed the components as well as all of the processes in which they interact, Mike?

            Liked by 1 person

          15. Eric,
            I think I’ve mentioned this before, but the core issue here is I’m looking for a theory of consciousness, one that explains what it is. For me, I don’t consider us done until we’ve reduced the mind and consciousness to components and processes that are not themselves mental or conscious. In other words, I want a reduction of consciousness into its causal mechanisms.

            Your talk of the brain generating an experiencer doesn’t do that. It’s really more a statement on where you think consciousness is and what substrate it might exist in. I find these statements unsatisfying and unmotivated by scientific data. To me, even if EM fields turn out to be relevant, their inclusion by itself won’t explain anything. (McFadden, to his credit, understands this, noting that his theory would require supplementation from something like global workspace theory.)

            Your whole point about that third step seems centered around an experiencer whose workings you don’t seem interested in piercing. The point of my questions above was to try to direct your attention in that direction. Only then, I think, could you start to see the holes in your approach. But I can’t make you look there, only point out that it’s where the issue is.

            Liked by 1 person

          16. I do recall you mentioning a desire for a final answer, Mike. Furthermore, I suppose that one of the things that endears Daniel Dennett to you is that he once wryly said something like, “If you have a chart that contains a box labeled ‘consciousness’, then you’re not done yet”. And as it happens I do have a chart regarding brain function which contains a box with that label. Still I’ve never presumed that this would be some sort of “final answer”. Newton’s conception of gravity obviously wasn’t a final answer, though it was certainly a very important answer that led to more. If McFadden were empirically validated, it seems to me that he’d join the company of humanity’s greatest scientists. I don’t think you’ve let yourself truly consider how transformative such validation would be.

            Your stated desire for a final solution opens up an epistemological question. Is it effective to mandate that one path must be more valid than another on the basis of a perception of a final solution? When phrased this way however I don’t think I need to even ask. Since reality doesn’t give a crap about what anyone would like to be true, we shouldn’t presume the superiority of one kind of answer over another on the basis of what we think might provide more of a complete solution. Evidence based answers should be all that matters.

            If you’ll not entertain my argument that the two step process of computationalism will require a third step to become natural, perhaps existing evidence regarding consciousness would help? You may be aware that the only reasonable neural correlate for consciousness found so far is that it seems to be quite associated with neuron firing synchrony. This might be because EM fields need to reach the right energy levels for consciousness to thus be realized, and synchronous firing is how this is effectively achieved.

            You’ve implied that McFadden and I aren’t on the same page regarding his theory, though I think we essentially are. For example he recently wrote a paper comparing his theory against GWT, GNWT, and IIT. To me this doesn’t suggest that he’s looking to improve his theory by inviting any of them to sit at his table. He presents them more like they’ve simply guessed correctly regarding certain ideas. For example the theorized conscious EM field may be considered “integrated information” in the sense that it’s a unified structure composed of many elements of neural information. Tononi’s Phi metric itself is something that he suggests was simply fabricated. It’s the same for the GWT notion of winning a competition to “go global”. His theory conversely posits different dynamics around the brain that contribute to a consciousness field in appropriate ways, such as the inclusion of a given element of vision for example. https://www.preprints.org/manuscript/202308.0731/v1

            Actually, there is one thing about McFadden that does bug me, however. Why hasn’t he commented on my proposal to test his theory? I propose we somehow induce an EM field in someone’s head that is around the parameters of an endogenous EM field, and then see if the subject reports correlated consciousness alteration. Even if he grasps how difficult it might be to effectively do such testing, while I remain ignorant of those challenges, I don’t understand why he wouldn’t at least discuss such testing as a conceivable way for his theory to be proven true or false in a relatively conclusive way. If my proposal doesn’t actually make sense, then I’d like to be informed of where the sense goes missing. I’d hate to think that he hasn’t entertained this idea because I came up with it rather than him. Or has he actually not even considered what I propose? In any case we seem to be on very different pages in this regard, and yes, it bugs me.

            Liked by 1 person

          17. Eric,
            I’m not insisting on a final answer. We never really get those with science, or if we do, we can never know for sure that we’ve arrived.

            But I won’t be satisfied with one that doesn’t make explanatory progress, which I don’t perceive that simply saying it’s EM fields does. (Or any other substrate identity theory.) If McFadden downplays the need to explain why the EM field is conscious, that’s disappointing, and so much the worse for his theory as far as I’m concerned.

            If there was no other option but to model consciousness the way Newton modeled gravity, then of course I’d settle for it as a placeholder explanation, a way to make progress in the only way we could. But I don’t perceive that’s the choice we’re faced with here. Mainstream neurocognitive theories don’t have the roadblocks, nor EM field theories the explanatory power, that would make it a tempting move.

            Like

          18. I think you are making it much too complicated.

            Conscious experience happens all over the brain when and where X happens.

            The “experiencer” is simply a product/feeling generated when X happens at a particular spot(s), probably in the frontal cortex. It may even be dependent upon language itself to arise, and may serve an evolutionary function in social behavior.

            What is X?

            That’s where I think the EM field may have a role in the explanation.

            Like

          19. “Using your language, why can’t the brain be that experiencer?”

            That doesn’t explain anything either, unless you are arguing that everything the brain does is experience. That would include regulating the heart, respiration, and such. It would include all of the preliminary processing in seeing a tree in the backyard. It would include whatever happens when we retrieve a memory but before the memory actually appears.

            Most scientists accept that the brain generates consciousness, but that it does so on top of a lot of unconscious processes. So you are still left with explaining what it is about the brain activity that makes it conscious.

            Like

          20. James, I’m obviously not holding everything the brain does as experience, any more than you’re saying the EM field around my car motor is conscious when you argue for EM field involvement. Working out what is and isn’t conscious in the brain’s operations is why we have the various neural theories of consciousness.

            Liked by 1 person
