Integrated information theory

I think most of you know I’m not a fan of integrated information theory (IIT).  However, it is a theory proposed by scientists, and I’ve always had a mildly guilty conscience about knowing it only through articles and papers.  Some years ago, I tried to read Giulio Tononi’s book, PHI: A Voyage from the Brain to the Soul, but was repelled by its parable format and low information density, and never finished it.  So when Christof Koch’s new book, The Feeling of Life Itself, was announced as an exploration of IIT, I decided I needed to read it.

Koch starts off by defining consciousness as experience, “the feeling of life itself.”  He muses that the challenge of defining it this way is that it’s only meaningful to other conscious entities.

He then discusses the properties of experience, properties that eventually end up being axioms of the theory.

  1. Experience exists for itself, without need for anything external, such as an observer.
  2. It is structured, that is, it has distinctions, being composed of many internal phenomenal distinctions.
  3. It’s informative, distinct in the way it is, contains a great deal of detail, and is bound together in certain ways.
  4. It’s integrated, irreducible to its independent components.
  5. It’s definite in content and spatiotemporal grain, and is unmistakable.

These then map to postulates of the theory.

  1. Intrinsic Existence: the set of physical elements must specify a set of “differences that make a difference” to the set itself.
  2. Composition: since any experience is structured, this structure must be reflected in the mechanisms that compose the system specifying the experience.
  3. Information: a mechanism contributes to experience only if it specifies “differences that make a difference” within the system itself.  A system in its current state generates information to the extent that it specifies the state of the system that could be its cause in the past and its effect in the future.
  4. Integration: the cause-effect structure specified by the system must be unified and irreducible; that is, the system can’t be reduced to independent, non-interacting components without losing something essential.
  5. Exclusion: only the set of elements that is maximally irreducible exists for itself, rather than any of its supersets or subsets.

All of this feeds into the “central identity of IIT”, which I’ll quote directly from the book.

The central identity of IIT, a metaphysical statement, makes a strong ontological claim. Not that Φmax merely correlates with experience. Nor the stronger claim that a maximally irreducible cause-effect structure is a necessary and sufficient condition for any one experience. Rather, IIT asserts that any experience is identical to the irreducible, causal interaction of the interdependent physical mechanism that make up the Whole. It is an identity relationship—every facet of any experience maps completely onto the associated maximally irreducible cause-effect structure with nothing left over on either side.

Koch, Christof. The Feeling of Life Itself. The MIT Press. Kindle Edition.

All of this factors into the calculation of Φ (pronounced “phi”), a value which indicates the extent to which a system meets all the postulates.  However, as noted in the postulates, there can be Φ values for subsets and supersets of the system.  What we’re interested in is Φmax, the combination of elements that produces the maximum amount of Φ.  According to the Exclusion postulate, only this particular combination is conscious.
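
The actual IIT algorithm unfolds over all of a system’s causal mechanisms and their cause-effect repertoires, and is intractable for anything but tiny systems.  But the core intuition, that an integrated system’s dynamics carry information no partition into independent parts can account for, can be sketched with a toy measure.  To be clear, everything below (the `toy_phi` function, the example networks, the exact formula) is my own simplified illustration, not Tononi’s Φ:

```python
from itertools import product, combinations
from collections import Counter
from math import log2

def mutual_info(pairs):
    """I(X;Y) in bits, from a list of equally likely (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    mx = Counter(x for x, _ in pairs)
    my = Counter(y for _, y in pairs)
    return sum((c / n) * log2(c * n / (mx[x] * my[y]))
               for (x, y), c in joint.items())

def toy_phi(update, n):
    """Toy 'integration' score (NOT the real IIT calculus): information
    the whole system's one-step dynamics carry, minus the most that any
    bipartition of the nodes can account for when its parts are treated
    independently."""
    states = list(product((0, 1), repeat=n))  # uniform over all inputs
    whole = mutual_info([(s, update(s)) for s in states])
    best = None  # minimum over bipartitions of the parts' summed information
    for r in range(1, n // 2 + 1):
        for a in combinations(range(n), r):
            b = tuple(i for i in range(n) if i not in a)
            parts = sum(
                mutual_info([(tuple(s[i] for i in part),
                              tuple(update(s)[i] for i in part))
                             for s in states])
                for part in (a, b))
            best = parts if best is None else min(best, parts)
    return whole - best

# Integrated: every node is the XOR of the other two.
xor_net = lambda s: (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])
# Reducible: every node just copies itself.
copy_net = lambda s: s

print(toy_phi(xor_net, 3))   # positive: no cut leaves the dynamics intact
print(toy_phi(copy_net, 3))  # 0.0: decomposes into independent parts
```

The XOR network scores positive because no bipartition, with its parts taken independently, accounts for what the whole does; the copy network scores zero because it decomposes cleanly.  The real calculation additionally searches over subsets and grains to find the Φmax complex.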

The Exclusion postulate allows IIT to avoid positing multiple consciousnesses within one brain, or group consciousnesses.  It doesn’t, however, rule out scenarios where splitting or combining systems results in new consciousnesses, such as what happens with split-brain patients, or what might happen if two people’s brains were somehow integrated together.

Not all of the brain is necessarily included in its Φmax, only a particular subset.  Koch thinks this is a region he calls the posterior cortical hot zone, comprising regions in the parietal, temporal, and occipital lobes.  In essence, it’s the overall sensory cortex, the sensorium, as opposed to the action cortex, or motorium, at the front of the brain, which is why that Templeton contest between IIT and global workspace theories (GWT) is focused on whether consciousness is more associated with the back or front of the brain.

Koch discusses the evolution of consciousness.  He sees it going back to the reptiles, when the sensory cortex first started to develop.  (Somewhere around the rise of reptiles, or mammals and birds, seems to be where most biologists see consciousness arising, excluding fish, amphibians, and most invertebrates, although as always, a lot depends on the definition of consciousness being considered.)

Koch, in his earlier book, Consciousness: Confessions of a Romantic Reductionist, evinced a comfort level with panpsychism.  In the discussion of IIT in that book, he implied that IIT and panpsychism were compatible.  But in this book, I got the feeling that he now views IIT more as an alternative to panpsychism, one which resolves some of panpsychism’s issues, such as the combination problem.

As noted above, I’m not a fan of IIT, and I can’t say that this book helped much.  All the axioms and postulates make it feel more like philosophy than science.  It continues to feel very abstract and disconnected from actual neuroscience.  Some of the axioms, such as structure and information, seem vague and redundant to me.  (The book adds examples, but I didn’t find them to help much.)  And others, such as the exclusion principle, seem arbitrary, included to save appearances.

The intrinsic existence postulate seems to imply metacognitive self-awareness, but the theory simply assumes that it emerges somehow from integration, ignoring the actual neuroscience of the brain regions associated with introspection.  The postulate also ends up attributing self-awareness to all animals going back to reptiles, despite the lack of any empirical support.

IIT also posits that the feeling of all this emerges from the integration, again ignoring all the neuroscience on affects and survival circuits.  Bringing in all that neuroscience inescapably leads us to the front of the brain, which Koch rules out as having a role in consciousness.

And Scott Aaronson’s classic takedown of the theory remains in my mind.  Koch mentions Aaronson’s criticism, but like Tononi, doubles down and accepts that the arbitrary systems with trivially high Φ that Aaronson envisages are in fact conscious.  If the theory’s designations of consciousness aren’t going to match up with our ability to detect it, how scientific is it really?

But I think my biggest issue with IIT is it inherently attempts to explain the ghost in the machine, particularly how it’s generated.  Most of the other theories I find plausible simply dismiss the idea of the ghost, I think rightly so.  There’s no evidence for a ghost, either spiritual, electromagnetic, or any other variety.  The evidence we have is of the brain and how it functions.

I’ll be happy to go back to IIT if it manages to rack up empirical support.  Until then, it seems like a dead end.

To be clear, I do think integration is crucial, just not in the specific way IIT envisages it.  There are many integration regions in the brain, regions which are themselves integrated with each other.  But Antonio Damasio’s convergence-divergence zones and convergence-divergence regions seem to model this in a much more grounded manner than IIT.

What do you think?  Am I too skeptical of IIT?  Are there virtues of the theory that I’m missing?

This entry was posted in Mind and AI.

101 Responses to Integrated information theory

  1. Stephen Wysong says:

    Nice book report, Mike. Thanks! Both for reading this book and, once again, saving me and perhaps others the expense and time.

    To the definition:

    Consciousness, n., The feeling of life itself.

    1. Before plunging into the copious following details and since you have the book readily at hand, would you please copy for us Koch’s definition of the word ‘feeling’? Without understanding what he means by that word, which is centrally crucial to his definition, I don’t think any of us will be able to understand his definition of consciousness.

    2. By “life itself” I presume he means “being a living organism,” and, we must assume, a living organism in a “world” or a “life-supporting environment” but can you confirm that’s his meaning for that part of his definition of consciousness?

    3. The “central identity of IIT” is a metaphysical statement, i.e., a philosophical statement so that, even though “proposed by scientists” as you say in your second sentence, it is not a scientific theory. If you or Koch believe that’s incorrect would you explain why?


    • 1. I think you’ll like this.

      I also do not distinguish between feeling and experience, although in everyday use feeling is usually reserved for strong emotions, such as feeling angry or in love. As I use it, any feeling is an experience.

      Koch, Christof. The Feeling of Life Itself. The MIT Press. Kindle Edition.

      Given everyday usage, as he notes, I think his usage clouds rather than clarifies.

      2. I would assume that as well, but I can’t find anywhere where he makes it explicit. (Searching for “life” in the book gets a lot of hits.)

      3. It is a metaphysical statement. But just because a statement is metaphysical doesn’t mean it’s not scientific.

      Jim Baggott, who is pretty conservative about what he calls science, points out that any scientific theory is a metaphysical statement, just one that makes testable predictions. But given the problem of induction, no amount of testing will ever verify it: no matter how many tests it’s passed, the 10^100th observation a million years from now may falsify it. Therefore, whatever law or rule the theory asserts, it’s a metaphysical one.

      That said, as I noted in the post, I have my own reservations about how scientific IIT is.


      • Stephen Wysong says:

        Thanks for looking that up Mike. The staggering imprecision of Koch’s definition of the term ‘consciousness’, the term that is crucially fundamental to IIT—the very thing IIT is attempting to explain—not only ‘clouds’ or obfuscates, it is completely invalidating and disqualifying for everything that follows.

        And in “everyday use” the word ‘feeling’ refers to much more than emotions—touch, pain, temperature and all physical feelings are feelings, even more fundamental than emotional feelings.

        Score: IIT 1 Woo.

        “Life itself” is tossed off as requiring no explanation and you found no reference of any kind to life’s core characteristic of being biological. Is that because IIT mathosophy has (and wants) nothing at all to do with consciousness as biological?

        Score: IIT 2 Woo’s, which coalesces to WooWoo

        And that we can conclude immediately from the hopeless insufficiency of Koch’s definition of consciousness.

        As stated, the “central identity of IIT” is a metaphysical statement—a philosophical statement. Your suggestion that it might still be a scientific statement causes me to wonder how metaphysical you believe Maxwell’s theory of electromagnetism is. And perhaps you’re misinterpreting Jim Baggott, who wrote, “I just want us all to acknowledge the difference between empirically based scientific theories and metaphysics, or pseudo-science.” Per Baggott then (and I concur) IIT is metaphysics; it is philosophy; it is pseudo—meaning ‘sham’—and I’m inclined to even omit the “-science” part.

        I suspect 100% ditto for GWT. Can we look forward to a book report for that Mike? 😉

        I note finally that IIT’s unquestionable WooWoo-ness means that it’s impossible for anything at all meaningful to be derived from those IIT “cortical activity experiments” you recently wrote about.


        • “Is that because IIT mathosophy has (and wants) nothing at all to do with consciousness as biological?”

          I suspect Koch would say that, while he’s open to non-biological consciousness, IIT is very much about biology. For instance, he thinks computationalism is wrong.

          On Jim Baggott, from his book ‘Farewell to Reality’ (a book criticizing theoretical physics for non-testable theories) (emphasis added):

          The principal requirement of a scientific theory is that it should in some way be testable through reference to existing or new facts about empirical reality. The test exposes the veracity or falsity of the theory, but there is a caveat. The working stuff of theories is itself abstract and metaphysical. Getting this stuff to apply to the facts or a test situation typically requires wrapping the abstract concepts in a blanket of auxiliary assumptions; some explicitly stated, many taken as read. This means that a test is rarely decisive. When a test shows that a theory is false, the theory is not necessarily abandoned. It may simply mean that one or more of the auxiliary assumptions are wrong.

          Baggott, Jim. Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth (p. 20). Pegasus Books. Kindle Edition.

          “I suspect 100% ditto for GWT. Can we look forward to a book report for that Mike?”

          Already did it. https://selfawarepatterns.com/2019/06/23/dehaenes-global-neuronal-workspace-theory/
          I think GWT is far more scientific than IIT and has a reasonable chance of being part of the solution, although I currently think the consilience with higher order thought and attention schema theories I discussed the other day captures more of it.

          On your last point, I think if they do figure out a way to answer the back-or-front-of-the-brain issue, it will be a major service. My money is on both of them lighting up (which would be a win for the front advocates).


  2. paultorek says:

    I don’t think Koch’s axiom 5 (It’s definite in content and spatiotemporal grain, and is unmistakable.) is true. Imagine the back of a penny (or Euro coin, or whatever you are familiar with). Visualize it in as much detail as possible, then go on reading.

    Now, how many columns were there on the Lincoln Memorial, in your imagined picture?

    My answer was: I dunno. So it seems like not everything about consciousness is definite.

    I don’t think Koch’s theory implies that metacognition is universal among conscious creatures. Metacognition requires juxtaposing a view of oneself as an organism (from the outside, so to speak) with a view of oneself as a subject of experience. Awareness of multiple features of experience at the same time, with the features interacting, is much leaner and meaner than that.


    • Paul, I’m sympathetic, but wanna push back a little. The experience of your memory of the penny was definite: you definitely did not experience the number of columns. (Not being facetious here.) Your experience of actually looking at a penny would have a definite number of columns (possibly assuming the number is 4 or less).

      *


      • paultorek says:

        Interesting. That might save the axiom, but then I’d wonder if, so understood, axiom 5 actually justifies anything in the theory/postulates.


        • Actually, I think axiom 5 (definite-ness) might support postulate 1, a set of “differences that make a difference”. If some feature is definitely in the experience, there must be some part of the physical make-up that accounts for that feature being part of the experience.

          *


    • Koch would probably respond that he’s talking about your experience of remembering the penny, rather than the penny itself, phenomena vs noumena. Of course, this ends up being circular. What you introspected is definite because you introspected it, which isn’t nearly as profound when put that way.

      That is much leaner and meaner, but I’m not sure how we get to “exists for itself” with it. Of course, this might come down to what that phrase means exactly, and despite having read the book, I can’t tell you that for sure.


  3. [can’t help myself].

    “Φ (pronounced “phi”)”

    Now do “phi”!

    *
    [fee, fi, fo, fum]


  4. What do I think? I think IIT is almost exactly right, but incomplete, because it maps really well to my understanding but doesn’t provide the correct understanding of qualia. The only wrong parts are postulate 5, and the idea that Phi (fi,fo,rum) just is consciousness.

    BTW, did the book talk about qualia space?

    *
    [No consciousness without representation!]


    • If you reject Phi as consciousness, then I think you’ve rejected the core of the theory. Like me, you agree integration is important, just not in exactly the way IIT stipulates.

      A search for “qualia space” gets no hits. A search for just “qualia” gets only one, in a quote from Dennett dismissing the concept. Of course, Koch dismisses Dennett’s dismissal, but without the q-word.

      Even “representation” only gets a few hits, mostly in the context of an anti-computationalist rant. The IIT camp is just on a whole different channel from the rest of the field.


  5. john zande says:

    Great review.

    You’re right in pointing to it remaining far too abstract, but I still think it holds merit when deployed as an explanatory element of panpsychism. That central idea of consciousness being an emergent phenomenon built up of bits of the same thing is zeroing in on something probably pretty close (IMO) to where this whole investigative journey will wind up one day.


    • Thanks John. You might be right.

      Myself, if it does turn out to be an emergent phenomenon, I hope we can understand how the emergence happens. For instance, we know thermodynamics emerges from particle physics, and we understand how it happens. What makes me uneasy is the idea of strong emergence, of it being some kind of magical step. If we’re forced to invoke that, I think it will indicate that we still don’t understand what’s happening.


      • john zande says:

        Yeah, it’s the “some kind of magical step” that makes me squirm, too. I’m looking at it like how we look at a so-called “living cell.” It is a protein based robot hosting millions of chemical reactions every second, yet it is nothing but dead matter being moved chemically or mechanically by the business-like laws of interaction. Each individual function is no more “alive” than a mechanical hole puncher performing the same task every second is alive. However, put all those parts and systems together (one working off another, affecting another still) and we get the appearance of a living thing. Indeed, we call it a “living cell.” And yet, nowhere in this contraption is anything that is actually “alive.” Not a single thing. The same, I think, probably applies to consciousness.

        Forgot where I read this, or who said it, but I copied this passage (below) ages ago and it seems appropriate here for another perspective:

        An H2O molecule is not just a little piece of water. Consider what liquid water does: it flows, forms droplets, carries ripples and waves, and freezes and boils. An individual H2O molecule does none of that: those are collective behaviors.


        • I think the cell analogy is the right one. Life was once thought to have a vitalism that separated it from inanimate matter. The mystery of vitalism was as intractable as consciousness is often regarded as being today.

          Hardly anyone talks about vitalism anymore, except to reference it as a historical viewpoint, because while we don’t have every detail, we fundamentally understand what’s going on, that it’s all organic chemistry (and “organic” just means that carbon atoms are in the mix). To be sure, the chemical reaction pathways that make up metabolism are horrendously complex. But we understand that’s what metabolism is. Enough that anyone pushing vitalism today is pushing pseudoscience.

          For consciousness, I think we’re headed for a similar resolution. Like the chemical pathways, the solutions are going to be profoundly complex. Probably few people will really understand them, and no one will understand all of it. But I think we will get to a point where we’ll be confident that we’ve identified the basic mechanisms.

          At some point, people will look back and note how people once talked about consciousness as though it were magic, just as they do today with vitalism.


          • Linda says:

            I personally, am not seeing anything ‘magical’ about what you two are mentioning. None of it strikes me as magical. It never has. Except for vitalism, I call that magical. But something like panpsychism, not really.


          • I didn’t mean to imply that James and I were talking about it in magical terms, just that a lot of people do, often implicitly and unconsciously.


  6. Wyrd Smythe says:

    I tend to agree with your assessment. IIT is necessary, but not sufficient.

    I am a bit curious what, if anything, it says about physical systems versus numerical simulations, and, if it does, how it looks at the physical system executing the numeric simulation — does the computer hardware count in calculating Φ or just what the simulation implements?

    I did crack up when I read “Koch starts off by defining consciousness as experience, ‘the feeling of life itself,’” given we were just discussing that very thing. 😀


    • I think you’ll like this part of it. The calculation of phi focuses on the physical structure, so mainstream computers get a very low score. Ginger Campbell pointed out that this is ironic given that it’s called the “integrated information theory”, when it’s really more about structure than information.

      And Koch is a fervent anti-computationalist. He goes through all the standard criticisms of computationalism. So you’d likely find a lot to agree with him in that area.


      • James Cross says:

        It seems to me that computationalism and IIT ought to go together, so I’m surprised Koch doesn’t like computationalism and you don’t like IIT (but not as surprised, since I’ve read your objections before). If mind is computation, then measuring computation ought to be a way of measuring mind. Is it that you just don’t like the particulars that IIT measures, or is there some more basic problem with measuring it at all?


        • When I first heard about IIT many years ago, it did sound like a computationalist theory, but it isn’t, at least not in the manner Koch portrays it. And phi is not a measure of computation. It’s touted as a measure of integration, but I think that’s only true in terms of structure, and even then structure at a certain level of organization.

          I’m not opposed to measuring something, as long as it’s meaningful, although my deep suspicion is that phi is measuring the wrong thing, and that attempting to boil it all down to one number is like attempting to boil down how alive something is to one number. I’m sure you can come up with a formula for it, but whether it’s meaningful is a different matter.

          I do think the mind is a type of computation, but not all computation is mental, just as not all computation is Tetris. So measuring computation, such as how many MIPS the system can perform, may tell us whether a system has the capacity and performance to be a mind, but not whether it is one.


          • James Cross says:

            MIPS would seem to me to be an incorrect measure unless mind has a certain performance requirement. For one thing, I would think we could have slow minds and fast minds that get to the same results except that the slow mind takes longer. For another, since every mind is running more or less on the substrate of neurons, it would be hard to imagine that human neurons are significantly faster than snake neurons. Where the difference lies between the human and the snake is in the number of neurons which sort of leads back to a network size or integration sort of number like IIT provides.


            MIPS was just an example. But the brain does make up for its pokey performance with massive parallelism. As you note, a larger physical neural network has higher overall throughput than a smaller one. It’s a crowdsourcing strategy instead of the high-performance strategy that conventional computers have been using for the last several decades. (Although that’s changing. High performance computing today is far more about massive clusters of parallel processors. And newer designs are using lower-powered chips with ever larger numbers of cores.)

            But if you have faster processors, you can get by with less parallelism. Put another way, there are many ways to skin a cat. If IIT were really about the integration of information, it would be open to different ways that might happen. But it seems to be more about reifying specific aspects of structure as a recipe for the ghost.


          • Wyrd Smythe says:

            “If IIT were really about the integration of information, it would be open to different ways that might happen. But it seems to be more about reifying specific aspects of structure as a recipe for the ghost.”

            A ghost? You seem utterly opposed to the idea that physical structure could matter in something as complex as consciousness. It matters in all other physical systems. What makes the brain special?


            I’m not opposed to the idea that physical structure matters, and I think I’ve made that clear in our discussions. But for just about any functionality, there are many different ways to accomplish it. For example, a heart is not the only way to pump fluid, lungs are not the only way to provide respiration, bones are not the only way to provide a support structure, etc. Why should the functionality that brains provide only be possible in the exact manner they provide it?

            For that matter, different brains on the phylogenetic tree accomplish similar functions using radically different anatomies. Vision in an invertebrate has functional convergence with vertebrate vision, but the neural structures are completely different. In evolution and engineering, there are many ways to skin a cat.


          • Wyrd Smythe says:

            “For example, a heart is not the only way to pump fluid,”

            Absolutely, and I don’t think we’ve ever had any dispute between physical isomorphic systems accomplishing the same physical function.

            “For that matter, different brains on the phylogenetic tree accomplish similar functions using radically different anatomies.”

            Exactly. For example my post about octopus brains.

            “In evolution and engineering, there are many ways to skin a cat.”

            And, generally in evolution, and certainly in engineering, structure matters. The dispute has always been between physical systems and numeric simulations of those systems.


          • Wyrd Smythe says:

            “It’s touted as a measure of integration, but I think that’s only true in terms of structure, and even then structure at a certain level of organization.”

            It does seem focused entirely on structure, which I approve of, since I think that matters a lot.

            I’m not sure that the “level of organization” matters that much if consciousness is occurring on that level. What would seem to matter would be that the structure supports consciousness.

            “I do think the mind is a type of computation, but not all computation is mental,”

            What makes the difference? Why is crunching some numbers not mental but crunching other numbers is?


          • The same thing that determines whether the computation is for Tetris, an accounting system, or a face recognition system. It matters what functionality is being implemented. In my mind, computation for a mind involves building and utilizing predictive representations, of sensory information coming in, of lower level reflexive dispositions, and, in the case of humans, recursive representations of the representations.

            It’s a very mechanistic and functionalist viewpoint, one that I know many dislike, but it’s what I think is reality.


          • Wyrd Smythe says:

            So, to be clear, the only difference between Tetris and a fully operational mind is the numbers?


          • I see it as the numbers and the correlated physics (which can vary).

            (Just so you know, I’m pretty fatigued on the numerical vs physical thing, unless there’s something new to discuss on it.)


          • Wyrd Smythe says:

            “I see it as the numbers and the correlated physics (which can vary).”

            I apologize, but I don’t know what you mean by “correlated physics.”

            I think we agree the machine doesn’t know what the numbers it crunches mean — that’s only in the Programmer’s head, right?

            And there’s what the system appears like to the user: Tetris, Excel, a browser, or an emulation of Alan Turing.

            Or so you assume, yeah?


          • “I apologize, but I don’t know what you mean by “correlated physics.””

            Well, a machine language opcode is a number, but it’s also a physical pattern with causal effects in the system.

            “I think we agree the machine doesn’t know what the numbers it crunches mean — that’s only in the Programmer’s head, right?”

            Sure, but I don’t know what any particular neuron firing in my head means. Meaning comes in the relations between the system and its environment.

            “And there’s what the system appears like to the user: Tetris, Excel, a browser, or an emulation of Alan Turing.”

            Sure. I see where you’re going with this. I’ll remind you that I think consciousness is in the eye of the beholder.


          • Wyrd Smythe says:

            “Well, a machine language opcode is a number, but it’s also a physical pattern with causal effects in the system.”

            Ah, got it, thanks.

            Those causal effects are things like add and move data instructions — the things the computer is able to do. (What I called the first causal system of a computer in my post.)

            You agree those things have no meaning to the machine, that all meaning is in the intent of the designer. (What I called the second causal system of a running system — the virtual reality that’s encoded in the numbers by the programmer.)

            “I’ll remind you that I think consciousness is in the eye of the beholder.”

            Yep, understood, and this just leaves us at the divide.

            You think a computed b-zombie, a functional replacement, would effectively be as conscious as anything else that claimed to be conscious. I’m not sure I think one is possible, but if it were, and it seemed to survive the Rich Turing Test (conscious interaction over time), then I would tend to agree it was conscious.

            I’m not confident a computed b-zombie is possible, and I guess you are? You seem convinced it’s just an engineering problem, like breaking the sound barrier, not an in-principle problem, like FTL.

            Alternatively, you also believe a numerical simulation of the physical brain would work, and unless I’m doing you a disservice, I believe you have high confidence here, too? Again, if one were possible, and it did respond like a conscious mind, I would lean towards saying it’s conscious. (But I’m dubious one is possible for reasons I’ve enumerated.)

            Maybe the bulk of our divide is just over the confidence one should have these things are possible?


          • I like the phrase “rich Turing test”. I’ve also used “strong Turing test” in contrast with the weak version people have focused on.

            “I’m not confident a computed b-zombie is possible, and I guess you are? You seem convinced it’s just an engineering problem, like breaking the sound barrier, not an in-principle problem, like FTL.”

            The longer the putative zombie can pass the Turing test, the less chance it’s a zombie. If we know it does things like build and utilize representations, it would strengthen my sense that it’s not a zombie. (Of course, a b-zombie that only needs to briefly fool someone is trivial to construct.)

            I do think it’s an engineering problem. FTL has specific laws of nature that get in the way, and while there are speculative solutions, they all seem to depend on speculative physics which we have no evidence for, or astronomical energy levels. I don’t (currently) see those kinds of obstacles for brains.

            “I believe you have high confidence here, too?”

            “High confidence” is too strong a term. I don’t see any laws of physics in the way. But I won’t be confident until we’ve done it with a reasonably complex animal; a vertebrate at least, and I won’t have high confidence until I’ve met an uploaded human. (Which of course I probably never will since this is likely centuries in the future.)


          • Wyrd Smythe says:

            So, yeah, the divide is mainly about skepticism and confidence in future results.

            Based on a perception of where the problem lies on the “Mach Speed vs FTL” spectrum. You’re more confident the engineering problem has a viable solution whereas I perceive the potential for unsolvable problems and am skeptical.


          • Wyrd Smythe says:

            p.s. Are we now on the same page regarding a computer having two distinct causal systems, one regarding the “physical pattern[s] with causal effects in the system” and the other regarding the programmer’s intent? That the physical one enables the virtual one, but the two represent completely different causal models (i.e. a computer and a Tetris or whatever)?


          • I think we have to just agree to disagree on this one.


          • Wyrd Smythe says:

            Okay, I’ll drop it. Just to be clear, this is a new topic on which we’re disagreeing: that hardware implements one physical causal system and software encodes another, unrelated, virtual causal system. You seem to be disagreeing about objective facts, so I walk away confused on this one.


      • Wyrd Smythe says:

        “And Koch is a fervent anti-computationalist. He goes through all the standard criticisms of computationalism.”

        Ah, yes I would enjoy that part. 😀

        (But I still think IIT is necessary, but not sufficient.)


  7. James Cross says:

    Interesting article here by Koch:

    https://christofkoch.files.wordpress.com/2019/08/koch-building-a-consciousness-meter-17.pdf

    They zapped people in various states with a magnetic pulse and measured the complexity of the EEG responses. He says this provides a crude measure of Phi. Unconscious people respond with low complexity, conscious people with higher complexity.

    Seems like some experimental validation of the general idea of integration being a rough measure of consciousness.
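    For a rough sense of the math, the “zip” half of that zap-and-zip measure amounts to a Lempel-Ziv phrase count over the binarized EEG response. A simplified sketch of that count (my own simplification, not the published PCI algorithm — highly compressible responses score low, complex ones score high):

```python
def lz_complexity(s):
    """Lempel-Ziv-style complexity of a binary string: the number of
    phrases in a left-to-right parse, where each new phrase is the
    shortest chunk not already seen in the preceding text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already appears earlier
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1       # count the new phrase
        i += l
    return c

# A flat (unconscious-like) response compresses to almost nothing;
# a varied response yields many more phrases.
flat = lz_complexity('0' * 20)
varied = lz_complexity('01101001100101101001')
```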


    • I haven’t read the article, but I think he discussed this study in the book. The thing is, all the major theories of consciousness posit widescale communication across the brain, although they differ on the specific regions necessary for consciousness. When you look at these studies, the specific data is underdetermined in terms of which theory it supports. Typically IIT can claim victory, but so can GWT and HOT (and thalamo-cortical loop theory for that matter).

      The theories that do seem challenged by these kinds of results would be local first order ones.


    • Wyrd Smythe says:

      “Seems like some experimental validation of the general idea of integration being a rough measure of consciousness.”

      Or, per your discussion of MIPS above, at least a measure of the system’s capacity to support consciousness. Maybe that’s all IIT gives us: a measure of the potential for a system to be conscious.

      I would think synapse behavior, for one, would be important to the process, but (AIUI) IIT only measures connectivity.


      • James Cross says:

        The more I think about it the more I’m tempted to think the results are just showing a general alertness or wakefulness, which is possibly a precondition for consciousness but otherwise tells us little else about how it works.

        However, it does seem that everything we think might be conscious goes through cycles of wakefulness and sleep. This is controlled primarily by the RAS (reticular activating system). My knowledge of electronics isn’t great, but it seems somewhat like a carrier wave that is on or off. When it is on, consciousness can modulate it and do what it does, but without it there is no consciousness. I’m speaking strictly by analogy.


  8. geekgirljoy says:

    Hierarchies of CDZs sounds like a good way to build an artificial mind.


  9. James Cross says:

    This is interesting and popped up on my WordPress reader feed.

    https://authordavidwolf.wordpress.com/2018/12/23/how-a-trippy-1980s-video-effect-might-help-explain-consciousness/

    “Brains, I argue, are not squishy digital computers – there is no information in a neuron. Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information.”

    Article discusses IIT and Koch’s magnetic pulses specifically.

    “What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback. This causes a self-referential cascade of actualised differences to blossom with astronomical complexity, and it is this that we experience as consciousness. Video feedback, then, may be the nearest we have to visualising what conscious processing in the brain is like.”


    • James Cross says:

      I’ve suspected energy is more critical to this than information, although there is probably a relationship. I’m thinking especially of the low amount of energy the brain uses overall, and the premium evolution would place on energy usage.


    • James Cross says:

      Full paper here:

      https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02091/full

      Talks about the brain as a difference engine, which sort of parallels my analogy above about modulation of a carrier wave.


    • I saw this article back when it came out in The Conversation. My question then and now is, isn’t this just describing an information system at a lower level of abstraction? Any information processing system is also going to be an energy processing system. Is there any other way to process information?

      And I’m not clear on how viewing the brain as a difference engine isn’t just viewing it as a type of computational system.


      • James Cross says:

        Probably Wyrd can answer this better.

        Consciousness and energy seem more analog. Analog can be converted to digital, but usually with a loss of information. Perhaps at a certain point things reduce to binary choices, but if every calculation along the way is converted to binary, you could be dropping a lot of information.


        • Wyrd Smythe says:

          James, you quoted another post (I haven’t read it yet):

          “What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback.”

          Which sounds very similar to Douglas Hofstadter’s “Strange Loop” concept. He also uses the feedback loop analogy. I didn’t think too much of it at first, but the more I thought about it, the more I thought there might be something to it.

          Hofstadter is really big on self-referential loops. If you remember the geek cultish book Gödel, Escher, Bach, that’s all about self-referential loops. In I Am a Strange Loop he condensed GEB down into a much smaller and more readable book that focuses on his loop theories without wandering around through so much other territory (like finite automata and Gödel).

          “I’ve suspected energy is more critical to this than information,…”

          I agree.

          “Consciousness and energy seem more analog. Analog can be converted to digital but usually with a loss of information.”

          I agree. One concern is the difference between the analog realm and the numeric realm. Chaos theory makes the differences very clear. The question is whether the numbers can be good enough.

          With many digital systems — music for instance — we do a good enough job (although some people complain about standard digital music, it’s not clear whether the real complaint is an actual problem they hear or just the difference between live music and digital).

          It kinda depends on what the numbers involved mean and how they’re used. It’s possible consciousness depends on chaotic effects leveraged by nature. If so, computations might have a problem keeping up with the necessary precision.

          I see the analog-numeric divide as one potential source of problem, but the energy aspect you bring up is a second issue. That’s the physical-virtual divide I’ve posted about recently. It’s also a potential source of problem.
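          The logistic map, a standard toy chaotic system (illustrative only, not a brain model), shows how fast chaos eats precision: a perturbation far below any conceivable measurement noise dominates the trajectory within a few dozen steps.

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x); fully chaotic at r = 4."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x0 = 0.2
eps = 1e-12                 # far below any physical measurement precision
# the gap between the perturbed and unperturbed trajectories grows
# roughly exponentially until the two are simply unrelated
early = abs(logistic(x0, 10) - logistic(x0 + eps, 10))
late = abs(logistic(x0, 50) - logistic(x0 + eps, 50))
```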


        • You can lose information; the question is whether it’s relevant. The quantization noise from converting to digital needs to be less than the noise in the analog system. We do it all the time with music and TV signals.
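          A quick sketch makes the tradeoff concrete (a uniform quantizer and an arbitrary test tone, both made up for illustration): each extra bit roughly halves the quantization error, so the engineering question is only whether the chosen bit depth pushes that error below the analog system’s own noise floor.

```python
import math

def quantize(x, bits):
    """Uniform quantizer over [-1, 1] at the given bit depth."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

def rms_quantization_error(bits, n=10000):
    """RMS quantization error over an arbitrary sampled test tone."""
    total = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * i / 97.0)   # made-up test signal
        e = quantize(x, bits) - x
        total += e * e
    return math.sqrt(total / n)

# 16-bit (CD-quality) quantization noise is orders of magnitude below
# 8-bit noise; whether either is "relevant" depends on the analog noise.
err8 = rms_quantization_error(8)
err16 = rms_quantization_error(16)
```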


        • “Why bother to convert to digital if there is no need?”

          With digital you get error correction. With analog, noise accumulates.
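          A deterministic toy comparison shows the contrast (a 3x repetition code with one corrupted symbol per hop; the noise figures are made up): the digital message survives any number of copies exactly, while the analog value only drifts.

```python
import random

def analog_copy_chain(value, hops, noise=0.01, seed=1):
    """Each analog re-recording adds a little noise; there is no way
    to tell signal from accumulated noise, so errors only grow."""
    rng = random.Random(seed)
    for _ in range(hops):
        value += rng.gauss(0, noise)
    return value

def digital_copy_chain(bits, hops):
    """Each digital hop corrupts one transmitted symbol, but a 3x
    repetition code with majority voting corrects it every time."""
    for hop in range(hops):
        encoded = [b for b in bits for _ in range(3)]   # triplicate each bit
        encoded[hop % len(encoded)] ^= 1                # one error per hop
        bits = [int(sum(encoded[i * 3:i * 3 + 3]) >= 2)  # majority vote
                for i in range(len(bits))]
    return bits
```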

          *


          • James Cross says:

            For what it’s worth, signalling in the brain seems to be both analog and digital:

            https://www.technologyreview.com/s/522066/solving-the-neural-code-conundrum-digital-or-analog/

            However, even that looks like a sort of reverse engineering of the signals and may not actually capture what is going on. For all we know now, it might not exactly fall into either of those models.


          • James Cross says:

            Then there’s this article that makes it seem something is going on in the dendrites too.

            “This hybrid digital-analog, dendrite-soma, duo-processor parallel computing “is a major departure from what neuroscientists have believed for about 60 years,” says Mehta. It’s like uncovering a secret life of neurons, he adds.”

            https://singularityhub.com/2017/03/22/is-the-brain-more-powerful-than-we-thought-here-comes-the-science/


          • The digital / analog mix seems widely acknowledged in neuroscience. Evolution avails itself of whatever method works, which makes it a very messy engineer. (And a pain to reverse-engineer.)

            Singularity Hub has a tendency to sensationalize things. I think it’s been known for decades that dendrites are not passive mechanisms, but can generate their own potentials. They tend to be weaker and more prone to fading than the ones generated at the soma / axon. It does allow some logical processing to happen in the dendritic tree, but it’s limited since inhibitory synapses mostly connect to the soma.


          • James Cross says:

            You can easily find the abstract:

            “DAP firing rates were several-fold larger than somatic rates. DAP rates were also modulated by subthreshold DMP fluctuations, which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.”

            Also

            “The dendrites generated several-fold more spikes than the soma.”

            https://science.sciencemag.org/content/355/6331/eaaj1497


            The firing rates being higher than in the soma makes sense, since the soma is only going to fire once enough dendrites have fired to sufficiently depolarise it. But as I noted above, the types of logical processing happening in the dendrites are limited by the sparsity (albeit not complete absence, since biology is never clean with these kinds of things) of inhibitory synapses (at least according to John Dowling in his book on the brain). In other words, you can get a lot of AND and OR type processing, but little to no NOT processing until the soma.
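            There’s a clean formal version of that point: circuits built only from AND and OR compute only monotone functions, so anything requiring inversion (NOT, XOR) is out of reach before the soma. A brute-force check (toy sketch, not a neuron model):

```python
from itertools import product

def and_or_only(a, b, c):
    """An arbitrary example circuit built purely from AND and OR."""
    return (a and b) or (b and c) or (a and c)

def is_monotone(f, arity=3):
    """True if raising any input 0 -> 1 never lowers the output."""
    for x in product([0, 1], repeat=arity):
        for y in product([0, 1], repeat=arity):
            # x <= y componentwise must imply f(x) <= f(y)
            if all(xi <= yi for xi, yi in zip(x, y)) and f(*x) > f(*y):
                return False
    return True

# Every AND/OR composition passes; anything needing inhibition fails.
```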


          • Wyrd Smythe says:

            But error correction just protects the data as-is, including any noise introduced in the analog-to-digital step. That’s where artifacts get introduced in digital music, for instance.


          • Actually Wyrd, error correction is important in the transmission or communication of information. That’s why DNA is digital, and that’s what all of Shannon’s work was about.

            Even if some neurons are doing digital things, I haven’t seen any error correction associated with that.

            *


          • Wyrd Smythe says:

            You misunderstand. Absolutely error correction protects digital data in transmission and storage! I’m just saying it can’t do anything about noise introduced as a consequence of the analog-to-digital conversion. That noise is part of the digital data error correction protects.

            For instance, when (analog) music is converted to digital any conversion artifacts are part of the digital data.


      • James Cross says:

        Some quotes from paper:

        But while IIT is presented as a theory of integrated information, it could equally serve as a theory of how energetic processing is organized since the physical substrate of consciousness consists in the causally interrelated patterns of neural firing that are identical with the conscious experience.

        Treating brains as neural information processors does not help us to understand consciousness as a physical process because information, according to the commonly accepted definitions, is not a physical property of brains at the neural level; there is no information in a neuron.


  10. Hariod Brawn says:

    I don’t know.

    Same as everyone else.

    It’s all guesswork.

