The information generation theory of consciousness

James of Seattle called my attention to an interesting paper in the Neuroscience of Consciousness journal: Information generation as a functional basis of consciousness:

  • Drawing upon empirical research into consciousness, we propose a hypothesis that a function of consciousness is to internally generate counterfactual representations detached from the current sensory events.

  • Interactions with generated representations allow an agent to perform a variety of non-reflexive behaviors associated with consciousness, such as intention, imagination, planning, short-term memory, attention, curiosity, and creativity.

  • Applying the predictive coding framework, we propose that information generation is performed by top-down predictions in the brain.

  • The hypothesis suggests that consciousness emerged in evolution when organisms gained the ability to perform internal simulations using generative models.

The theory described broadly matches an idea I’ve pondered several times: that consciousness is about planning, that is, about performing simulations of action-sensory scenarios, enabling non-reflexive behavior. The authors, though, situate their ideas within the frameworks put forth by global workspace and predictive coding theories.

But their theory is more specifically about the internal generation of content, content that might be a prediction about current sensory input, or that might be counter-factual, that is, content not currently being sensed.  In many ways this is similar to the predictive coding framework, but it’s not identical.

In the brain, sensory information flows into early sensory processing regions, and proceeds up through neural processing layers into higher order regions.  But this encoding stage is only a small fraction of the processing happening in these regions.  Most of it is feedback: top-down decoding from the higher order areas back down to the early sensory regions.

In predictive coding theory, this feedback propagation is the prediction about what is being sensed, and the feedforward portion is actually error correction to the prediction.  The idea is that, early in life, most of what we get is error correction, until the models get built up; gradually, as we mature, the predictions become more dominant.

Importantly, when we’re imagining something, that imagined counterfactual content is all feedback propagation, since what’s being imagined generally has little to no relation to the sensory data coming in.  Imagination is less vivid than perceiving current sensations because the error correction doesn’t reinforce the imagery.  (Interestingly, the authors argue that imagery in dreams is more vivid because there’s no sensory error correction to dilute the imagery.)
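
To make the mechanics concrete, here’s a minimal toy sketch in Python (my own linear illustration, not code from the paper; the sizes, weights, and learning rate are all invented). In perception mode, the top-down feedback carries the prediction while the bottom-up signal carries the error that revises it; in imagination mode, the same top-down pathway generates content with no error signal to reinforce or correct it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear generative model: a higher-order state z predicts sensory input x
# through top-down weights W. Everything here is illustrative.
W = rng.normal(size=(4, 2))          # top-down (decoding) weights
z = np.zeros(2)                      # higher-order representation
lr = 0.1                             # update rate for error correction

def perceive(x, steps=50):
    """Perception: feedback carries predictions, feedforward carries errors."""
    global z
    for _ in range(steps):
        prediction = W @ z           # top-down: predicted sensory input
        error = x - prediction       # bottom-up: residual prediction error
        z = z + lr * (W.T @ error)   # error correction revises the state
    return W @ z                     # the percept settles toward the input

def imagine(z_counterfactual):
    """Imagination: pure top-down generation, no sensory error to correct it."""
    return W @ z_counterfactual

percept = perceive(rng.normal(size=4))     # driven by "sensory" input
imagery = imagine(np.array([1.0, -0.5]))   # content detached from current input
```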

The information generation theory is that this prediction feedback is what gives rise to conscious experience.  This theory could be seen as similar to recurrent processing theories, although the authors seem to deliberately distance themselves from such thinking by making their point with a non-recurrent example, specifically splitting the encoding and decoding stages into two feedforward-only networks.
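
That non-recurrent example might look roughly like the following sketch (my own rendering with invented, untrained weights; the paper doesn’t publish code): encoding and decoding are two separate feedforward-only passes, and generation is just a decoder pass, whether the code comes from current input or is supplied internally.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two separate feedforward-only networks. The weights are random placeholders;
# in practice they would be trained (e.g., as an autoencoder).
W_enc = 0.1 * rng.normal(size=(2, 8))   # encoding stage
W_dec = 0.1 * rng.normal(size=(8, 2))   # decoding (generation) stage

def encode(x):
    return np.tanh(W_enc @ x)           # one feedforward pass, no recurrence

def decode(code):
    return W_dec @ code                 # one feedforward pass, no recurrence

x = rng.normal(size=8)
reconstruction = decode(encode(x))               # generation tied to current input
counterfactual = decode(np.array([0.7, -0.3]))   # generation from an internal code
```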

The authors note that there is a strong and weak version to their hypothesis.  The weaker version is that this kind of processing is a necessary component of consciousness, and is therefore an indicator of it.  The stronger version is that this kind of information generation is consciousness.  They argue that further research is necessary to test both versions.

The hypothesis does fit with several other theories (each having their own strong and weak claim).  The authors even try to fit it with integrated information theory, although they admit the details are problematic.

This is an interesting paper and theory.  My initial reaction is that their weaker hypothesis seems far more plausible, although in that stance it could be seen as an elaboration of other theories, albeit one that identifies important causal factors.  The stronger hypothesis, I think, would require substantially more justification as to why that kind of processing, in and of itself, is conscious.

That’s my initial view.  It could change on further reflection.  What do you think?

33 thoughts on “The information generation theory of consciousness”

  1. This is just another example of conflating mind with consciousness, which only adds to the confusion and obfuscates meaning. Consciousness is a word, and we are going to have to learn how to use that word correctly if we expect to move forward.

    Peace

      1. “But what is the right way to use it?”

        Consciousness is universal, not just limited to mind.

        “And what makes that usage objectively the right way over other ways espoused by others?”

        The correct answer is reality/appearance metaphysics. Ontology is what distinguishes one “thing” from another “thing”, not epistemology. According to RAM, there is no ontological distinction between matter and mind. If there is no ontological distinction, then the mechanisms that govern motion which result in form within the material world of matter must be the same mechanisms that govern motion which result in form within the mental world of mind. This is a position which you previously espoused to be correct.

        Therefore, predicated upon the principles of logical consistency, in other words objectivity itself: if one chooses to correlate consciousness with mind, one must be equally compelled to correlate consciousness with matter. That is the only logical conclusion which can be derived from the ontology. Agreed?

        But here’s the problem: My models assert that in light of the weight of overwhelming evidence, homo sapiens are incapable of being objective. What do you think?

        Peace

        1. “if one chooses to correlate consciousness with mind, one must be equally compelled to correlate consciousness with matter.”

          The logical issue I see here is that just because consciousness is matter, it doesn’t follow that all matter is conscious. It would be similar to asserting that, because cake is matter, all matter is cake. You could dilute the definition of “cake” enough to make that true, but it would be missing most of the things we associate with cakes.

          1. Mike,
            Your defense is incoherent because it is predicated upon inductive reasoning. RAM is derived from deductive reasoning, following the rules of logical consistency, and therefore becomes a grounding, foundational architecture to which every thing must correspond without any exceptions if objectivity is to prevail.

            Here’s another word game Mike. Just because it can be scientifically demonstrated that if I jump into a swimming pool full of water ten times, and every time I jump in I get wet, doesn’t prove that I will get wet every time that I jump in. The only thing the experiment “proves” is that I got wet ten times. The rest is synthetic a priori judgements predicated upon logical consistency. It’s all we’ve got; induction has proven to be a straw man.

            Like I said Mike, to my own dismay, my models assert that homo sapiens are incapable of being objective, nor do they possess the ability to engage in logical consistency if that logic does not correspond to an intellectual construct they already believe to be true. It’s called prejudice, bigotry and bias, the stumbling blocks of intellectual progression.

            The sandbox of discourse is full of word games, good luck and have fun. I do not have time, nor do I enjoy this type of foolishness.

            Peace

  2. “The information generation theory is that this prediction feedback is what gives rise to conscious experience.”

    The problem as I’ve noted is that this doesn’t really explain how consciousness comes from bit shuffling. It may very well be a lot of what the brain does when it is conscious but it doesn’t explain how conscious experience arises from it.

    1. Their weaker claim is definitely a lot more plausible. To rise to actual consciousness requires something more. I do think information processing can provide it, but we have to resort to higher order theories to get there.

        1. I think GWT is right as far as it goes, but it’s incomplete, though it does identify an important part of the infrastructure. To me, HOT points toward an explanation for how we’re aware of our primal reactions (feelings) and how we’re aware of our awareness.

          Of course, no theory of consciousness is currently complete. They all carry promissory notes. But HOT’s strike me as more likely to pay off. Only time will tell.

    2. James, how about this: consciousness comes from purposeful (self-optimising) bit shuffling with two added loops, such that (1) some of the input bits represent the shuffling, and (2) some of the output bits change the shuffling.

      The bits are then the content of consciousness; the shuffling is the process of consciousness and subjective experience arises because the process is conscious of its own operation and content.
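
      A toy rendering of those two loops (my sketch with invented details, not the commenter’s code):

      ```python
      # The "shuffler" state: the rule it currently applies. Everything here is
      # an invented illustration of the two loops, not a model of real neurons.
      state = {"rule": 1}

      def shuffle(bits):
          # Loop 1: some input bits represent the shuffling itself.
          inputs = bits + [state["rule"]]
          # The shuffling: a trivial transform parameterized by the current rule.
          outputs = [(b + state["rule"]) % 2 for b in inputs]
          # Loop 2: an output bit changes the shuffling for the next step.
          state["rule"] = outputs[-1]
          return outputs

      out1 = shuffle([1, 0, 1])   # output depends on the current rule...
      out2 = shuffle([1, 0, 1])   # ...and the rule was changed by the prior output
      ```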

      1. I don’t see how adding some extra bits for whatever purpose makes consciousness come about. What does my experience of a blue sky with billowing clouds have to do with extra tracking bits?

          1. Your experience of a blue sky comes about from photons in certain energy patterns striking your retina, exciting patterns of photoreceptors and then ganglion cells, which result in patterns propagating up the optic nerve to the thalamus and then the occipital lobe, where the patterns excite various processing layers, forming image maps: predictive representations at various levels.

          The “bits about the bits” are additional processing making use of those predictive representations. This processing triggers various innate and learned dispositional firing patterns, feelings, about the visual input, what it feels like to see the blue sky. And there are bits about the bits about the bits, enabling you to be aware that you’re having the experience. And even more bits we’re using right now to talk about this stack of bit processing.
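
          One minimal way to picture that stack (an illustrative sketch; the class and labels are invented):

          ```python
          from dataclasses import dataclass
          from typing import Optional

          # Each level is a representation whose content is the level below it:
          # "bits about the bits," iterated.

          @dataclass
          class Representation:
              content: str
              about: Optional["Representation"] = None

          sky = Representation("blue sky with billowing clouds")               # sensory maps
          feeling = Representation("what it feels like to see it", sky)        # bits about the bits
          awareness = Representation("I am having this experience", feeling)   # bits about those
          discussion = Representation("talking about this stack", awareness)   # and so on
          ```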

          1. Who asserted every piece of information needs to be tracked? But when thinking about consciousness, there can’t be any free lunches. If we are aware of our awareness, then that implies representations of our representations.

        2. James: Conscious experience is experienced by, and only exists subjectively to, the you that is just your bit shuffling machine (neurally implemented), not flesh, blood and bone. It only needs to exist to the extent that anything is taken to exist by the bit shuffler.

          The conscious you is a bit shuffling machine that knows about itself and modifies itself.

          Qualia are multi-faceted because different aspects of bit-shuffling are made available in the same neural space: what is sensed, predicted, whether that is good or bad for us, possible action and attention sets, actual actions taken, likely consequences. That brings richness and meaning to our experience of a blue sky with billowing clouds.

          You don’t need more and more bits because it is recursive across cognitive time steps of a few hundred milliseconds, with reduced bit resolution at each recursion.

    3. James, part of the problem is terminology. Every time neurophilosophers refer to “prediction” I have to translate that for myself into “recognition mechanism”. To say that your brain is predicting there is a cat out there is really to say that your brain has a mechanism for recognizing cats. When this mechanism actually recognizes a cat, i.e., receives sufficient inputs, it generates an output which is the equivalent of saying “cat”. This output influences the sensory mechanisms, thus providing “feedback” making it more likely that those mechanisms will produce cat-appropriate signals. The important point of the paper, for me, is that inputs to trigger the “cat” recognizer do not have to come from sensory inputs, which is why you can think about cats without cats being present.
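
      That reading can be rendered as a toy sketch (mine, with invented inputs and thresholds, not anything from the paper):

      ```python
      def cat_recognizer(sensory_inputs, internal_inputs, threshold=0.5):
          """Fires when its combined inputs are sufficient; numbers are invented."""
          total = sum(sensory_inputs) + sum(internal_inputs)
          recognized = total >= threshold       # the output equivalent of saying "cat"
          # Feedback biases the sensory mechanisms toward cat-appropriate signals.
          feedback_gain = 0.2 if recognized else 0.0
          return recognized, feedback_gain

      # Seeing a cat: sensory evidence alone triggers the recognizer.
      seen, fb = cat_recognizer(sensory_inputs=[0.3, 0.4], internal_inputs=[])
      # Thinking about a cat: internally generated input triggers it, no cat present.
      imagined, fb = cat_recognizer(sensory_inputs=[], internal_inputs=[0.6])
      ```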

      As for Consciousness, the key is the representations. The authors “propose a hypothesis that a function of consciousness is to internally generate counterfactual representations”. I think it’s the generation of those representations that the authors would claim *is* Consciousness (strong version) or is otherwise associated with consciousness (weak version). For myself, I think those representations are the inputs to the consciousness-related event. The outputs might be actions, or memories, or newly learned concepts, but there would be no consciousness if there were no outputs based on those representations.

      *

  3. hypothesis that *a* function of consciousness is to internally generate counterfactual representations detached from the current sensory events.

    Emphasis added. I like this hypothesis, because it uses the right article before “function of consciousness”.

    1. I like it too but can you tell me what it means?

      It seems to apply only to organisms with a capability for imagination, which would probably be limited to humans. So it doesn’t seem it can be generalized more broadly. That’s okay, I guess, if you think only humans are conscious.

    2. Paul, I think you’re keying in to their more modest claim, which I agree is pretty plausible.

      James, there is considerable disagreement over when episodic memory evolved. Some limit it to humans, great apes, or other relatively intelligent species. Others see it in all mammals and birds. But episodic memory and imagination are basically the same mechanism. So the counter-factual part wouldn’t necessarily restrict consciousness to humans. And if we just focus on the generative part, it’s definitely broader.

  4. This sort of theory seems to me to be heading in the right direction. I’d move slightly further away from representing incoming sensory data, as really what is needed is attention to the right inputs, and discrimination between those attended inputs, sufficient to take the right and timely action, whether mental or physical. While predictive representation of sensory data is one way to look at that, the action based view seems more to the point, and means you don’t quite need a representation of sensory data as such, more the actionable causes of that data.

    The addition I’d want to make is that prediction explicitly needs to distinguish between self and world. I’m not sure we get to consciousness without that distinction.

    1. One of the things the hypothesis doesn’t really address is what motivates the counter-factual information generation, which I think is the action selection activity you mention. And I totally agree. That aspect of it is necessary for any accounting of what we call consciousness.

      On the self and the world, agreed. But for most animals, that only amounts to the bodily-self, which for them is sufficient. For humans, and perhaps a few other species, there is also the mental-self, which adds an extra dimension.

  5. “Drawing upon empirical literature, here, we propose that a core function of consciousness be the ability to internally generate representations of events possibly detached from the current sensory input.”

    “According to this view, consciousness emerged in evolution when organisms gained the ability to perform internal simulations using internal models, which endowed them with flexible intelligent behavior.”

    Aren’t these statements just repeating the common definition of consciousness? I don’t think either statement provides any additional insight.

    Given that a lot of what goes on in the brain is unconscious, should we draw the conclusion that the unconscious is unable to:

    1- Generate representations of external events
    2- Perform simulations
    3- Exhibit flexible intelligent behavior

    I don’t seem to feel consciously my brain generating the representations of my backyard. The representations seem to be generated unconsciously, then presented to consciousness. I spend some time simulating alternate scenarios (for example, which route to use to get to the grocery store), but I can easily enjoy the backyard without performing simulations, unless “performing simulations” means something really low level, like I need to simulate before each foot placement so I don’t fall on my face.

    Flexible intelligent behavior needs to be better defined. I’ve pointed out before that slime molds can exhibit flexible intelligent behavior.

    1. “Aren’t these statement just repeating the common definition of consciousness?”

      I think it’s a common definition. There isn’t one common definition of consciousness (other than near synonymous terms like “subjective experience”). I don’t think they meant this statement itself as an insight, but merely as a philosophical frame for what the paper discusses.

      I do agree that the simulations, in and of themselves, don’t make consciousness. (Although, as always, it comes down to how we define “consciousness.”) Certainly their preparation is mostly pre-conscious. We (usually) just enjoy the result. And there are likely multiple simulations happening (partially or in full) in parallel, many of which never make it into consciousness. Only some of the simulations “win” and influence behavior, and perhaps even a smaller number make it into the scope of introspection and are available for self report.
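
      As a toy picture of that parallelism (illustrative only; the names and scoring are invented):

      ```python
      import random

      random.seed(0)

      # Many candidate simulations run in parallel; one "wins" and influences
      # behavior, and only a few reach the scope of introspection and self report.
      simulations = [{"scenario": f"option {i}", "value": random.random()}
                     for i in range(8)]

      winner = max(simulations, key=lambda s: s["value"])       # influences behavior
      introspectable = sorted(simulations, key=lambda s: s["value"],
                              reverse=True)[:2]                 # available for report
      ```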

      On flexible intelligent behavior, my thinking is that it needs to involve situational predictions of cause and effect. I don’t see any evidence for anything like that in slime mold. It definitely behaves adaptively, but it never seems to rise above reflexive action. It does raise a caution about interpreting behavior in animals. As Dennett noted, it’s possible to have substantial competence without comprehension.

      1. I think we agree then that a lot of what the theory proposes for functions of consciousness are also done unconsciously. So we are still missing the critical part that causes something in the processing to emerge into consciousness from the background of unconscious processing.

        I don’t have an answer to that either. I’m just saying that most or all of the theories floating around seem to be describing things the brain (often a complex, sophisticated brain) is doing but they don’t seem to address the gap of why part of it becomes conscious and other parts don’t.

      2. Actually there might be something of an answer in the EM wave/L5 pyramidal neuron theory.

        It could be that all processing in these neurons is potentially conscious and is representational in the same way, but for something to rise to consciousness there needs to be a critical mass of neurons firing in sync to generate some threshold of EM potential.
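
        A toy rendering of that threshold idea (my sketch; the window and critical-mass numbers are invented):

        ```python
        import numpy as np

        def crosses_threshold(spike_times, window=0.01, critical_mass=100):
            """True if enough spikes land within any single window (sliding count)."""
            spike_times = np.sort(np.asarray(spike_times))
            # For each spike, count how many spikes fall within [t, t + window).
            counts = (np.searchsorted(spike_times, spike_times + window)
                      - np.arange(len(spike_times)))
            return counts.max() >= critical_mass

        rng = np.random.default_rng(0)
        sync_burst = rng.uniform(0.0, 0.005, size=150)   # 150 spikes within 5 ms
        background = rng.uniform(0.0, 1.0, size=150)     # 150 spikes over 1 s
        crosses_threshold(sync_burst)    # True: a critical mass fires in sync
        crosses_threshold(background)    # False: activity never synchronizes enough
        ```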

        1. I think for something to “rise to” consciousness, there needs to be processing about the processing. For me to say “I am conscious of X”, I must contain a representation of X, as well as a representation of my representation of X, otherwise how are the language centers getting information about me knowing about X? (This isn’t meant to be all inclusive. Obviously there are a whole lot of other representations needed as well.)

          1. There is already processing about the processing all through the processing as you explained about the blue experience in another comment. That wouldn’t make all of it conscious unless you are saying only processing about the processing is conscious. But that would imply that multiple intermediate layers of processing would also be conscious unless there is some magic, final processing that makes it conscious.

            This statement, “I must contain a representation of X, as well as a representation of my representation of X,” seems to be talking more about meta-cognition. Also, an implicit self.

            This still is similar to adding bits about the bits argument. I don’t see how some added level of processing automatically involves consciousness.

          2. Pondering this, I think all of the above are required, but ultimately it comes down to a certain kind and locus of processing. In other words, like they used to say for retail success: location, location, location. In this case, it’s functionality and location. This makes me realize why so many neuroscientists gravitate toward global workspace theory. It’s not controversial that consciousness seems correlated with massive activation of the frontoparietal network. Maybe in the end, that’s all it’s about.

          3. I’m open to almost any idea but not persuaded so far by the frontal area arguments. And probably nothing I write will persuade you otherwise either.

            Is there something especially unique about the frontoparietal area, other than it just happens to be what shows up on brain scans? Some special neurons? A unique organization? An extra layer of something? Maybe if there was really something different in a major way, the theory would make more sense.

            My impression is that the structure of the cortex – the layers and neurons – is pretty similar everywhere.

            The frontoparietal areas do seem to have special import for the cognitive abilities of primates, and humans in particular, so there may be an anthropocentric bias at work in singling them out. Certainly the areas are bound to show up big in consciousness studies because they are a big part of the cognitive capabilities of humans.

          4. Christof Koch discusses that the posterior regions have more of a grid structure, which he thinks is significant. And the PFC, from what I’ve read, is more interconnected, both among its own regions as well as with others, than other parts of the cortex. But fundamentally it’s still the same six layer structure as everywhere else. I think in the case of the frontoparietal regions overall, it’s their location, the fact that they’re at the center of sensory and motor processing regions of the cortex, and it’s where the planning communication between the sensorium and higher order motorium happens.

            Primates aren’t the only species that have frontoparietal networks. Any mammal is going to have them. All vertebrates (I think) at least have a pallium. And if consciousness is a frontoparietal phenomenon in mammals, it doesn’t mean that some other region can’t take that role in other species.

  6. I like this … of course I do, since it dovetails nicely with my ideas. It seems pragmatic and provides ways to explore, so this seems like good hypothesizing to me. The evolutionary advantages provided are built in, so the follow-up questions will probably focus upon how such a thing evolved. I am not sure that a mutation is needed; it may just be emergent properties coincident with other adaptations.

    1. I’m not sure if I’ve seen your ideas in this area before. Did you ever do a blog post on them?

      I tend to doubt there was only one mutation. It seems more likely to me that there were many spanning long stretches of evolutionary history.
