The information generation theory of consciousness

James of Seattle called my attention to an interesting paper in the Neuroscience of Consciousness journal: Information generation as a functional basis of consciousness:

  • Drawing upon empirical research into consciousness, we propose a hypothesis that a function of consciousness is to internally generate counterfactual representations detached from the current sensory events.

  • Interactions with generated representations allow an agent to perform a variety of non-reflexive behaviors and cognitive functions associated with consciousness, such as intention, imagination, planning, short-term memory, attention, curiosity, and creativity.

  • Applying the predictive coding framework, we propose that information generation is performed by top-down predictions in the brain.

  • The hypothesis suggests that consciousness emerged in evolution when organisms gained the ability to perform internal simulations using generative models.

The theory described broadly matches an idea I’ve pondered several times: that consciousness is about planning, that is, about performing simulations of action-sensory scenarios, enabling non-reflexive behavior. The authors, though, situate their ideas within the frameworks put forth by global workspace and predictive coding theories.

But their theory is more specifically about the internal generation of content: content that might be a prediction about current sensory input, or that might be counterfactual, that is, content not currently being sensed.  In many ways this is similar to the predictive coding framework, but it’s not identical.

In the brain, sensory information flows into early sensory processing regions and proceeds up through neural processing layers into higher-order regions.  But this encoding stage is only a small fraction of the processing happening in these regions.  Most of it is feedback: top-down decoding that flows from the higher-order areas back down to the early sensory regions.

In predictive coding theory, this feedback propagation is the prediction about what is being sensed, and the feedforward portion is actually error correction to the prediction.  The idea is that, early in life, most of what we get is error correction, until the models get built up; gradually, as we mature, the predictions become more dominant.

Importantly, when we’re imagining something, that imagined counterfactual content is all feedback propagation, since what’s being imagined generally has little to no relation to the sensory data coming in.  Imagination is less vivid than perceiving current sensations because the error correction doesn’t reinforce the imagery.  (Interestingly, the authors argue that imagery in dreams is more vivid because there’s no sensory error correction to dilute it.)
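The two modes described above (perception as top-down prediction refined by feedforward error correction, and imagination as top-down generation with no sensory anchor) can be sketched in a toy linear model. This is only an illustrative sketch, not the paper’s actual model; the weights, learning rate, and iteration count are all arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: higher-order causes r produce a top-down
# prediction of the sensory array via fixed decoding weights W.
# Only the prediction error flows forward, and it is used to revise r.
W = rng.normal(size=(8, 3))                 # top-down (decoding) weights
true_causes = np.array([1.0, -0.5, 2.0])
sensory = W @ true_causes                   # what the senses report

# Perception: iterate prediction and error correction until the
# higher-order estimate explains the input.
r = np.zeros(3)
for _ in range(500):
    prediction = W @ r                      # feedback: predicted input
    error = sensory - prediction            # feedforward: residual error
    r += 0.05 * (W.T @ error)               # revise estimate from error

# Imagination: no sensory input, so no error correction. Content is
# generated purely top-down by choosing causes and decoding them.
imagined_causes = np.array([0.0, 3.0, -1.0])
imagined_content = W @ imagined_causes
```

With enough iterations the estimate `r` converges to the true causes, illustrating perception as prediction refined by error. The imagined content, by contrast, is produced by the same decoding pathway with nothing pushing back against it, which is the sense in which imagination is “all feedback.”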

The information generation theory is that this prediction feedback is what gives rise to conscious experience.  This theory could be seen as similar to recurrent processing theories, although the authors seem to deliberately distance themselves from such thinking by making their point with a non-recurrent example, specifically splitting the encoding and decoding stages into two feedforward-only networks.

The authors note that there are strong and weak versions of their hypothesis.  The weaker version is that this kind of processing is a necessary component of consciousness, and is therefore an indicator of it.  The stronger version is that this kind of information generation is consciousness.  They argue that further research is necessary to test both versions.

The hypothesis does fit with several other theories (each having their own strong and weak claim).  The authors even try to fit it with integrated information theory, although they admit the details are problematic.

This is an interesting paper and theory.  My initial reaction is that their weaker hypothesis seems far more plausible, although in that stance it could be seen as an elaboration of other theories, albeit one that identifies important causal factors.  The stronger hypothesis, I think, would require substantially more justification as to why that kind of processing, in and of itself, is conscious.

That’s my initial view.  It could change on further reflection.  What do you think?

This entry was posted in Zeitgeist.

22 Responses to The information generation theory of consciousness

  1. Lee Roetcisoender says:

    This is just another example of conflating mind with consciousness which only adds to the confusion and obfuscates meaning. Consciousness is a word, and we are going to have to learn how to use that word correctly if we expect to move forward.

    Peace


    • But what is the right way to use it? And what makes that usage objectively the right way over other ways espoused by others?


      • Lee Roetcisoender says:

        “But what is the right way to use it?”

        Consciousness is universal, not just limited to mind.

        “And what makes that usage objectively the right way over other ways espoused by others?”

        The correct answer is reality/appearance metaphysics. Ontology is what distinguishes one “thing” from another “thing”, not epistemology. According to RAM, there is no ontological distinction between matter and mind. If there is no ontological distinction, then the mechanisms that govern motion which result in form within the material world of matter must be the same mechanisms that govern motion which result in form within the mental world of mind. This is a position which you previously espoused to be correct.

        Therefore, predicated upon the principles of logical consistency, in other words objectivity itself; if one chooses to correlate consciousness with mind, one must be equally compelled to correlate consciousness with matter. That is the only logical conclusion which can be derived from the ontology. Agreed?

        But here’s the problem: My models assert that in light of the weight of overwhelming evidence, homo sapiens are incapable of being objective. What do you think?

        Peace


        • ” if one chooses to correlate consciousness with mind, one must be equally compelled to correlate consciousness with matter.”

          The logical issue I see here is that just because consciousness is matter, it doesn’t follow that all matter is conscious. It would be similar to asserting that, because cake is matter, all matter is cake. You could dilute the definition of “cake” enough to make that true, but it would be missing most of the things we associate with cakes.


          • Lee Roetcisoender says:

            Mike,
            Your defense is incoherent because it is predicated upon inductive reasoning. RAM is derived from deductive reasoning, following the rules of logical consistency, and therefore becomes a grounding, foundational architecture to which every thing must correspond without any exceptions if objectivity is to prevail.

            Here’s another word game, Mike. Just because it can be scientifically demonstrated that if I jump into a swimming pool full of water ten times, every time I jump in I get wet, doesn’t prove that I will get wet every time I jump in. The only thing the experiment “proves” is that I got wet ten times. The rest is synthetic a priori judgments predicated upon logical consistency. It’s all we’ve got; induction has proven to be a straw man.

            Like I said Mike, to my own dismay, my models assert that homo sapiens are incapable of being objective, nor do they possess the ability to engage in logical consistency if that logic does not correspond to an intellectual construct they already believe to be true. It’s called prejudice, bigotry and bias, the stumbling blocks of intellectual progression.

            The sandbox of discourse is full of word games, good luck and have fun. I do not have time, nor do I enjoy this type of foolishness.

            Peace


  2. James Cross says:

    “The information generation theory is that this prediction feedback is what gives rise to conscious experience.”

    The problem as I’ve noted is that this doesn’t really explain how consciousness comes from bit shuffling. It may very well be a lot of what the brain does when it is conscious but it doesn’t explain how conscious experience arises from it.


    • Their weaker claim is definitely a lot more plausible. To rise to actual consciousness requires something more. I do think information processing can provide it, but we have to resort to higher order theories to get there.


      • James Cross says:

        To me, higher order theories and GWT seem to be just abstract descriptions of how consciousness works, without addressing the root problem of what it is and how it arises.


        • I think GWT is right as far as it goes, but it’s incomplete, though it does identify an important part of the infrastructure. To me, HOT points toward an explanation for how we’re aware of our primal reactions (feelings) and how we’re aware of our awareness.

          Of course, no theory of consciousness is currently complete. They all carry promissory notes. But HOT’s strike me as more likely to pay off. Only time will tell.


    • PJMartin says:

      James, how about this: consciousness comes from purposeful (self-optimising) bit shuffling with two added loops, meaning that (1) some of the input bits represent the shuffling, and (2) some of the output bits change the shuffling.

      The bits are then the content of consciousness; the shuffling is the process of consciousness and subjective experience arises because the process is conscious of its own operation and content.


      • James Cross says:

        I don’t see how adding some extra bits for whatever purpose makes consciousness come about. What does my experience of a blue sky with billowing clouds have to do with extra tracking bits?


        • Your experience of a blue sky comes about from photons in certain energy patterns striking your retina, exciting patterns of photoreceptors and then ganglion cells, which results in patterns propagating up the optic nerve to the thalamus and then the occipital lobe, where the patterns excite various processing layers, forming image maps, predictive representations at various levels.

          The “bits about the bits” are additional processing making use of those predictive representations. This processing triggers various innate and learned dispositional firing patterns, feelings, about the visual input, what it feels like to see the blue sky. And there are bits about the bits about the bits, enabling you to be aware that you’re having the experience. And even more bits we’re using right now to talk about this stack of bit processing.


    • James, part of the problem is terminology. Every time neurophilosophers refer to “prediction” I have to translate that for myself into “recognition mechanism”. To say that your brain is predicting there is a cat out there is really to say that your brain has a mechanism for recognizing cats. When this mechanism actually recognizes a cat, i.e., receives sufficient inputs, it generates an output which is the equivalent of saying “cat”. This output influences the sensory mechanisms, thus providing “feedback” making it more likely that those mechanisms will produce cat-appropriate signals. The important point of the paper, for me, is that inputs to trigger the “cat” recognizer do not have to come from sensory inputs, which is why you can think about cats without cats being present.

      As for Consciousness, the key is the representations. The authors “propose a hypothesis that a function of consciousness is to internally generate counterfactual representations ”. I think it’s the generation of those representations that the authors would claim *is* Consciousness (strong version) or is otherwise associated with consciousness (weak version). For myself, I think those representations are the inputs to the consciousness-related event. The outputs might be actions, or memories, or newly learned concepts, but there would be no consciousness if there were no outputs based on those representations.

      *


  3. Paul Torek says:

    hypothesis that *a* function of consciousness is to internally generate counterfactual representations detached from the current sensory events.

    Emphasis added. I like this hypothesis, because it uses the right article before “function of consciousness”.


    • James Cross says:

      I like it too but can you tell me what it means?

      It seems to apply only to organisms with a capability for imagination, which would probably be limited to humans. So it doesn’t seem it can be generalized more broadly. That’s okay, I guess, if you think only humans are conscious.


    • Paul, I think you’re keying in to their more modest claim, which I agree is pretty plausible.

      James, there is considerable disagreement over when episodic memory evolved. Some limit it to humans, great apes, or other relatively intelligent species. Others see it in all mammals and birds. But episodic memory and imagination are basically the same mechanism. So the counter-factual part wouldn’t necessarily restrict consciousness to humans. And if we just focus on the generative part, it’s definitely broader.


  4. PJMartin says:

    This sort of theory seems to me to be heading in the right direction. I’d move slightly further away from representing incoming sensory data, as really what is needed is attention to the right inputs, and discrimination between those attended inputs, sufficient to take the right and timely action, whether mental or physical. While predictive representation of sensory data is one way to look at that, the action-based view seems more to the point, and means you don’t quite need a representation of the sensor data as such, more the actionable causes of that data.

    The addition I’d want to make is that prediction explicitly needs to distinguish between self and world. I’m not sure we get to consciousness without that distinction.


    • One of the things the hypothesis doesn’t really address is what motivates the counterfactual information generation, which I think is the action selection activity you mention. And I totally agree: that aspect is necessary for any accounting of what we call consciousness.

      On the self and the world, agreed. But for most animals, that only amounts to the bodily-self, which for them is sufficient. For humans, and perhaps a few other species, there is also the mental-self, which adds an extra dimension.


  5. James Cross says:

    “Drawing upon empirical literature, here, we propose that a core function of consciousness be the ability to internally generate representations of events possibly detached from the current sensory input.”

    “According to this view, consciousness emerged in evolution when organisms gained the ability to perform internal simulations using internal models, which endowed them with flexible intelligent behavior.”

    Aren’t these statements just repeating the common definition of consciousness? I don’t think either statement provides any additional insight.

    Given that a lot of what goes on in the brain is unconscious, should we draw the conclusion that the unconscious is unable to:

    1- Generate representations of external events
    2- Perform simulations
    3- Exhibit flexible intelligent behavior

    I don’t seem to consciously feel my brain generating the representations of my backyard. The representations seem to be generated unconsciously, then presented to consciousness. I spend some time simulating alternate scenarios (for example, which route to use to get to the grocery store), but I can easily enjoy the backyard without performing simulations, unless “performing simulations” means something really low level, like needing to simulate before each foot placement so I don’t fall on my face.

    Flexible intelligent behavior needs to be better defined. I’ve pointed out before that slime molds can exhibit flexible intelligent behavior.


    • “Aren’t these statement just repeating the common definition of consciousness?”

      I think it’s a common definition. There isn’t one common definition of consciousness (other than near synonymous terms like “subjective experience”). I don’t think they meant this statement itself as an insight, but merely as a philosophical frame for what the paper discusses.

      I do agree that the simulations, in and of themselves, don’t make consciousness. (Although, as always, it comes down to how we define “consciousness.”) Certainly their preparation is mostly pre-conscious. We (usually) just enjoy the result. And there are likely multiple simulations happening (partially or in full) in parallel, many of which never make it into consciousness. Only some of the simulations “win” and influence behavior, and perhaps even a smaller number make it into the scope of introspection and are available for self report.

      On flexible intelligent behavior, my thinking is that it needs to involve situational predictions of cause and effect. I don’t see any evidence for anything like that in slime molds. They definitely behave adaptively, but never seem to rise above reflexive action. It does raise a caution about interpreting behavior in animals. As Dennett noted, it’s possible to have substantial competence without comprehension.


      • James Cross says:

        I think we agree then that a lot of what the theory proposes as functions of consciousness is also done unconsciously. So we are still missing the critical part that causes something in the processing to emerge into consciousness from the background of unconscious processing.

        I don’t have an answer to that either. I’m just saying that most or all of the theories floating around seem to be describing things the brain (often a complex, sophisticated brain) is doing but they don’t seem to address the gap of why part of it becomes conscious and other parts don’t.

