The facilitation hypothesis

Jonathan Birch has an interesting paper in Noûs: The search for invertebrate consciousness.  Birch notes that there is no consensus on whether any invertebrates are conscious, and no agreement on a methodology for establishing whether they are.

He starts off by assessing the difficulties of applying human-centric theories, such as global workspace, which have no floor for how sophisticated the mechanisms need to be for consciousness.  (For global workspace: which specialty systems are necessary for the workspace, and how many are required?)  On the other hand, many theories from animal research, such as Merker’s midbrain-centered one, aren’t well supported by evidence from human studies.

He also sees problems with totally theory-neutral approaches.  Researchers often assume that behaviors associated with certain experiences in humans indicate the same experiences in other species.  But often these behaviors are automatic and don’t require the experience.  An example is learned avoidance behavior, which in humans is typically accompanied by an unpleasant experience, but similar behavior can be observed in a rat spinal cord that’s been disconnected from its brain.

To thread this needle, Birch advocates an approach he calls “theory-light”.  He sees this approach as being based on a particular hypothesis:

The hypothesis I have in mind I call the facilitation hypothesis. The motivating idea is that phenomenal consciousness does something for cognition, given the actual laws of nature, but precisely what it does is a question to which we do not yet have definitive answers.

…The claim is that, holding all else fixed (e.g. the stimulus, the difficulty of the task), a cluster of cognitive abilities is facilitated when the stimulus is perceived consciously.

In other words, consciousness provides benefits.  It isn’t an epiphenomenon, a side effect that has no causal effects for the organism.  This means that a way to establish consciousness in an animal is to examine its capabilities.  (This seems resonant with Daniel Dennett’s hard question strategy.)

Birch is interested in direct ties to human experience, so he’s looking for capabilities associated in humans with phenomenal awareness.  The primary capability in humans, and the gold standard for evidence about conscious experience, is verbal report.  Of course, that’s only available in humans.  However, there are others that tend to only be possible in situations where verbal report is also possible.  A few candidates that Birch identifies:

  1. Trace conditioning: in which an organism is able to learn an association between two sensory stimuli separated by a time interval, such as learning that a tone is followed by a puff of air to the eye one second later.
  2. Rapid reversal learning: in which the animal, once it has learned an association between two stimuli, is able to quickly adapt when the association ceases to hold.  This would be a measure of how quickly Pavlov’s dogs adapt when the bell no longer signals the food treat.  (A minimal simulation of this appears after the list.)
  3. Cross-modal learning: learning associations across sensory modalities, such as associating a visual stimulus with an auditory one (e.g. the bark of a dog with the sight of one).
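
To make the reversal-learning idea concrete, here’s a minimal Rescorla-Wagner-style simulation.  This is my own sketch, not anything from Birch’s paper; the learning rate and trial counts are arbitrary choices.

    # Minimal Rescorla-Wagner associative learner (illustrative sketch only).
    def condition(trials, v=0.0, learning_rate=0.3):
        # Associative strength V moves toward each trial's outcome.
        history = []
        for outcome in trials:  # 1 = food follows the bell, 0 = no food
            v += learning_rate * (outcome - v)  # prediction-error update
            history.append(v)
        return history

    # Acquisition: the bell reliably predicts food.
    acquisition = condition([1] * 20)
    # Reversal: the bell stops signaling food, and V must unlearn.
    reversal = condition([0] * 20, v=acquisition[-1])

    print(f"V after acquisition:       {acquisition[-1]:.2f}")
    print(f"V after 5 reversal trials: {reversal[4]:.2f}")

How quickly V falls during the reversal phase is the kind of quantity a rapid reversal learning experiment measures.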

Birch admits that any one of these might be able to happen unconsciously, but the more that are present, the stronger the evidence for phenomenal experience.  He goes on to examine some of the evidence for these capabilities in bees, and comes to the conclusion that there is limited, but not yet conclusive, evidence for bee consciousness.

This approach is similar to the ones taken by Todd Feinberg and Jon Mallatt in The Ancient Origins of Consciousness, who used global operant learning, value trade-off behaviors, and other capabilities to establish affective consciousness (sentience), as well as the approach by Simona Ginsburg and Eva Jablonka in The Evolution of the Sensitive Soul, which focused on unlimited associative learning.  (Mallatt and Jablonka are actually acknowledged as reviewers at the end of the paper.)

So the overall approach here isn’t really new.  But Birch’s set of capabilities seems to come from casting a broader net.  He seems to be identifying cases of classical learning (sensory-sensory associations), but in more sophisticated forms.  They may be easier to test for than operant learning, which requires an action component.

That said, Birch’s approach strikes me as typically binary in its thinking: either the animal is conscious or it’s not.  This makes his approach perhaps more theory-laden than he thinks.  It’s hard to look at consciousness in something like a bee without holding some particular definition of “consciousness.”  We know a bee’s information processing is going to be some subset of the human version.  But there may be no fact of the matter about whether any particular subset is both necessary and sufficient for consciousness.  (At least until someone conclusively discovers a “consciousness field” or some other objective basis for phenomenal consciousness.)

Birch argues that we don’t need more theory, nor more “undirected data gathering”, but investigation along the lines of his theory-light strategy.  Given the point above, I’m not sure that’s true.  I think the most productive thing we can do is learn as much about the capabilities of bees and other invertebrates as possible, and then, with those established, discuss what kind of consciousness they have.

But maybe I’m missing something?

43 thoughts on “The facilitation hypothesis”

  1. Check out: Nova, What Are Animals Saying


  2. We don’t already know the capabilities of bees and ants?

    Actually I do not think one approach is a good idea. I think many approaches are better because, well, we don’t know what we are looking for. So, we spread out, look around, and if we find anything promising, we pursue that, too.

    At times theory guides experiment. At other times, experiments guide theory. Currently we do not have enough of either. (Or am I missing something? :o)


  3. I think the biggest problem that Birch, and the vast majority of thinkers on the subject of consciousness, are making is, as you mention, binary thinking. But I think I mean this in a way that’s different from what you were suggesting. The problem I’m referring to is thinking of an individual organism as a whole as conscious or not, as opposed to specific isolated systems being conscious or not.

    I’m referring to Minsky’s society of mind idea. If “consciousness” is about certain kinds of processes, then you can have two systems that are “conscious”, but one experience would be unconscious relative to the other. You could have two such systems in one brain. The most obvious example would be the two hemispheres of the brain. When the corpus callosum is cut, the separate halves no longer have access to the experiences of the other half. A less obvious example is the one revealed in the blindsight experiments. The system that has access to the verbal report mechanism (the autobiographical self) no longer has access to the experiences of the visual cortex, but those experiences still happen and control movement.

    That said, I’m down with Birch’s theory-light approach and facilitation hypothesis. If only we had a theory that provided the basic structure of a conscious-type process which could explain how experiences become affordances of cognition.

    *
    [checks pockets … Hey! Look!]


    1. “but one experience would be unconscious relative to the other”

      I like the sound of that. I think it’s the right frame of mind to look at this stuff.

      On blindsight, if we eschew mention of consciousness, the actual process doesn’t really seem that mysterious. As you note, the portion of the system with access to the reporting functionality has lost access to visual information. That lost access also seems to prevent object identification and the use of visual information for planning. But lower level subcortical processes can still process some of the visual information, enabling reflexive reactions, such as object avoidance, etc.

      If I recall correctly, your criteria to call something consciousness involve it having representations, but your conception of representation almost seems equivalent to symbol, or maybe even just information. Or do I have that wrong?

      My version of representation is more akin to a model, image map, or predictive framework. I do think that version is a very important component of what we call consciousness. But our consciousnesses, the ones having this conversation, seem to involve several interacting systems that use representations, some with representations of things going on in the other systems.

      Sometimes I wonder if we should just talk about reportability.

      Anyway, your theory explains how experiences become affordances?


      1. Yes, my basic unit involves what is equivalent to a symbol, but as an intermediary. This intermediary is an affordance, by virtue of the mutual information it carries. An experience includes the generation and the interpretation(s) of the symbol, with specific definitions for symbol and interpretation.

        These units can be simple or complex. Your version would be a subset of complex units. Specifically, what you call a “model” is, I think, a mechanism which generates a “symbol”, which symbol may be a specific firing pattern in a set of neurons, or it may be the dispersed output of a single neuron. I think model-type mechanisms are probably best described as unitrackers, because (I think) they are likely to be individual pattern recognition units. The output of the pattern recognition unit is the symbol. How it is used, i.e., interpreted, determines the function, whether it be simple memory, further input to next-level pattern recognition, feedback to prior pattern recognition (so, prediction), motor control, etc.

        My statement that experiences could be affordances of cognition was a little fuzzy. In one aspect, the symbols themselves can be considered affordances for interpretations, and specific interpretations could be called cognition. In these cases the experience *is* the cognition. Alternatively, an experience could be memorialized, and that memory could become the input for creating a new symbol, with its new cognitive-type interpretation.
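
        If it helps, here’s a toy sketch of how I picture a unitracker and its interpretations in code. All the names here are my own illustrative inventions, not an established vocabulary or API.

            # Toy unitracker: a pattern-recognition unit whose output (the
            # "symbol") can be consumed by multiple interpreters.
            def unitracker(inputs, weights, threshold=0.5):
                # Fire (emit the symbol) when the input pattern matches well enough.
                activation = sum(i * w for i, w in zip(inputs, weights))
                return activation >= threshold  # the symbol: a binary firing event

            def store_memory(symbol, memory):  # one interpretation: simple memory
                memory.append(symbol)

            def drive_motor(symbol):  # another interpretation: motor control
                return "approach" if symbol else "ignore"

            memory = []
            symbol = unitracker(inputs=[1, 0, 1], weights=[0.4, 0.3, 0.3])
            store_memory(symbol, memory)
            print(drive_motor(symbol))  # the same symbol, interpreted two ways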

        *
        [btw, have you seen this paper? https://www.cell.com/action/showPdf?pii=S1364-6613%2820%2930175-3. I’m beginning to think there may be a one-to-one correspondence between pyramidal cells and unitrackers.]


        1. It seems like we agree on a lot ontologically, but the terminology often trips me up. I can see what you mean about the predictive models converging on a unitracker, what I think of as an association somewhere, a mental concept, a neural connection, or complex of connections, that light up when all the associations that add up to that concept have previously fired, and which may stay lit up as the entire complex recurrently keeps itself activated.

          When I think of affordances, I think of things in the environment that are useful to the organism. But I could see a model of the perception of one of those things converging in the same way on a unitracker / association / concept. We might loosely refer to that association as the affordance.

          I do agree that the experience is the cognition. I think that’s true even when it involves memory, which is really just the perceptual patterns that, due to strengthened synapses, have become easy to retroactivate. A lot of the higher level ones involve categorization of the percept, much of which I think is cognitive. In my mind, you don’t get phenomenality without cognitive access. They are different sides of the same coin.

          I hadn’t seen that paper. Thanks! Skimming it, some of the concepts seem familiar. It sounds like it’s a lower level theory compatible with the global theories of consciousness (GWT, IIT, etc). I’ll have to go through it more carefully.


  4. I like the theory-light approach and the facilitation hypothesis. But I don’t get why the focus on associative learning. It seems to me that what phenomenal consciousness most importantly facilitates is value trade-offs. For example, whether it is worth getting bitten by mosquitoes in order to weed the garden and grow fresh food. Todd Feinberg and Jon Mallatt and Simona Ginsburg and Eva Jablonka are mining gold; Birch is scraping up silver. Not that any “money” should be left “on the table”, but it can be useful to prioritize.


    1. Just to be clear, associative learning is Ginsburg and Jablonka’s focus, and the value trade-off processing, which I agree is key for affective processing, was something Feinberg and Mallatt focused on. (And Ginsburg and Jablonka too, since they used F&M’s review of the research.)

      But the learning and value trade-off decisions are tightly related. In order to make such a trade-off, you have to have learned the relevant values. If you didn’t learn them, then you’re just reacting reflexively.

      Of course, there’s no guarantee that type of learning is conscious. There is model-free reinforcement learning, which may be what the simplest organisms capable of learning these things are doing.
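
      To make “model-free” concrete, here’s a toy value learner that caches action values from reward alone, with no model of why an option pays off. It’s just a sketch; the actions, reward numbers, and parameters are invented for illustration.

          import random

          # Model-free value learning: cache action values from reward alone.
          # The actions and reward numbers are invented for illustration.
          values = {"weed_garden": 0.0, "stay_inside": 0.0}
          rewards = {"weed_garden": lambda: 1.0 - 0.6 * random.random(),  # food minus bites
                     "stay_inside": lambda: 0.2}

          alpha = 0.1  # learning rate
          for _ in range(500):
              # Mostly exploit the best-looking action, occasionally explore.
              if random.random() > 0.1:
                  action = max(values, key=values.get)
              else:
                  action = random.choice(list(values))
              r = rewards[action]()
              values[action] += alpha * (r - values[action])

          print(values)  # the trade-off is learned with no explicit model of why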


  5. I believe that the multidimensional approach to animal consciousness you mentioned in a previous article would be better to determine whether invertebrates have consciousness or not.

    From the way you summarized the article, I believe there are some interesting things there, but an approach to whether bees are conscious or not without having a clear definition of consciousness looks quite problematic. We are still discussing the nature of human consciousness, and trying to assess animal consciousness with so much still unsettled feels like it won’t be fruitful at all.


    1. I agree the dimensions are a good way to assess the consciousness of a particular species. The trick is getting empirical data on those dimensions. But it’s a good idea for scientists to design their experiments to get as close as possible to giving us info on the dimensions. I think that falls into the overall need to explore their capabilities in every way we can.

      I find myself constantly returning to the conclusion that there is no one right definition of consciousness. The ones everyone agrees on, such as “something it is like”, are hopelessly vague. As soon as you try to get more precise and objective about it, you end up invoking theories and including or excluding things in a way that violates intuitions.

      In the end, consciousness ends up being in the eye of the beholder. Scientists can objectively study specific capabilities, such as sensory discrimination, attention, memory, learning, imagination, metacognition, reportability, and others, and learn a lot about them. But we’ll argue endlessly about what that means for consciousness.


  6. I would think language is very much tied up with consciousness. If an animal does not have language, then it is hard to see how it can be phenomenally conscious. How can it have the thoughts and ideas that I have been having as I read your blog or type these words now? Of course, it may act like it is conscious, but then humans do that all the time. There may be some internal language of thought for the animal but without any external verbal report, why would we need to propose this?


    1. There are a lot of people who see consciousness entangled with language. I definitely think much of human cognition rests on it, or more precisely, on symbolic thought, of which language is the primary use.

      But if you introspect carefully, you’ll find plenty of instances of you imagining things without language being in the mix. Consider the color red, the redness of red. Now consider how you would describe that to a person born blind. Or a toothache for someone who’s never experienced one. Or a backache to a kid who’s never had one.

      These are difficult because language ultimately refers to aspects of experience, experience which is pre-language.


      1. There are two ways we can conceive of a mind seeing red:

        1. Seeing this colour: ***** (the asterisks are supposed to be red)
        2. Processing this bit of language: red

        I think the two are significantly different. 1 is a purely physical process of the light waves of a certain frequency hitting the retina and being processed by the brain. 2 is a linguistic thought that tokens an experience as ‘red’.

        We have to be careful what we are discussing. Any non-conscious being can experience 1 but you need language to experience 2.

        I could explain 1 to a blind person (or anyone for that matter) but I need language to do that. It would not be an experience of 1. It would just be a linguistic description. Whenever we talk about ‘red’ on these boards, or any other thing, we are only working in the linguistic domain. It is not the actual experience. That is ‘pre-language’ as you say. So we cannot rightly talk about 1, we can only experience it.

        This is why I say that language is consciousness and non-linguistic beings are not conscious. They can experience 1 but they cannot do 2.


        1. A lot of people would say experiencing 1 is consciousness. Admittedly, it comes down to how we define “consciousness.”

          I agree that you could explain 1 to a blind person, but it would be difficult. You would have to convey a lot of information to them, and the current limitations of scientific knowledge would be an obstacle. The old adage: “A picture is worth a thousand words,” comes to mind. In some cases, it may be worth millions.


  7. As you might guess I think consciousness is fairly widespread among animals.

    I’m with Llinás when he says he thinks it is what nervous systems are all about. It is about learning in the broadest sense of the term. Consciousness is how organisms understand their environment and make the adjustments required to carry through with behaviors: predominantly pre-wired ones in simple organisms, and complicated learned ones in more complex organisms. So even though the spider might make webs by instinct, it still needs integration of senses and limbs to anchor the web and build its structure. Consciousness in some form is what provides that.


    1. One good way to define consciousness in a way that meets many people’s intuitions is: life impulses + prediction. So a plant is alive and has some intelligence, but it seems to be all just automatic behavior. Same for unicellular organisms and even very simple animals. But as soon as distance senses come into the picture, we start getting prediction (which I think Llinás was onboard with), most notably, learned predictions.

      Of course, even a lot of that can happen in humans in a manner that isn’t reportable, so finding where to draw the line remains a difficult matter.


          1. BTW, early in his book, Llinás has a discussion about motricity.

            I didn’t realize much about how movement actually works in biology until I read that. It turns out when we move – raise an arm, for example – it is not actually done smoothly but rather in a jerking fashion controlled by rhythmic pulses from nerves. This has as much to do with how muscles work as anything else. But in this we may be seeing the origins of brain rhythms that support consciousness.

            In the last chapter, Llinás speculates about consciousness in the non-biological. He thinks it might be possible but it will require additional understanding of the analog and probabilistic nature of the brain, and additional understanding of the issues of reliability/unreliability. I may post something on this last issue and don’t want to get into it too far now. But he references Warren McCulloch from 1965 if you wanted to look into it.


        1. I will just assume, then, that having the nervous system is not sufficient, but the organism must also be doing the seeing or hearing and/or moving via this nervous system.

          So do we have any idea what kind of nervous system is necessary? Think of an alien creature which has a nervous system, but some of the parts are significantly different. If it looks and acts just like a terrestrial creature, how will you decide if it is conscious?

          *


        2. Okay, you just made my week! [get ready for more exclamation points]

          I googled Warren McCulloch 1965 and got this paper by Seymour Papert (Seymour Papert!): “Introduction to Embodiments of Mind by Warren S. McCulloch”, http://papert.org/articles/embodiments.html. I highly recommend this paper to anyone interested in computational/cybernetic theories of mind.

          Some pertinent quotes:

          “The common feature of these proposals is their recognition that the laws governing the embodiment of mind should be sought among the laws governing information rather than energy or matter.”

          [Boom! But note: laws governing matter are the laws governing information, but we don’t have to get into that here.]

          “The principal conceptual step was the recognition that a host of physically different situations involving the teleonomic [!!!] regulation of behavior in mechanical, electrical, biological, and even social systems should be understood as manifestations of one basic phenomenon: the return of information to form a closed control loop.”

          [I thought “teleonomic” was a more modern coinage. He uses it here assuming it needs no explanation.]

          “The liberating effect of the mode of thinking characteristic of the McCulloch and Pitts theory can be felt on two levels. On the global level it permits the formulation of a vastly greater class of hypotheses about brain mechanisms. On the local level it eliminates all consideration of the detailed biology of the individual cells from the problem of understanding the integrative behavior of the nervous system. [!!!] This is done by postulating a hypothetical species of neuron defined entirely by the computation of an output as a logical function of a restricted set of input neurons. The construction of neural circuits using schematic neurons specified by their conditions of firing was not in itself either original or profound; these had often been used diagrammatically to illustrate such simple things as reflex arcs. The step that needed boldness of conception and mathematical acumen was the realization that one could formalize the relations between neurons well enough to allow general statements about the global behavior of arbitrarily large and only partly specified nets to be deduced from assumptions about the form and connectivity of their components.”
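
          That “hypothetical species of neuron” is simple enough to write down directly. A minimal rendering of mine (the weights and thresholds are arbitrary choices, not notation from the paper):

              # A McCulloch-Pitts style unit: output is a logical function of
              # binary inputs, defined entirely by weights and a threshold.
              def mp_neuron(inputs, weights, threshold):
                  return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

              # Logical functions fall out of the weight/threshold choices:
              AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
              OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
              NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

              assert AND(1, 1) == 1 and AND(1, 0) == 0
              assert OR(0, 1) == 1 and NOT(1) == 0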

          *
          [Seymour Papert is now my official patron saint of the Computational/Cybernetic theory of mind! Who knew?]


          1. I haven’t read Papert’s paper, but it’s worth noting that McCulloch, along with Walter Pitts, originated the computational theory of the nervous system in 1943. There have been a lot of nuances and complexities revealed since then, but the basic insight remains a seminal one.


          2. A couple of additions here.

            Llinás is aware of all of this obviously and still has some doubts about capabilities outside the biological.

            Llinás and Buzsáki (both of whom I’ve been reading a lot recently) do emphasize the closed nature of the brain – how it is primarily an emulator, self-activating and self-generating its own reality.

            Keep in mind Llinás’s law, which emphasizes that neurons are not totally defined by inputs and outputs. So whether there is a “hypothetical species of neuron” seems somewhat doubtful.

            I don’t see any reference in your comment to McCulloch’s reliability/unreliability argument. If I understand it correctly (and there is more reading I should do), it implies that the brain is non-deterministic – that its components do not behave in fixed fashion but continually and regularly work in different ways, that the same unit can produce different outputs at different times from the same input. This isn’t just how the brain overall works but how its components (neurons, etc.) work.

            I think I have at various points argued that any claim of instantiation of consciousness outside biology would need a compelling physical theory that explains, among other things, how information accumulates in matter. This seems to be a broader issue that includes how life comes about from the non-organic.

            Aside from that, I’m glad you found it useful.


          3. James of S took this in a similar direction to where I would have gone. Up above, you said that if it moves and can see and hear, then at least in some sense, it’s conscious. I know you meant that within the context of biology. I would add that if the system in question has preferences about the state of affairs, particularly in relation to itself, then our intuition of consciousness is likely to be triggered, regardless of whether biology is involved.

            A lot is made about the stochastic nature of the brain, but from what I’ve read, the nervous system compensates for its noisy and gappy signalling with repetition and/or redundancy. Apparently that requires less energy than a more deterministic protocol. (It also only does it where reliability is crucial.) The randomness does play a role, but mostly in borderline cases or situational uncertainty. If food is seen or smelled, or a predator is seen approaching, and the animal reacts with a high degree of indeterminacy, its genes probably aren’t going to stay in the gene pool.
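
            The repetition point is easy to demonstrate with a toy simulation (the error rates and counts here are arbitrary choices of mine): majority-voting over repeats of a noisy binary signal drives the error rate down quickly.

                import random

                # Each transmission of a bit flips with probability p; majority
                # voting over repeats recovers reliability. p is arbitrary.
                def transmit(bit, p=0.2):
                    return bit ^ (random.random() < p)

                def majority(bit, repeats, p=0.2):
                    return int(sum(transmit(bit, p) for _ in range(repeats)) > repeats / 2)

                trials = 10_000
                for repeats in (1, 5, 15):
                    errors = sum(majority(1, repeats) != 1 for _ in range(trials))
                    print(f"{repeats:2d} repeats: error rate {errors / trials:.3f}")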


            McCulloch’s argument, per Llinás, is “that reliability could be attained if neurons were organized in parallel so that the ultimate message was the sum of activity of the neurons acting simultaneously. He further explained that a system where the elements were unreliable to the point that their unreliabilities were sufficiently different from one another would in principle be far more reliable than a system made on totally reliable parts”.

            So it has consequences in how it is built, structured, and works.


            Yes, that is mostly what is being said: how a system can be structured with unreliable parts to make it even more reliable than it would be with more reliable parts. Llinás goes on to talk about how, since the unreliabilities of the unreliable components are different, they don’t add to each other but instead compensate for each other, which results in a more reliable system than one built on reliable components, which might be unreliable in the same ways and add up.
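
            A quick toy comparison of that point (my own construction, with invented error rates): parallel units whose failures are uncorrelated outvote each other’s errors, while units that share a failure mode fail together, however reliable each one is individually.

                import random

                # Majority vote over parallel units. Uncorrelated 20% failures
                # beat correlated 10% failures; the rates are invented.
                def vote(outputs):
                    return int(sum(outputs) > len(outputs) / 2)

                trials, n_units = 10_000, 9
                indep_errors = corr_errors = 0
                for _ in range(trials):
                    # Independent unreliability: each unit flips on its own.
                    indep = [1 ^ (random.random() < 0.2) for _ in range(n_units)]
                    indep_errors += vote(indep) != 1
                    # Shared failure mode: all units flip together, but less often.
                    flip = random.random() < 0.1
                    corr_errors += vote([1 ^ flip] * n_units) != 1

                print(f"independent 20% errors, voted: {indep_errors / trials:.3f}")
                print(f"correlated 10% errors, voted:  {corr_errors / trials:.3f}")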


            Ah, okay, I see what he’s getting at now. Another benefit of that approach is that it can be tailored to just how reliable a particular stretch of neural circuitry needs to be. Which means the trade-offs between reliability, performance, and energy consumption can be tailored for each micro-function.

            A couple of more recent books which look at this kind of stuff are The Principles of Neural Design by Peter Sterling and Simon Laughlin, and The Principles of Neural Information Theory by James Stone. Warning: both of these books are extremely technical.


            I think this gives part of the logic behind neural assemblies, and possibly the diversity of neurons in brains and in the various layers of brain structures, although there are probably other specializations involved with that as well.


        3. “If I understand it correctly (and there is more reading I should do), it implies that the brain is non-deterministic – that its components do not behave in fixed fashion but continually and regularly work in different ways, that the same unit can produce different outputs at different times from the same input.”

          As Mike pointed out, a lot of the noisiness of neurons is compensated for by, for example, frequency of firing. But also, we can expect neurons to be plastic, to learn new firing patterns either quickly or slowly. This can be done in silicon as well.

          “I think I have at various points argued that any claim of instantiation of consciousness outside biology would need a compelling physical theory that explains, among other things, how information accumulates in matter.”

          This is exactly right, and something I’ve been looking at for a while. The concept you’re looking for is mutual information. Every physical process generates mutual information, a correlation between discrete physical systems. This happens all the way down at the quantum level. [I think it may be a significant part of “entanglement”.]

          So if there is mutual information between system X (say a food source) and system Y (say the concentration of sugar molecules at some distance from the food source), then this mutual information becomes an affordance to respond to X when given Y. So a bacterium can “see” Y and respond appropriately because it is related to, i.e., correlated with, X. Natural selection can take advantage of this affordance by creating systems that respond to Y. Those systems that respond in a good way get selected.

          This correlation can become diluted over time as one of the systems interacts with other systems. [I think that’s associated with quantum decoherence.] However, this correlation can become enhanced via a process called (I think unfortunately) causal emergence. Erik Hoel has done a lot of this work, and refers to the resulting mutual information as “Effective Information”. The operations described to generate this Effective Information are (I think) similar to what pattern recognition systems do. These pattern recognition systems can be computer neural nets or biological neural nets. [I think the biological ones are what Ruth Millikan refers to as unitrackers.]

          So is this compelling enough to explain how (mutual) information accumulates in matter, and life comes from the inorganic (via natural selection making use of the affordances of mutual information)?
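
          To make the quantity concrete, here’s mutual information computed directly from a joint distribution. The probabilities are invented for illustration: X is whether food is present, Y is whether sugar is detected at a distance.

              from math import log2

              # Mutual information I(X;Y) from the definition. The joint
              # probabilities below are invented for illustration.
              joint = {("food", "sugar"): 0.4, ("food", "none"): 0.1,
                       ("nofood", "sugar"): 0.1, ("nofood", "none"): 0.4}

              px, py = {}, {}
              for (x, y), p in joint.items():
                  px[x] = px.get(x, 0) + p
                  py[y] = py.get(y, 0) + p

              mi = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
              print(f"I(X;Y) = {mi:.3f} bits")  # > 0: Y is an affordance for responding to X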

          *


          1. Computational methods may be useful for describing how the brain works but mistaking descriptive methods for consciousness just seems like a sort of computational woo.


  8. Hello,

    This is my first comment on your fantastic blog.

    My father used to have what they called seizures where he would walk around like a zombie. He could do anything but talk, yet he clearly wasn’t “there.” It would last at least half an hour. He would do things like let himself out of the house, walk to the neighbors’, let himself in, and find a nice comfortable chair to sit down in. He was harmless, but he would totally freak out my mother (and the neighbors). I suppose it was like sleepwalking, except that happily it responded well to seizure medicine.

    So as someone who knows what it is like to be around a zombie who does most things that a normal person does, I am a little dubious of being able to tell if a bee is self-aware just from observing its behavior. I get it, though, that that is the point of your post.

    As for using theory as criteria to decide if an insect is conscious, I am wondering if it would be helpful to compare with white blood cells as they travel around the body looking for—and recognizing—foreign invaders to attack and kill. There is a case where (I think) we can know the answer; the WBCs are not conscious. So for an insect to be conscious, it has to go beyond being like a WBC.


    1. Thanks, and welcome!

      Very interesting with your father. I suppose he never had any memory of what happened during those events? I can imagine how unnerving it must have been for everyone involved. But cases like his do provide a lot of insights into consciousness.

      I definitely agree that we should expect more from a conscious entity than what unicellular organisms can do. Feinberg and Mallatt, in their book, identify bees as capable of operantly learned responses to punishments or rewards, which seems to imply some version of valenced feeling. Although I’ve learned that the evidence for things like that in these types of animals is typically much more muddled and limited when you dig into the cited studies, not that I’ve done it in this case.


    2. Welcome to Mike’s blog! I’m going to push back at one of your ideas here, but please don’t take it personally. I have a different understanding of consciousness than most here, and bringing up the immune system is an excellent opportunity to flesh it out a bit.

      You say white blood cells are not conscious, but I will say they are, at least to the extent that any single-cell organism, like a bacterium, may be conscious. Single neurons are also in this category.

      More interestingly, the white blood cells are organized into a larger system, and just like neurons are organized to create a conscious brain, white blood cells are organized to create a conscious immune system. This system does not have the same capabilities as the brain, but only in a way similar to how a bat’s brain does not have the capabilities of a human brain, and vice versa. The different systems simply have different capabilities.

      The key for consciousness is the creation of symbols (which contain mutual information) and the interpretation of those symbols. This can happen inside cells, and it can happen between cells. The “consciousness” of a system is just the set of conscious-type (symbol interpretation) processes that system performs.

      *
      [hoping this response generates questions without being off-putting]

