Rule out plant consciousness for the right reasons

In recent years, there’s been a resurgence of the old romantic sentiment that maybe plants are conscious.  I hadn’t realized that an entire sub-field had formed called Plant Neurobiology, the name itself incorporating a dubious claim: that plants have neurons.  Although the field was later renamed to the more cautious Plant Signaling and Behavior, it’s reportedly still popularly known by the original provocative title.

Apparently a number of biologists have had enough and published a paper in the journal Trends in Plant Science making the case that plants neither possess nor require consciousness.  From the abstract:

  • Although ‘plant neurobiologists’ have claimed that plants possess many of the same mental features as animals, such as consciousness, cognition, intentionality, emotions, and the ability to feel pain, the evidence for these abilities in plants is highly problematical.
  • Proponents of plant consciousness have consistently glossed over the unique and remarkable degree of structural, organizational, and functional complexity that the animal brain had to evolve before consciousness could emerge.
  • Recent results of neuroscientist Todd E. Feinberg and evolutionary biologist Jon M. Mallatt on the minimum brain structures and functions required for consciousness in animals have implications for plants.
  • Their findings make it extremely unlikely that plants, lacking any anatomical structures remotely comparable to the complexity of the threshold brain, possess consciousness.

The paper is not technical and makes for fairly easy and interesting reading, and there are also numerous summaries in the news.  It makes some good points about the metabolic expense of consciousness, and that plants are simply not in an ecological position where paying that energy price is adaptive for them.

Those of you who’ve followed me for a while might recognize the names Todd Feinberg and Jon Mallatt, as I’ve highlighted their work several times.  Their books have been a major influence on my views, so it makes a lot of sense to me that their work would be discussed in this context.

But while they are a major influence, I don’t buy all of their propositions.  In particular, I’m not comfortable with their neurobiological essentialism, or with this paper citing it as the major driving force for rejecting plant consciousness.  Feinberg and Mallatt understandably hold this position because the only place consciousness has been conclusively observed is in such systems.

But evolution has repeatedly shown itself capable of finding alternate solutions to problems.  In the case of nervous systems, their low-level functionality is actually a re-implementation of functionality that already existed in unicellular organisms.  So it seems to me that we should be open to alternate implementations of the high-level functionality.

And it’s worth noting that Feinberg and Mallatt’s “structural, organizational, and functional complexity” criteria were developed by looking at vertebrate nervous systems (fish, reptiles, mammals, birds).  The invertebrates they admit into club-consciousness, arthropods (insects, crabs) and cephalopods (octopuses, squids), make it in based on their sense organs and behavior, that is, their observed capabilities.

I think that’s how we should assess the propositions of plant consciousness, cognition, intelligence, etc.: by observed capabilities.  Doing so leaves us open to alternative possibilities, while also avoiding animal chauvinism.

Here I’ll pull out my own mental crutch: the hierarchy of consciousness.  Each layer builds on the previous one.

  1. Reflexes: automatic reactions to stimuli, fixed action patterns determined by genetics (or programming), although modifiable by local classical conditioning.  Some will insist that these action patterns be biologically adaptive; if so, then this and the layers above are inherently biological.
  2. Perception: predictive models of the environment built on information from distance senses (sight, hearing, smell), expanding the scope of what the reflexes can react to and enabling reaction prior to a direct somatic or chemoreceptive encounter.
  3. Attention: prioritization of which perceptions the reflexes are reacting to.  Attention can be bottom-up, essentially reflexes about reflexes (meta-reflexes), or top-down, driven by layers 4 and 5.
  4. Imagination & Sentience: simulations and assessment of possible action scenarios.  Based on the results of the simulations, individual reflexes are either allowed or inhibited, decoupling the reflexes, turning them into motivational states (affects) rather than automatic reactions.
  5. Metacognition: A feedback mechanism for assessing the system’s own cognitive states, particularly the reliability of beliefs.  An advanced recursive form of this enables symbolic thought including language, art, mathematics, etc.

Layers 1-4 make up what is commonly referred to as primary consciousness.  Based on all the evidence I’ve read, plants are mostly in layer 1, with perhaps some limited layer 2 abilities.  I haven’t seen anything implying layers 3 or higher.
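
Purely as an illustration of how the layers depend on each other, here’s a toy sketch in Python. Nothing in it comes from the paper or from neuroscience; every function, rule, and payoff is a hypothetical stand-in, meant only to show each layer consuming the output of the one below it:

```python
# Toy sketch of the hierarchy described above.
# All names and values here are illustrative stand-ins, not claims
# about real neural machinery.

def reflexes(stimulus):
    """Layer 1: fixed stimulus -> response mappings."""
    rules = {"light": "grow_toward", "touch": "curl"}
    return rules.get(stimulus)  # None if no reflex applies

def perceive(distance_signals):
    """Layer 2: map distance-sense data onto predicted stimuli,
    expanding what the reflexes can react to."""
    predictions = {"shadow_moving": "touch", "brightening": "light"}
    return [predictions[s] for s in distance_signals if s in predictions]

def attend(percepts):
    """Layer 3: prioritize which percept the reflexes react to."""
    priority = {"touch": 2, "light": 1}
    if not percepts:
        return None
    return max(percepts, key=lambda p: priority.get(p, 0))

def imagine(percept):
    """Layer 4: simulate the reflex's outcome; allow or inhibit it."""
    simulated_payoff = {"grow_toward": 1, "curl": -1}
    action = reflexes(percept)
    if simulated_payoff.get(action, 0) > 0:
        return action   # reflex allowed
    return None         # reflex inhibited

# A plant, on this account, sits at layer 1 (with maybe a bit of 2):
plant_action = reflexes("light")                       # "grow_toward"

# An animal with layers 2-4 can react to a predicted stimulus it never
# directly contacted, and can veto a reflex after simulating it:
percepts = perceive(["shadow_moving", "brightening"])  # ["touch", "light"]
chosen = attend(percepts)                              # "touch" outranks "light"
animal_action = imagine(chosen)                        # "curl" simulates badly, so inhibited (None)
```

The point of the sketch is just the dependency structure: layer 4’s veto is what decouples a reflex, turning it from an automatic reaction into a motivational state.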

Of course, many in the plant neurobiology community might insist that layers 1 and 2 are sufficient for the label “conscious”.  But if so, then we’d have to make careful distinctions between animal consciousness and plant consciousness, and be clear that we don’t mean plants have imagination or self-reflection.

Consciousness is in the eye of the beholder, but plant consciousness doesn’t strike me as a productive proposition.  If we accept it, what then?  Are we required to take their sensibilities into account?  Are we being cruel when we mow the yard or trim the hedges?  Are vegetarians as much killers as carnivores?

All in all, it would be a lot of trouble for a sketchy proposition, an extraordinary proposition that we should require extraordinary evidence for before accepting.

Unless of course I’m missing something?

39 thoughts on “Rule out plant consciousness for the right reasons”

  1. Well, for one the universe is conscious don’t you know! Today plants, tomorrow the universe!

    Yes, I am being sarcastic. It seems that consciousness would be a manifestation of networking ability, and the connections between stars/planets/black holes that make up the universe seem limited at best.

    I greatly enjoy your posts, good work!

  2. Well, I can’t resist. Doesn’t this depend upon some objective set of functional capabilities that only conscious entities possess rather than a specific set of anatomical features?

    Otherwise, how could anything (AI, for example) other than animals be conscious?

    1. Resistance is futile 🙂

      My whole discussion of the hierarchy is to point out that there’s no one objective set of capabilities everyone agrees is necessary and sufficient for consciousness.

      In the case of AI, I mentioned in layer 1 that many will insist that the reflexes be biological ones. If so, then by definition that excludes AI, at least unless it’s programmed with biological impulses. Personally, I’m not convinced restricting the reflexes to biologically adaptive ones is necessary, but intuitions vary. In science fiction, it’s a common theme to start regarding a machine as conscious once it becomes concerned about its own welfare.

    2. It’s the fourth of July and I can’t resist either James. I’m surprised that an old hippie from the Woodstock generation would have such a narrow view of consciousness. For example: If it was understood that consciousness is universal, then Einstein’s thought experiment would not have required general relativity to explain “why” the earth would immediately spin out of control if the sun were to instantly disappear.

      I know, I know……. we all prefer the paradigm of magic and mystical explanations provided by the laws of physics over simple parsimonious ones. After thousands of years of human achievement, all we’ve really been successful at accomplishing is replacing the eternal gods of the Greeks with the eternal laws of nature. Magic by any other name is still magic just the same. I just don’t get it. Oh wait; yes I do, it goes like this: Give me my first miracle, and I’ll take over from there. That in a nutshell is the grounding tenet of the Church of Science.

      1. “Give me my first miracle, and I’ll take over from there. That in a nutshell is the grounding tenet of the Church of Science.”

        It is. Science has been described as magic that actually works. But the understandings that technology is built on always eventually reduce to raw phenomena we don’t understand. We can model them and the mathematical relations between them, use them in an instrumentalist fashion, but ultimate understanding is never reached.

        Even when or if we do come to understand the brute force phenomena our current understandings are based on, those new understandings will be in yet more primal phenomena we don’t understand. Turtles all the way down. Or at least until society is no longer willing to fork over the resources for more investigation.

  3. I remember watching a Joseph Campbell lecture where he talked about how plants turn to face the sun, and he called that “a form of consciousness.” I get what he meant by that, but I felt like that’s overextending the meaning of the word conscious. If we really want to talk about plants as conscious, then we really do need something like your hierarchy of consciousness.

    1. Joseph Campbell’s use of language often took a lot of artistic liberty. I think it was why he was so popular in the 1970s. He had a way of discussing mythology without sounding too analytical about it, and hence not coming across as dismissive of it. I felt his influence in my own Greek and Roman mythology class in the 80s where the professor conveyed the stories in a similar manner.

      But yeah, if you took his language too literally, you might come away with some crazy ideas.

  4. Fruit Is Murder! 😮 😉

    This is what you get with all that relativism and interpretation. One gang proclaiming that silicon and copper will be “conscious” and another gang proclaiming that my fern has opinions. (And yet another gang thinking my thermostat “knows” what it’s doing.)

    As you know, my view holds central a physical, massively interconnected, parallel network of sufficient size and complexity as a foundation requirement for consciousness, so plants (and thermostats) are easily eliminated from the class of conscious things.

    (I wonder if lack of progress with a Theory of Mind, in part, drives some of these fringe ideas, both from frustration with the lack of progress and from the sense that nothing says an oddball idea is wrong, so the door is open. Something similar happens in high-energy physics and math in areas that are still undecided or unknown. So much is unknown with consciousness that it’s a wide open door.)

    1. Actually, the fruit producing plant “wants” you to eat it, and poop its seed somewhere far away. So fruit isn’t murder, it’s sex! Or, well, cross species participation in sex. Or one can look at it as eating another organism’s sex organs, if that makes one more comfortable. 🙂

      Not sure what you mean by Theory of Mind, but cognitive neuroscience seems to be making steady progress, year after year. It’s not stuck in the empirical wasteland that particle physics seems to be in. There’s a lot of solid grounded science happening, some of which I’ve tried to highlight on this blog, but most of it I never get to: https://neurosciencenews.com/

      1. “Actually,…”

        I get a kick out of the pepper plant, which evolved capsaicin as a way of keeping most mammals from eating it. But birds are immune to capsaicin, so they eat peppers and spread the seeds much further than any mammal could. The plant uses airmail!

        And then these other evolved mammals come along, taste the pepper’s self defense, and think, “Mmm! Salsa!!”

        “Not sure what you mean…”

        I mean that we don’t know (yet) how things work, so there can be conflicting theories without definitive means to select among them. (Hence all the long-running discussions without resolution.)

        1. For the last several years we have only bought suet with hot pepper to keep the squirrels off of it. Unfortunately I think this may become an experiment in evolution where the squirrels gradually become resistant to it. I seem to see them more and more on the suet although still not to the extent they would be if it was untreated. So there may be a developing immunity to its effects.

          Also, I just noticed your comment about “central a physical, massively interconnected, parallel network of sufficient size and complexity” as a requirement. I am not at all certain it needs to be all that large or massive. I think it probably has more to do with the wiring and types of connections than the absolute size or number of connections, although probably some minimum requirement applies.

          I guess one question that comes to mind is how much of our brain is actually used to produce consciousness. Even if consciousness is somewhat of a whole-brain phenomenon, it still might only be a small part that makes the difference between consciousness and not consciousness.

          I am increasingly suspecting that connections and information flow make a lot of difference in behavioral complexity and capacity but that the missing pieces are in the neurotransmitters and hormones, linked to instincts, that combine with neural circuits to create a somatic unity that forms the basis of self.

          Is there red without the experience of red? Can there be experience of red without a sense of self? I am not referring to a sort of meta-cognitive self that might only exist in humans but some deeper, more basic self linked to somatic unity and control that may be realized in actual brain circuits.

          1. “So there may be a developing immunity to its effects.”

            Maybe they’re just learning to like it — an acquired taste.

            “I am not at all certain it needs to be all that large or massive.”

            Agreed. (I said “massively interconnected” and of “sufficient size and complexity” — neurons having, on average, 7,000 connections is what I was keying off.)

            “Is there red without the experience of red?”

            There are obviously photons with the right wavelength (650-700 nm) that a machine can detect (color cameras do it routinely), so it seems to depend on exactly what’s meant by “experiencing” red, if the experience lies somewhere between human qualia and a color camera.

            I’m not sure what such a thing would amount to.

          2. About the red.

            I am trying to get at the difference between a camera and human qualia. There are red detectors in both, but the camera, I think we can agree, isn’t conscious. The difference is that with qualia there is the idea of experience. We could imagine perhaps a human being who could also detect red but have no experience of it. Likely this is happening all of the time – our red detectors are firing but we are not seeing it.

            I am not trying to make a distinction between access and phenomenal consciousness as I understand them. Rather, what I am referring to are red detectors firing without any consciousness whatsoever or any possibility of it happening.

            I am just using red as an example of qualia. I know there are a small number of people who cannot see color, so substitute something else for them.

          3. “I am not trying to make a distinction between access and phenomenal consciousness as I understand them.”
            “I am trying to get at the difference between a camera and human qualia.”

            Have you considered that the difference may be the camera doesn’t have access conscious functionality to transform its sensory data into qualia?

          4. James,
            My own answer about the red is valence. Take that away, and there will be nothing it is like to see red — pure information. Often our input senses have both an informational component (access consciousness) and a valence component (phenomenal consciousness). The sense of smell often displays this more clearly than the sense of sight. An even more obvious example would be toe pain. It both hurts (valence) and tells you where this issue happens to be (information).

            A camera which is set up to detect red should tell us about such occurrences by means of electricity (computer 4). The non-conscious brain can do this sort of thing by means of neuron firing (computer 2). Theoretically the “fuel” which drives consciousness (computer 3) is valence.

            Here you may wonder, if a person had no valence regarding color, do my models then suggest that no colors would be seen? Actually no. It just wouldn’t feel like anything to see any colors. They could still be accepted as input information. But if all forms of valence were lost, then my models suggest that the conscious form of function would cease, and so nothing would be seen in a conscious sense.

          5. “Have you considered that the difference may be the camera doesn’t have access conscious functionality to transform its sensory data into qualia?”

            Maybe my understanding of the distinction is wrong but can’t red be a part of phenomenal consciousness?

          6. I am inclined to think that the distinction between access and phenomenal consciousness is somewhat bogus anyway. I only brought it up to make clear when I am talking about the concept of experience I would mean both access and phenomenal consciousness if you think there is some value in the distinction.

            The difference between unconscious neurons firing and consciousness is that consciousness has the quality of something experienced and that experience takes place in reference to something – the self – that which experiences what it feels like to be something.

          7. Eric,

            You and I may be saying something vaguely related. I am not sure I like the term “valence”. For me it seems rather obscure and makes me only think of chemistry. When I look it up, it also seems to have connotations of good/bad, positive/negative. I also don’t see how it ties to phenomenal whereas information ties to access consciousness.

          8. Red is obviously a part of phenomenal consciousness. But as you noted, the idea that phenomenal consciousness is something completely separate from access consciousness is dubious. I think phenomenal consciousness is just access consciousness from the inside.

            In terms of what Eric is talking about, it’s the access functionality that knits the visual perception of red with any affects (felt valence) associated with that perception, creating the overall felt experience, the qualia of seeing red.

          9. James,
            It’s been difficult for me to come up with a punishment / reward term that everyone is comfortable with. “Valence” seems to work best over at Wyrd’s anyway. Perhaps “sentience” works best here?

            It sounds to me like you’re saying that consciousness needs to be considered holistically. Thus in my own speak, toe pain must inherently provide both personal punishment and location information. There can’t be one without the other. Is that right?

            I’ve developed a detailed conceptual model by which the conscious form of computer (like you) exists as an output of the non-conscious form of computer (like your brain). Thus if you have any questions about how various circumstances are handled from this perspective, I’d be pleased to get into it whenever you like.

          10. One last reply since I think my ideas are still a little vague and forming on this matter.

            Basically I am trying to connect the radical plasticity theory to the “hard problem”. If we think of neural pathways and circuits as involved in reflexes and perception (#1 and #2 in your list), then we can imagine that at some point in evolution, to get more adaptive behavior, we needed to develop neural pathways and circuits to control the reflexive and perceptual pathways. This would mean circuits that are not passive and reactive but proactive, and it would mean pathways controlling other pathways.

            For this to operate most effectively for adaptive purposes, it would require an internal spatial and temporal model that maps the body itself and the body in relation to the external world – the basis of a deep self. In its most primitive forms even this self is unconscious. However, as control pathways evolve in multiple layers, with additional layers of control upon lower levels of control, eventually we develop a unitary sense of self. This self is the experiencer, and its process of control is the generator of qualia. The quale of red is not an add-on to the pathways processing “red” light but rather the process of higher-level pathways controlling the lower-level pathways that are processing the “red”.

            Meta-cognition and the higher levels of your hierarchy are simply additional layers piled upon other layers. The whole framework is built up by piling layers of control on other layers.

            This is intended as mostly a logical model (and a somewhat general one) that may not be realized in some easily predictable form in actual brains. However, I think I have quoted this before; it suggests that there may be actual pathways associated with the maintenance of self:

            Rather, decreased connectivity between the parahippocampus and retrosplenial cortex (RSC) correlated strongly with ratings of “ego-dissolution” and “altered meaning,” implying the importance of this particular circuit for the maintenance of “self” or “ego” and its processing of “meaning.”

            https://www.ncbi.nlm.nih.gov/pubmed/27071089

          11. A lot of what you’re saying here resonates with my own views. I’d just add that I think the self concept exists at many layers. There’s Damasio’s proto-self in the brainstem, and the core self in the insula and regions you mention. It seems like all of this gets stitched together as a self concept somewhere in the fronto-parietal network, where it gets used in the models and simulations we call imagination.

            But like you, my own views on this, and I suspect neuroscience’s views, remain blurry, with many details still to be worked out.

  5. I have my own argument against this plant consciousness nonsense, or my “four forms of computer” discussion.

    The genetic material which exists in cells evolved… somehow. Regardless, this sort of thing seems effective to call reality’s first form of “computer”, or stuff which functions algorithmically. (I’d love to add an earlier form if anyone can think of a reasonable example.)

    The second form of computer may have incited the Cambrian explosion of life around 541 million years ago. Regardless, apparently a central organism processor was helpful in certain forms of life, or “brain”. While the first form of computer produces proteins and such given the relationship between genetic material and input material, the brain essentially produces organism guidance given the way that input signals cause neurons to fire for output function.

    Then the third form of computer exists as an output of the second, or “consciousness”. For clarification I reduce this back to “valence”, or a punishment / reward dynamic. There is something it is like to have valence, and nothing it is like to not. Furthermore computation is not inherent. Instead this stuff may be considered as motivation from which to potentially drive the conscious form of computer. Apparently non-conscious brains couldn’t deal with more “open” environments as effectively as they might, though such an “agent” could. Whatever processing that I consciously do, I suspect that my brain does more than 100,000 times as much, and then my genetic material trumps this again by a greater magnitude still.

    The fourth variety of computer is of course technological.

    To now return to the original issue, plants do have the genetic form of computation, though biologists don’t consider them to have central organism processors, or certainly valence produced by them. Thus there should be nothing it is like to exist as a plant. While scientists in general agree, all sorts of nonsense is permitted to enter the realm of science without any accepted model regarding these matters. In many ways science is still a new institution, though voids like this one should be filled as it progresses. I hope to help it do so.

    1. “though biologists don’t consider them to have central organism processors, or certainly valence produced by them”

      This can get into how we define “valence”. Plants have no imaginative capabilities, so they have no reason to feel their valences, but they do have preferences. They aren’t conscious preferences, just like the reflexes in our brainstem and spinal cord aren’t conscious ones, but they are there. A lot of this gets back to Antonio Damasio’s concept of biological value, of which conscious affects (including valence) are an elaboration.

      1. Right Mike, it all depends upon definition. And when someone prefaces the term “value” with “biological”, that’s a pretty good indication that they aren’t talking about the same value that’s so central to my ideas. So I do like that Damasio makes such a formal differentiation. And as you know, I also like to mention my themes over here when applicable from time to time. It could be that someone will point out some valid objections for me to consider, or even decide that I’ve got things pretty straight and so want to hear some more.

        1. Eric,
          I think what you should consider is that the sentience-value you describe is a special case of biological value. In other words, sentience-value didn’t pop into existence out of nothing. It had antecedents. Biological value is the “goal” of the reflexes. Sentience-value is biological-value fed into the sensory-action-scenario-assessment engine (imagination). In other words, sentience-value is a continuation of biological-value.

      2. In other words, sentience-value is a continuation of biological-value.

        Well of course Mike. I’m not saying that sentience has no evolutionary uses, or “value” in that regard. David Chalmers even calls the “Why?” of sentience a “hard problem”. I haven’t heard that Damasio has provided any true answer for Chalmers. I do provide such an answer however, or “autonomy”. Non-conscious machines simply cannot be programmed to deal with enough situations. Thus evolution developed a true agent by means of sentience.

        Regardless, according to my theory the goal of the conscious entity is to feel as good as it possibly can each moment. It is not to promote “biological value”. That’s evolution’s goal, and apparently one mechanism to that end was to create our separate goal, or sentience.

        1. As it turns out, I’ve been reading some Chalmers this morning. I’m pretty sure he wouldn’t see your answer as any more acceptable than the many others that have been given. He seems to regard experience as something separate from any functionality, undefinable, and irreducible.

          On the relation between felt-value and biological-value, I think we’re on the same page. Felt-value (what I was calling sentience-value above) exists to promote biological-value, but the two exist at different levels and can become mismatched (e.g., birth control, fatty foods).

      3. He seems to regard experience as something separate from any functionality, undefinable, and irreducible.

        If what you say is true Mike, then in my book this renders Chalmers a substance dualist. Do you agree?

        1. It makes him a dualist, but his views are nuanced, and lumping him in with straight-out Cartesian dualism is probably failing to give him interpretational charity. Here’s his own assessment:

          This position qualifies as a variety of dualism, as it postulates basic properties over and above the properties invoked by physics. But it is an innocent version of dualism, entirely compatible with the scientific view of the world. Nothing in this approach contradicts anything in physical theory; we simply need to add further bridging principles to explain how experience arises from physical processes. There is nothing particularly spiritual or mystical about this theory – its overall shape is like that of a physical theory, with a few fundamental entities connected by fundamental laws. It expands the ontology slightly, to be sure, but Maxwell did the same thing. Indeed, the overall structure of this position is entirely naturalistic, allowing that ultimately the universe comes down to a network of basic entities obeying simple laws, and allowing that there may ultimately be a theory of consciousness cast in terms of such laws. If the position is to have a name, a good choice might be naturalistic dualism.

          http://consc.net/papers/facing.html

      4. Well understand this Mike: if he regards experience as being something that exists, and yet is separate from functionality, then he’s clearly referencing an ontological void in causality. You may be wrong about him there, though the passage you provided doesn’t suggest to me that he deserves charity. “Basic properties over and above the properties invoked by physics…”? Hopefully he meant “known physics”.

        I see that he wants to be perceived as a naturalist, though I worry about him still feeling free to say supernaturalist types of things. If he will not acknowledge that there is a reason that phenomenal experience evolved, whether mine or any other account, then he’s a full Cartesian dualist. Otherwise there may be hope.

        Many physicists have this problem as well. On the one hand they want to believe that reality ultimately functions randomly, or that there’s a full ontological void in causality itself. Then on the other hand they want to be perceived as naturalists. Well one position here contradicts the other. Thus there is Einstein’s side and the supernaturalist side — no other side. Currently physicists seem to enjoy bitching out Einstein for saying “God doesn’t play dice”, as well as pretending that they’re naturalists like him.

        Sabine Hossenfelder seems to have grasped my point about this and so no longer says ontologically dubious things. This is to say that she now considers there to be “epistemological uncertainty” rather than “ontological uncertainty” regarding Heisenberg’s uncertainty principle. Einstein would approve. Could Chalmers be straightened out about this sort of thing as well? Might he practically become the naturalist that he clearly wants to be? I don’t know. Apparently he answers mail from JamesofSeattle though….

  6. Interesting post and obviously above my pay-grade.
    If plants satisfy some level of consciousness, what is the harm in considering them conscious? Or is it because the next step would be to ask whether plants are intelligent?

    1. Thanks mak.

      I think the trouble would be the points I mentioned at the end of the post. It would oblige us to treat plants as subjects of moral concern. Weeding the garden or clearing out brush might become an ethically tricky issue. Of course, if plants are conscious, then we should face up to those issues, but given the trouble, and the prima facie falsity of the proposition, I think it’s rational to hold out for much better evidence than we currently have.

      None of that is to say senseless destruction of plants is okay. I think most of us regard it as something that should only be done when necessary, for environmental reasons if nothing else.
