Reducing felt experience requires not preemptively dismissing the solutions

Annaka Harris has a new audiobook out, which she is promoting. I haven’t listened to it, but based on the interviews and spots like the one below, it appears that she’s doubling down on the conclusions she reached in her book from a few years ago, that consciousness is fundamental and pervasive.

The Hard Problem of Consciousness | Annaka Harris

Harris starts off by discussing the profound mystery of consciousness. But she clarifies that she isn’t thinking about higher order thought, like the kind in humans, but something more basic: “felt experience.” She takes this to be something that can exist without thought, and so discusses the possibility of it existing in plants and other organisms that don’t trigger most people’s intuitions of a fellow consciousness.

As I’ve noted in a couple of recent posts, the hard problem of consciousness seems specific to a particular theory of consciousness, that of fundamental consciousness, the idea that manifest conscious experience is exactly what it seems and nothing else, that there is no appearance / reality distinction or hidden complexities. I’m sure Harris, like so many others, will argue that there’s no choice but to accept fundamental consciousness. How else to explain the mystery?

But like David Chalmers and many others, she starts off by dismissing a possible solution: “higher order” processing. Without that, felt experience, the feelings of conscious experience, do look simple and irreducible. But that’s only because we’ve chosen to isolate something that didn’t evolve to be isolated, that has a functional role to play in organisms.

Harris’ example of the decisions vines make about where to grow is a good one. In most biological descriptions, this behavior is automatic, done without any volition. She wonders whether it might nonetheless involve felt experience. But she doesn’t seem to wonder whether similar behavior in a Roomba, self-driving car, or thermostat has similar types of feelings. (Some panpsychists do admit that their view implies experience in these types of systems, but in my experience most resist it.)

Many animal researchers have similar intuitions, that the observable behavioral reactions in relatively simple animals must involve feeling, since similar reactions in us are accompanied by them (at least in healthy, mentally complete humans). Of course, similar to most panpsychists, they typically resist the implication for machines, often gesturing at some unknown biological ingredient or principle which will distinguish the systems they want to credit with feelings from those they don’t.

My take is that the solution is to reject the theory of fundamental consciousness. What’s the alternative? A reductive theory. But how do we reduce felt experience? Remember, to do a true reduction, the phenomenon must be broken down into components that are not that phenomenon. If anywhere in the description we have to include the overall phenomenon itself, we’ve failed.

Along those lines, I think part of the explanation of what feelings are is that they are composed of automatic reactions that can be either allowed or overridden. So if an animal sees a predator and always automatically reacts by running away, that in and of itself isn’t evidence of fear. On the other hand, if sometimes the animal can override their impulse to run away, maybe because there’s food nearby and they judge the risk to be worth it, then we have an animal capable of feeling fear.

So a feeling is a perception, a prediction, of an impulse which an organism uses in its reasoning to decide whether to inhibit or indulge the impulse. This means the higher order thinking Harris immediately excludes from her consideration is actually part of the answer. That answer, incidentally, also explains why we evolved feelings.

An organism is generally only going to have feelings if they provide a survival advantage, but that advantage only exists if they have some reasoning aspect to make use of it. Note that this reasoning aspect doesn’t have to be as sophisticated as what happens in humans, or even mammals or birds necessarily, although the sophistication makes it easier to detect. It just needs to be present in some incipient form to act as one endpoint in the relationship between it and the impulse, the relationship that we refer to as a “feeling”.

This requirement for a minimal level of reasoning seems to rule out felt experience in simple animals, plants, robots, and thermostats. It also gives us an idea of what a technological system would need to have it: a system of automatic reactions, which can be optionally overridden by other parts of the system simulating possible scenarios, even if only a second or two in the future.
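To make that concrete, here’s a minimal toy sketch of the architecture in Python. It’s my own illustration, not anything from Harris or an actual robotics system, and every name and number in it is an assumption chosen for the example:

```python
# Toy sketch: an automatic reflex produces an impulse, and a separate
# evaluative layer briefly simulates the near future and may override it.
# All values here are illustrative assumptions, not from any real system.

def reflex(percept):
    """Automatic reaction: flee whenever a predator is detected."""
    return "flee" if percept["predator"] else "continue"

def simulate(percept, action):
    """Crude one-step lookahead: rough payoff of the action a moment from now."""
    if action == "flee":
        return 0.0                       # safe, but forgoes any food
    risk = 0.9 if percept["predator"] else 0.0
    return percept["food_value"] - risk  # staying: food gained minus risk

def act(percept):
    impulse = reflex(percept)            # the automatic impulse
    alternative = "continue" if impulse == "flee" else "flee"
    # Indulge the impulse unless the simulated alternative pays better.
    if simulate(percept, alternative) > simulate(percept, impulse):
        return alternative               # impulse overridden
    return impulse                       # impulse allowed

print(act({"predator": True, "food_value": 0.2}))  # flee: food not worth the risk
print(act({"predator": True, "food_value": 2.0}))  # continue: impulse overridden
```

On this sketch, the candidate for the “feeling” isn’t the reflex alone, but the impulse as represented to the evaluative layer that can allow or override it.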

Figuring out how to do this is not trivial. None of the current systems people wonder about are capable of it. But while it’s hard, it’s not the utter intractability of the hard problem of consciousness. Once we dismiss fundamental consciousness, that problem seems to no longer exist.

Unless of course I’m missing something?

63 thoughts on “Reducing felt experience requires not preemptively dismissing the solutions”

  1. “My take is that the solution is to reject the theory of fundamental consciousness. What’s the alternative? A reductive theory.”

    A theory of emergence is another alternative.

    And what does it emerge from? As far as we can determine so far, only biology.

    “It also gives us an idea of what a technological system would need to have it...”

    It doesn’t need to have it.

    1. I see emergence (in the weak sense in which it’s plausible for me) as reduction in reverse. So saying thermodynamics is emergent from particle physics is the same as saying it reduces to particle physics.

      On biology, if it’s special, the question for me, as always, is what in particular about it makes it so? The specific elements involved? The functional orientation toward survival and procreation? Or some other aspect?

        1. Since everything is made of particles, everything can be trivially reduced to particle physics. I don’t see that you are reducing consciousness to particle physics. I see you reducing it to “functions”, and I don’t know what the physical definition of a “function” is, or where one would find “functions” in particle physics.

        Biology is special because “as far as we can determine so far” only biological systems are conscious. If consciousness is about information, the information in a biological organism would be biological. Hunger might serve the function of notifying that there is a calorie deficit but, if that was all it did, we wouldn’t have any fat people.

          1. I don’t know if “reducing” is the right word, but it’s probably right to say I equate consciousness with functionality. But there are always many different ways to skin a cat. Functions, cause / effect relationships, are multiply realizable. So there wouldn’t be a 1:1 correspondence between a type of experience and a type of physical system. (There would be in individual cases. A particular experience of hunger equates with a particular physical state.)

          My issue with the “as far as we can determine” is it seems like that was once true of locomotion, computation, accounting, navigation, and many other things.

          I’d say hunger is the perception of an impulse to eat that, when everything is working right, happens with a particular homeostatic state. But everything is often not working right, particularly in modern societies with all the low nutrient high calorie options available.

          1. If “reducing” isn’t the right word, why do you call it a “reductive theory?”

            I still didn’t see a definition of a “function” that is measurable, that allows us to know we have a “function” or don’t have one. How do we distinguish a conscious function from an unconscious one? How much of walking is conscious navigation and how much is unconscious?

            That consciousness should have “functions” is hardly any great insight because it evolved in biology and most of what evolves has “functions.” But that’s true of hearts, lungs, and digestive systems too. If we combine a pump, a bellows, and a wood chipper, do we have an organism?

          2. It seems like a function is defined by its inputs and outputs, its truth table. So we can measure its presence or absence by whether the right inputs are being transformed into the right outputs. Of course, biology is messy, so there’s often a category of processes with very similar but not identical functionality.

            I agree. Functionalism shouldn’t be controversial. But it is, at least in philosophy.
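
            As a toy illustration of that input/output test (my own sketch, nothing from the thread), the presence of a function can be checked purely behaviorally against its truth table. It also shows the multiple realizability mentioned earlier: two very different mechanisms realizing the same function.

            ```python
            # Toy sketch: a "function" identified by its truth table, detected
            # purely by whether the right inputs yield the right outputs.
            from itertools import product

            # Reference truth table for XOR: the function we're testing for.
            XOR_TABLE = {(a, b): a ^ b for a, b in product([0, 1], repeat=2)}

            def realizes(candidate, table):
                """True if candidate maps every tabled input to the right output."""
                return all(candidate(*inputs) == out for inputs, out in table.items())

            # Two different "physical" realizations of the same function.
            def xor_arithmetic(a, b):
                return (a + b) % 2                      # via modular arithmetic

            def xor_logic(a, b):
                return int((a or b) and not (a and b))  # via boolean operations

            print(realizes(xor_arithmetic, XOR_TABLE))  # True
            print(realizes(xor_logic, XOR_TABLE))       # True: same function, different mechanism
            ```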

          3. Still seems vague without more of an agreement on right inputs and outputs. Or, are we just talking about how veridical the outputs are to science? Because that doesn’t seem to be what consciousness is about. The outputs need to be evaluated relative to the organism not to a truth standard. Running from a hose that we think is a snake might be the right output.

          4. Certainly it’s complex. But we have tests like the one I discussed in the post. Can the animal override their instinctive reaction? At least in a way where another instinctive reaction isn’t just overwhelming it? But the edge cases are always going to be difficult. Do arthropods, for example, feel? It depends on exactly what we mean by “feel”. And our intuitions about that meaning aren’t exactly the most consistent thing anyway. Which is why I often say that consciousness is in the eye of the beholder.

          5. I’m not sure overriding an instinctive reaction even works for animals but it certainly wouldn’t work for non-biological entities. I don’t think your example of “system of automatic reactions, which can be optionally overridden by other parts of the system” really works. If the reaction is programmed to be evaluated by another part of the system, it isn’t automatic anymore. It just looks like something rigged up to resemble an instinct. But even with animals, the fact that most of behavior is instinctive doesn’t definitively mean no consciousness is involved in carrying it out or being conscious of it. Humans probably can’t override the startle reflex but after the fact we become aware of what startled us.

          6. We can talk in terms of dispositions if that seems better. I used “automatic” to get at the idea that this isn’t something we volitionally choose to initiate.

            But many aspects of it are in fact fully automated. If you see a dangerous animal coming at you, you may for some reason decide not to dodge or run, but there are things that will happen anyway, such as your heart rate and breathing shooting up, pupils dilating, muscles contracting, etc. All of which reverberates back in the interoceptive loop as part of the overall experience of the impulse.

            We can never rule out an epiphenomenal experience. But we can observe that evolution rarely wastes resources. If we never see the behavior it enables, I think it’s reasonable to question whether it’s there. But the edge cases will always be difficult, and our intuitions may not be serving us well in those cases.

          7. It seems like different aspects of the same framework. Overriding initial impulses in an adaptive fashion, I think, requires memory, learning, and problem solving. I just focused on the overriding component because of Harris’ emphasis on felt experience.

  2. A higher-order/cognitive aspect doesn’t avoid the Hard problem of consciousness. We can still imagine the causal and functional dynamics of a system that can interrupt instinctive behavior occurring without any felt experience. We need an argument that connects this kind of cognitive control with felt experience. What necessary role in cognitive control does felt experience have?

    There are two related problems. How are instincts represented within the system, and why is there a further consumer of representation over and above collections of neurons? Neurons don’t need fear to do their job. What is it that needs felt fear to do its job?

    1. Thanks for commenting!

      Consider an organism that is aware of danger, has the impulse to run away, is aware of that impulse, and yet with great effort can override it. What would you call its perception of that impulse? What do we have to add to it to qualify it for the label “fear”? And if it were absent, what would that do for the organism’s chances of survival?

      Of course, you could say that the ineffable intrinsic essence of fear is missing. If it’s something utterly epiphenomenal, something that makes no difference to the organism’s survival, then how can we demonstrate its existence or non-existence?

      1. The target of consideration is presumably the phenomenal aspect of experience. However we want to characterize it, even the staunchest of eliminativists generally agree there is a manifest subjective aspect to all the various functional dispositions that characterize conscious beings. The Hard problem is accounting for this manifest subjectivity given the conceptual tools sanctioned by science.

        There is a potential equivocation inherent in these discussions. We can talk about “organisms aware of danger” without being committed to any target of reference beyond the collection of causal simples that ground explanations of its behavior. So we can convince ourselves that we are giving a complete description of a conscious organism without leaving anything out, nothing worth having at least. But these descriptions don’t bear a resemblance to our first-personal experience as agents experiencing, cognizing, and acting in the world. I certainly don’t feel like a collection of simples. I feel like a whole entity engaging with the world through vivid impressions bearing potent meaning. We should be able to see ourselves in our theories of consciousness. This is the core difficulty in broad acceptance of theories that eliminate the felt experience.

        1. Manifest experience, in and of itself, doesn’t seem to have a hard problem. We only get the hard problem if we make some additional assumptions, that what is manifest is simple, irreducible, and that our introspective judgments about it are reliable. With those assumptions, we have fundamental consciousness and the hard problem. But we’ve assumed ourselves into that problem.

          If instead, as in so much with science, we take manifest experience as the tip of the iceberg, that there are hidden structures and processes, then that manifest image becomes something scientifically tractable.

          On resembling first person experience, an impulse to run away that can be overridden feels like it matches my first person experience of fear, just as a potentially resistible impulse to fight matches my experience of anger. The idea that these experiences float apart from the functionality has always struck me as a major assumption. Surveys done in experimental philosophy seem to show that it isn’t a universal intuition.

          Rather than see ourselves in a theory of consciousness, I think we should be able to account for our impression of ourselves. But the results of accounting are frequently far from intuitive.

          1. >Manifest experience, in and of itself, doesn’t seem to have a hard problem. We only get the hard problem if we make some additional assumptions, that what is manifest is simple, irreducible, and that our introspective judgments about it are reliable. 

            A part of me feels like people who are committed to a deflated notion of consciousness are forced to claim they don’t see any “Hard Problem” in explaining the manifest experience from structure, dynamics, etc. Whatever intuitions drive people to posit a Hard problem of consciousness, I certainly share. So it’s a little hard for me to see where you are coming from. But setting that aside, a satisfying explanation for consciousness will need to explain where all the hard problem intuitions come from. Chalmers calls this the meta-problem of consciousness.

            If there were no unique difficulty in explaining consciousness from the conceptual tools sanctioned by science, few people would buy into the Hard problem framing. This bias needs to be explained. While you can say there’s no in-principle problem for science in explaining whatever subset of features associated with consciousness that turn out to be indispensable, from where we sit there is an explanatory gap that other scientific mysteries do not share. At the very least this warrants an explanation. We can then judge the complete theory on the resemblance criteria and whatever else.

          2. It’s always tempting to see those with different intuitions as engaging in motivated reasoning. But sometimes people just have different intuitions. In my case I’ve never felt the one behind the hard problem. The first time I read Chalmers’ description of it, or Joseph Levine’s of the explanatory gap, I was puzzled about what they were talking about. They depended on ostension (pointing to a common perception) to make the case, but sometimes someone else can look at what is being pointed to and just not see the same thing.

            For a while I thought I was just missing something. Which is why I found this discussion from a couple of years ago interesting. Surveys conducted in experimental philosophy show that the intuition is widespread, but not universal. https://selfawarepatterns.com/2023/02/18/do-regular-people-see-a-hard-problem-of-consciousness/

            I do agree that the intuition needs to be accounted for. But a widespread intuition is easier to account for than a universal one, and as Chalmers notes, there are explanations for the universal one that don’t reference fundamental consciousness. I can understand the skepticism toward those explanations for someone who feels that intuition intensely. But then who is convinced by quantum mechanics, general relativity, or natural selection the first time they hear them?

  3. “Along those lines, I think part of the explanation of what feelings are is that they are composed of automatic reactions that can be either allowed or overridden.”

    I may have said this before(?), but this aligns very well with Henri Bergson and Alfred North Whitehead’s views, and imo is the key insight to consciousness. Bergson wrote:

    Where consciousness appears, it does not so much light up the instinct itself as the thwartings to which instinct is subject; it is the deficit of instinct, the distance between the act and the idea, that becomes consciousness; so that consciousness, here, is only an accident. Essentially, consciousness only emphasizes the starting-point of instinct, the point at which the whole series of automatic movements is released. Deficit, on the contrary, is the normal state of intelligence. Laboring under difficulties is its very essence.

    It also aligns with our own lived experience: the things we can do on “autopilot” we are typically unconscious of. It’s when we encounter some unexpected difficulty requiring a bit of deliberation that things enter our conscious awareness.

    Meanwhile for Whitehead, consciousness is “propositional feeling”, feeling the contrast between the theory and the facts. It’s not just awareness of what is, but what might be.

    But Whitehead does keep feeling (though not necessarily conscious feeling) as fundamental, although for him it is not non-physical or epiphenomenal. I think this is the right move. We might substitute “information” or “relation” instead of “feeling”, but we cannot explain consciousness in terms of perceptions and predictions if these are not real. I suppose I’m saying that whatever consciousness may be reducible to can be considered as fundamental consciousness, at least if it is not reducible further (which I think is true for both information and relations).

    1. Thanks. But I’m having trouble parsing Bergson’s quote here. I can see some resemblance to what I’m saying in his mentioning of the relationship with instinct, but it’s hard to tell if he’s conveying the same idea. Maybe it makes more sense in context.

      It seems like if we say consciousness is reducible to something which is itself not reducible, we’re talking panprotopsychism. That’s not my view. I think it’s reducible all the way down to fundamental physics, with the caveat that there is a lot going on in between.

      Information in particular is, I think, causation, or maybe a snapshot of a causal transformation. But causation seems like an asymmetrical relation across time, one that is itself emergent from symmetrical relations as we scale up from microscopic to macroscopic scales due to the second law of thermodynamics.

      But maybe I’ve overlooked something?

      1. Sorry, I can see how the quote might be confusing out of context, and the wording is admittedly difficult. Where you speak of automatic reactions he is speaking of “instinct”, and where you say about being either allowed or overridden, he talks of instinct being “thwarted”. The point is that consciousness steps in where automatic/instinctive response fails. Intelligence/consciousness is essentially bound to difficulty and deciding between different options.

        Saying that it’s reducible all the way down to fundamental physics can still be seen as panprotopsychism though. Depending on how we define “physics”, I think this is the general panpsychist view already. The distinction is perhaps that panpsychists generally make it a separate aspect/dimension of fundamental physics, which I think is a misstep.

        Personally, I lean towards information being fundamental and being more or less convertible with relations/structure/process. If it could be reduced to anything else, that thing would have to be informationless, which seems impossible to me.

        1. Thanks for the clarifications. I see what you mean about the similarity. I suspect where we’d diverge might be in what we mean by “consciousness”.

          Right, the panprotopsychist thing highlights the difference between the two types of panpsychist I discussed the other day: naturalistic panpsychists and fundamental panpsychists. The naturalistic ones are basically physicalists, but define consciousness (or proto-consciousness) in such a way that it’s pervasive. The fundamental panpsychists always see consciousness (or proto-consciousness) as something extra at all scales. Of course, many individual panpsychists are undecided about which side they come down on.

          I might be on board with information being convertible with relations / structure / processes, if by “convertible” we mean “identical to” or “composed of”.

  4. The headline advises against “preemptively dismissing” a solution, but the conclusion invites us to “dismiss fundamental consciousness.” I can’t decide whether you’re asking us to dismiss it preemptively, or whether you’ve offered some solid reasoning about its flaws. For my part I don’t know what’s logically wrong with a theory of what you’ve termed “fundamental consciousness.” The main objection still seems to be the “incredulous stare.”

    On the other hand, the dismissal of a reductive theory need not be pre-emptive; one can have considered reasons. The strongest of these, in my opinion, is that if complex mechanisms of evaluation, feedback and control can be implemented without consciousness, then there is no need to add consciousness. And such systems can be implemented, and biology is held to be complex mechanism. The explanation starts to look ad hoc, made only because consciousness is there and needs to be explained; meanwhile other ideas about why it’s there are pre-emptively dismissed.

    But it also occurs to me that the notion of a fully automatic reaction in an animal requires more evidence than your sketch provides. Even a spider seems to have several choices: run, hide, or play dead; how would we ever know whether one of them is automatic? Even a consistent choice under controlled circumstances would be no more than the consistently best choice.

    Even if we grant these animal automata you’ve conjured, the theory about a fully automatic mechanism that comes under some external influence that needs to be aware of it leaves us with questions. Is this external influence not also a mechanism? Why does one mechanism need to be “aware” of the other in order to work? And if the external influence isn’t just another blind mechanism, what is it? Does it have some sort of freedom? “Consciousness” at this point seems less like an explanation than an incantation.

    All of which is to say that the dismissal of theories other than fundamental consciousness need not be pre-emptive. Adopting the incredulous stare is an easy way of presupposing the recipient to be lost in delusions and wishful thinking; yet overcoming the incredulous stare can be a deliberate choice, made with some difficulty after thoughtful consideration.

    1. I don’t think I’m advocating dismissal of fundamental consciousness due to simple bias (the incredulous stare). Harris herself, along with many others, identifies the biggest issue with it, a problem so severe that it has a special name, the “hard problem of consciousness”. Fundamental consciousness doesn’t solve it so much as embody it. It just says that consciousness just is, a fundamental fact we shouldn’t expect to understand. We could call it the “shut up and feel” stance.

      If Harris went through the reductive approaches and identified specific problems with them, then I wouldn’t say she was being preemptive. Maybe she does in her new audio book. (I can’t recall if she did in her previous book.) I see assertions that it’s something separate from reasoning with little if any justification. (She does reportedly interview physicalists in the book, who I’d imagine push back. Maybe I’ll check it out if/when the price comes down.)

      I’m a functionalist. I think if we built a system that was functionally equivalent to a conscious one, then the new system would itself be conscious. For me, the evidence of its consciousness would be in its abilities, at least when it’s fully functional. I don’t perceive we’re anywhere close yet. My understanding of fundamental consciousness is that there’s no way beyond that to prove or disprove its presence, which the proponents are generally upfront about.

      On animals and automatic reactions, I think if we see certain behavior universally within a particular species, then it’s fair to say it’s instinctive. The question then is do we see it being overridden in situations when it’s adaptive to do so. There are experiments which test for this type of value trade-off behavior. Based on my readings, it’s fairly straightforward to see in mammals and birds, but much more open to interpretation in other species.

      On the name “fundamental consciousness”, my only innovation here is to say it with that shorthand instead of having to repeatedly use a phrase like “the theory that consciousness is fundamental.” Harris uses the word “fundamental” repeatedly in the video and interviews. And of course Chalmers has a whole section of his book arguing for consciousness being fundamental (filled with talk of zombies, inverted qualia, and Mary’s room, all of which seem flawed to me).

      So if someone wants to give reasons for certain theories, or against other theories, I’m totally onboard with that discussion.

      1. The hard problem of consciousness is not “an issue” for panpsychism, if you mean it’s some kind of argument against the position. Apparently you do think that. I see this misunderstanding so often that I wonder whether some element of incommensurate language is at play.

        In the first few minutes of the video, Harris outlines the hard problem. It’s the standard first move for arguing why we should consider panpsychism, not why we should be skeptical of it. The hard problem is “how non-conscious matter somehow gets configured” into a “felt experience.” This is an issue for any position that starts with non-conscious matter and tries to get to felt experience. It is clearly not an issue for a position that starts with felt experience.

        You’re right that this is in some sense a “shut up and feel” stance, although this is not quite the same as quantum mechanics, where every theory defies human understanding with paradoxes or seeming impossibilities or absurdities. There’s nothing deeply paradoxical or logic-defying about a view that starts with felt experience. But there is this: it accepts, along with every other philosophy, that there is something rather than nothing; and as with every other sensible philosophy, it knows when to stop asking why there is something rather than nothing, and to accept that something just is. The only real difference with other views concerns what “just is.” And here panpsychism has the edge in argument, because it begins with the incontrovertible evidence of experience as what “just is,” while other theories start with what is experienced as what “just is.” These theories find themselves unravelling layer after layer of what is experienced, so that it turns out again and again to be not quite what we thought, while experience itself remains an elusive “hard problem,” having perversely been omitted at the outset from the world of “what is experienced.”

        In these discussion threads, two forms of panpsychism have been identified. One accepts that “what is experienced” just is, and tacks on experience as an “aspect” of it; this is Russellian dual-aspect monism. The other tries to understand what is experienced in terms of the “just is” of experience. This seems to be philosophical idealism, broadly speaking, although I’m still getting used to that way of using the term. I suspect the second form of panpsychism might differ from idealism in the end, but I haven’t put my finger on it.

        1. I do think the hard problem is an issue specific to fundamental consciousness, the theory that manifest consciousness is simple and irreducible, and that our introspective judgments about it are accurate and universal enough for us to know that. I acknowledge that views like panpsychism, dualism, or idealism are attempts to solve that problem. Do they succeed? I guess it depends on what we require from a solution. But I find accepting fallibility in our introspective judgments a more plausible answer.

          I agree that consciousness isn’t the same level of problem that quantum physics represents. But for me, that comes from rejecting fundamental consciousness. I used the “shut up” phrase to emphasize a difference in curiosity. I want to understand how measurement works, and similarly I want to understand how consciousness works. In both cases just telling me that it’s fundamental and there are no answers beyond that seems like an argument to accept the structural gaps I discussed in the previous post.

          I would say that the only incontrovertible evidence for experience is for manifest experience. Anything beyond that is a theory, a model, a set of assumptions that either help in accounting for that manifest experience or not. The problem I see with panpsychism, dualism, and idealism is that they’re not trying to explain manifest experience, but the more theory-laden fundamental experience.

          To an outsider like me, the variants of panpsychism and idealism do seem to blur into each other. But I’m aware that the proponents of these views see them as very different.

          1. A theory of fundamental consciousness does not require manifest consciousness to be simple and irreducible. This is what the “combination problem” is all about.

            You say the problem with panpsychism is that it’s “not trying to explain manifest experience,” but actually it is — just like every other theory of consciousness. Its chosen explanation is in terms of fundamental experience. You say that it’s trying to explain “the more theory-laden fundamental experience,” but it isn’t. It holds fundamental experience to be “just there,” beyond explanation — in the same way, and with the same basic justification, that any theory has to accept that something is “just there” and beyond explanation.

            From the panpsychist perspective, it’s all the other theories that want to explain fundamental experience (i.e. as not fundamental), and they get very theory-laden about it. I hope we can agree that everybody wants to explain manifest experience. If there is any finer-grained sort of experience below it, the other theories want to explain that too. They may of course deny any subtler or more primitive forms of experience at all, and attempt to explain manifest experience without reference to them. The point is that they won’t be content until they have explained experience in terms of something that is not experience. The urge to understand “how it works” is perfectly respectable, but the expectation that experience, manifest or otherwise, will ever be “explained” in terms that omit experience seems, from a panpsychist perspective, to be deeply confused.

            It’s good to rehearse these perennial differences, but I sometimes get the feeling that some deeper incompatibility of discourse causes the two sides to talk past one another. How else to explain their persistence, despite the best efforts of all concerned?

          2. It does seem like many philosophical disputes are people talking past each other. It’s like we’re both looking at the rabbit-duck illusion, or the dress color issue, and not able to see what the other is seeing. I do periodically attempt to try on views like panpsychism, property dualism, and idealism. But since I’m ultimately not convinced, I never know for sure whether I’m seeing them the way proponents do. Or if they’re seeing physicalism and functionalism the way I do.

            And that’s even before all the definitional issues, which I do my best to clarify. But the limits of language are such that even the clarifications can often have different interpretations. If there’s an easy solution for this, I haven’t found it. It seems like all we can do is make our case until a lightbulb comes on somewhere.

    2. “The strongest of these, in my opinion, is that if complex mechanisms of evaluation, feedback and control can be implemented without consciousness, then there is no need to add consciousness.”

      It is what I’ve been trying to say about “consciousness” in AI. If we can explain the operation of the AI without consciousness, why would we suppose it exists in the AI? That leads, however, to the argument that the consciousness we know is an illusion and the “real” consciousness isn’t anything other than the computation the brain or the AI is doing. In other words, we are all automata who fool ourselves into thinking we are thinking.

      1. You can’t fool something that isn’t conscious.

        But yeah, any proposal that adds consciousness to some machines but not others has some explaining to do. I’m comfortable with machines being conscious, but not because I think consciousness emerges out of unconscious stuff. My view is that the stuff of machinery, indeed of everything, innately supports consciousness.

        1. To me you have to have something the “consciousness” does if you say it is present. If whatever you are seeing or measuring is explainable without consciousness, then Occam’s Razor would prevail and it isn’t indicative of consciousness.

          “not because I think consciousness emerges out of unconscious stuff. My view is that the stuff of machinery, indeed of everything, innately supports consciousness.”

          I don’t see much resembling consciousness in a granite block. And it’s hard to imagine a granite block “supporting” consciousness either. It’s also difficult to think of a granite block as machinery. Of course, we can call every little particle and ion a bit and pretend some kind of computational machinery is at work if we want. But why do that? It doesn’t really help us explain what happens in brains. It only relieves the ontological angst of consciousness.

          1. The act of seeing or measuring itself is what consciousness does, and I have trouble explaining it without consciousness.

            The consciousness of rocks inevitably comes up. Obviously rocks and brains are organized very differently. If proto-consciousness accounts for higher configurations of consciousness, that doesn’t mean the result has to be the same for every configuration. Rock “brains,” so to speak, are incomparably boring.

            Incidentally, I think the role of brains in consciousness is overrated. There are those who would argue that the gut is conscious, or the liver, or the heart. Anyway they would be further along in the organization of proto-consciousness than a rock.

          2. To speak of rock consciousness or intelligence dilutes the definition of consciousness and intelligence to the nonsensical.

            Guts, hearts, and livers have neurons and are part of the nervous system. That’s a big difference between them and a rock. If consciousness evolved from excitable membranes in single cell organisms and oscillatory behaviors of groups of neurons in multi-cellular organisms, then claiming guts, hearts, and livers have proto-consciousness isn’t as big a leap as claiming a rock has some kind of consciousness.

          3. You’re right, the idea of proto-consciousness for hearts and livers is much easier to accept than proto-consciousness for rocks. Others here have been talking about whether Dr. Mike Levin is a panpsychist, and I think this is where he might draw a line: basically between living and dead matter. But that’s not panpsychism. With panpsychism it’s psyche all the way down, and then you have the problem of how to think about rocks.

            For the soft panpsychism (panpsychisticism?) of the living-dead divide, if there’s a line between the living and the dead, then it’s either in the organization of what would otherwise be dead, or in some mysterious difference in the underlying substance. Organization is the popular choice, but again, by Occam’s razor we have to wonder why felt experience is needed as an adjunct to the organization. As you’ve pointed out, that leads to the idea that consciousness is just along for the ride, fooled into thinking it has any significance at all. I hardly regard that as an “explanation.”

            But if we look to some difference of substance, we’re into vitalism, or maybe dualism. So the consistent panpsychist has to talk about rocks as made up of proto-consciousness. Now I could talk about a box of gears as made up of gears, without suggesting they do anything. It’s kind of like that with rocks. You need organization; the question is whether that’s enough, or whether something in the components has to support felt experience.

          4. Just jumping in to note that there’s a view in recent years called “biopsychism,” I think most recently promoted by Victor Lamme. Sometimes it’s taken to mean that only living things can be conscious. But its more recent meaning has been that all living things, and only living things, are conscious. It does raise the difficulty of edge cases. Are viruses conscious? What about prions? Viroids? Or even artificial life?

          5. I searched for the term and found your 2020 post.

            There are two main variants: that life is necessary and sufficient for consciousness (or felt experience), or that it’s necessary but something else is needed. Either way it shifts the focus to the difference between life and death. That could be a good thing — maybe.

            There are edge cases, and they suggest that we’re not even sure what the difference is. An interesting one that you didn’t mention is the difference between a dead creature and the same creature the instant before it died. Until we figure this out, I’m not sure we even know what we mean by “artificial life.”

          6. If my post is the top hit, that doesn’t bode well for it catching on.

            Yeah, artificial life, like artificial intelligence, hinges on what we mean by terms like “life” and “intelligence”, neither of which have a consensus. But for life, it seems like if it reproduces, maintains some kind of homeostasis, and evolves, then some will likely call it “living”. Of course, the difficulty is when all of that is happening in a simulation.

          7. Homeostasis, reproduction and evolution seem like stages of a responsiveness that works against entropy. Homeostasis could be a neutral word for “self-preservation” — one that avoids the connotations of “self.”

          8. I sometimes think of life as something like the stuff of a black hole’s accretion disk, a sort of backwash along entropic gradients. Intelligence may enable it to preserve its backwash beyond the original gradients it arose in.

          9. I guess it always comes down to whether preservation against the backwash effectively is the intelligence (a functional view), or whether it requires an intelligence that eventually comes from somewhere else to help things along (a spooky view, in my reading).

          10. I agree, if you draw a line between living and dead matter, it’s not panpsychism.

            I don’t think it’s a difference of substance either. The “substance” stuff is nonsense in my view.

            Consciousness evolved from excitable membranes in single cell organisms and oscillatory behaviors of groups of neurons in multi-cellular organisms, but it only begins to appear in a form we would recognize when brains grew larger as they needed to integrate multiple complex senses and develop a spatial-temporal model of the world.

          11. I would venture that consciousness evolves from pure responsiveness, which we find everywhere. But I would not assert this or any answer with complete confidence at this stage in the investigation.

          12. In I of the Vortex: From Neurons to Self (2001), Rodolfo R. Llinás proposed it arose from what he called “irritability” in single cell organisms. A 2014 paper by Cook, Carvalho, and Damasio explores a cellular basis for consciousness arising in animal cells from the membrane excitability that underpins complex behaviors and psychological phenomena. That’s basically the same as what Llinás is saying. They limited this to animal cells. But Arthur Reber and Frantisek Baluska argue for this “irritability” even in all cells, including those of bacteria and plants. The basis for “irritability” is controlled ion movements through membranes, which evolves to become the key part of what neurons do.

            So, the “responsiveness” of cells is produced by a specific type of physical phenomenon that involves charged particles. Also, keep in mind that most of the neurons in our brains are not responsive to the external world, they are responsive to other neurons. That may be related to how consciousness scales up from responsive cells.

          13. “Irritability” suggests responsiveness; did Llinás mean to do that? It leaves me wondering whether the irritability is produced by a physical phenomenon, coming into being because of it, or whether it’s an irritated response to a physical phenomenon.

          14. Yes, I think Llinás meant it as you suggest. Cook, Carvalho, and Damasio followed up on it. It is powered by ion movement across a membrane. Ion movement generates energy to trigger a reaction and can also power movement in flagella at the cellular level. So a common mechanism exists for detecting environmental changes and movement, which are the basis for responsiveness.

            The physics of this as it evolved into neurons, brains, and nervous systems is where to look for the emergence of consciousness in my view.

          15. The physics of it all is fascinating, but it tends to obscure the original question of what consciousness adds that really helps anything — as if it were independent of the physics, and making some separate contribution.

            It’s the same point I’ve just made to Mike. Going back and forth about it is fun, but I really should be working on my next blog post, which is languishing. (Actually the main problem is that I can’t tear myself away from the news these days.)

  5. Gonna give a little more pushback than normal. Have you read Michael Levin’s piece: https://www.noemamag.com/living-things-are-not-machines-also-they-totally-are/?utm_source=noemabluesky&utm_medium=noemasocial? If not, you should. His point is that certain concepts (life, intelligence, consciousness) are fuzzy and change depending on the context, which means a given definition is wrong in one context, but fine in another. This restates your oft-repeated line about consciousness being in the eye of the beholder.

    So there is a perfectly valid sense in which the vine is using intelligence to decide where to grow. The difficulty in understanding this position is that we come laden with our intuitions based on how human consciousness works. We want to apply words like “decision” and “volition” in the same way we apply them to human actions, which is the wrong context if we’re talking about plants, or roombas.

    And I’m a bit confused with your example of “fear”. You say if the automatic flight response happens in the absence of a competing response, that’s not fear. But if the same response happens in the presence of a competing, albeit losing, alternative response, that is fear. I would suggest to you that the latter case describes a “decision”, but the “fear” is the systemic change in the system (increased blood pressure, adrenaline, suppression of alternative activities like feeding, etc.) that ultimately leads to the flight, so, potentially there in the first case. The typical intuition as to the “feeling” of fear is simply a reference to the set of interoceptive perceptions resulting from that change. (And I noticed you slipped in “prediction” in your definition of “feeling”, which again is a feature of human-level consciousness/unitrackers.)

    So I’m willing to cut Harris and panpsychists some slack if they want to say any physical interaction which results in a change to the interactant of interest counts as a “feeling”, as long (as you point out) as they apply that logic to everything (roombas). The hard problem comes from trying to understand the first person perspective of such a system. For myself, I’d say “first person perspective” is not a useful concept until the interaction in question involves the use of information for a purpose.

    whatcha think?

    *

    1. I did read that Levin piece. My overall reaction is best encapsulated by this old xkcd. Based on everything I’ve read from Levin, with his frequent talk of biological algorithms, he’s a mechanist. He just doesn’t seem to want to admit it here. As he notes, people arguing against biological machinery are usually arguing against a strawman, a notion that biology is equivalent to contemporary machinery, rather than the usual stance, that it’s profoundly complex machinery.

      On fear, I said that if the animal can override that impulse, then we have evidence for the feeling. If it just reacts, then we don’t. A case where it has multiple reactions with one winning because it’s stronger isn’t really the same thing as overriding it. But it does get at why an ability to override would evolve, to break the ties or conflicts when they arise.

      I suppose someone could argue that when an organism predicts (or anticipates or expects, if “predict” is objectionable) alternative scenarios, each of which typically triggers its own related impulses, that it’s still certain impulses winning over others. But it seems like a lot more work is happening in the more sophisticated version.

      I agree about the eye of the beholder thing. If someone wants to define “feeling” as a system with impulses, then fair enough, as long as they’re consistent (which too often isn’t the case). I do think it dilutes the meaning of the word, further deepening the terminological morass which pervades this field.

      The problem with acting with purpose is, whose purpose? One the agent comprehends and chooses? Or one they’re competent toward due to evolution or engineering, in the sense of what Dennett calls “competence without comprehension”? It seems like we need distinct terms for these different types of goals.

      1. On Levin, I don’t think he would disagree with anything you said. His, and so my, point is that the meaning of a word like consciousness changes depending on the context. So the consciousness of a worm is not the same as the consciousness of a human, but you can still be definite if you make the context clear.

        *

        [I have more to say, but I just spent thirty minutes on the next paragraph on fear, pressed space too many times, hit the delete button, and the entire paragraph went away. This has happened before. I need a break]

        1. On Levin, agreed. Most of my issues with him are terminological. I have no problem with context specific language, as long as we’re clear about that context. And I think being clear requires frequently reminding the reader (or listener) about it.

          [Sorry to hear that. I’ve had the block editor bite me a few times. Sometimes CTRL-Z gets what you lost back. Sometimes. Although IIRC you use an iPad. I stopped using mine for composing anything even remotely lengthy after similar events.]

      2. “Based on everything I’ve read from Levin, with his frequent talk of biological algorithms, he’s a mechanist. He just doesn’t seem to want to admit it here.”

        Funny. I agree. That was a good article. I’m not sure he is playing both sides. He speaks and writes a lot, so his ideas are out there. He wants to focus on certain intelligences of very simple organisms. And he self-describes as a panpsychist, so we should probably take him at his word. But I would agree he has as mechanistic a perspective as you can find. Maybe he will flesh out the consciousness and higher cognitive stuff in the future.

        1. I agree if he wants to call himself a panpsychist that we should respect that. But there are different kinds of panpsychists. Some, like Levin, are really just using definitions that make it compatible with naturalism. I usually call them “naturalist panpsychists”.

          I don’t perceive him as a panpsychist in the sense of seeing something extra at all scales, when we’re talking about consciousness or proto-consciousness as a non-physical property of all matter. But who knows. Maybe he’ll say something tomorrow that puts him in this category.

          1. It’s not hard to be a naturalist panpsychist. I mean, I’m a naturalist panprotopsychist. I’d be a panpsychist if you simply change the requirement to be information processing but not necessarily for a goal. All the standard parts (aboutness, what it’s likeness) are there (if you squint and look sideways).

            *

          2. It’s all in the definitions. But under the ones of a naturalist panpsychist, is anyone not a panpsychist?

            Of course, even under the more conventional panpsychism, it’s not clear there’s any empirical difference between it and eliminativism. It seems like one just has an undetectable metaphysical glaze added.

    2. BTW, Levin has a WordPress site.

      https://wordpress.com/reader/feeds/144936555

      If you cut off the tail of a chameleon, it will grow back. Living organisms reproduce and evolve – they change by themselves and by environmental forces. Each copy of an organism is unique because it develops from genetic and epigenetic factors. From the uniqueness comes the ability to evolve.

      These are pretty big differences from what we normally call machines.

  6. I’m sorry to turn sniffy and suspicious but this lady appears to be a writer and publicist with no background or education in her chosen topics. Perhaps she has studied well on her own, perhaps she has hidden depths I have not been able to discover. But while her premise sounds interesting, it seems to me one may as well resort to reading and believing such luminaries as Deepak Chopra. While I like the sound of some of her ideas, she appears to have no firm basis in reality.

    1. I think you’re right to be suspicious. It seems like the reason she gets attention is she’s Sam Harris’ wife. It’s not otherwise clear to me she’d be any more notable than you or I. But it has gained her an audience, which is why responding to her felt worth doing.

      1. She’s a writer of popularizing books, like Philip Ball or James Gleick. Would you tolerate her if she were popularizing your own theory of consciousness, or would you still suggest she doesn’t know what she’s talking about?
