The semantic indeterminacy of sentience

I’m currently reading Jonathan Birch’s The Edge of Sentience, a book focusing on the boundary between systems that can feel pleasure or pain, and those that can’t, and the related ethics.

While this is a subject I’m interested in, I’m leery of the activism the animal portions of it attract. I have nothing in particular against that activism, but mixing it with science seems to risk questionable results. This is an area where there are often stunning headlines. However, I sometimes find that when I follow the citation trail and dig up the actual study, the results are more nuanced and open to interpretation than the headlines imply. Since I don’t have time to do that with every study that gets publicized, I’ve become cautious in accepting the claims in this area.

Birch’s book has an activist feel to it. But he makes clear at the beginning that he’s interested in an evidence-based approach. And in an initial review of the science and philosophy in this area, he admits that there is currently a tremendous amount of uncertainty, and a number of “zones of reasonable disagreement”.

The first zone of disagreement starts with how to even define “sentience”. After dismissing very liberal definitions, such as the ability to respond adaptively, Birch covers the concept of affects, which are usually characterized as having a valence (an evaluation of whether something is good or bad) and an arousal dimension. After some reasoning about drugs that could target either the valence or arousal aspect individually, he concludes that valence is the crucial one, and settles on a definition of sentience as the capacity to have valenced experiences.

Of course, that immediately leads to the zone of disagreement on “experience”, which leads to a review of the philosophy and science of consciousness. Birch discusses how an epiphenomenal view of consciousness, the view that experience makes no difference to behavior, might make the question impossible to study. But since evolution can only select for things that make some difference, epiphenomenalism seems unlikely.

Among materialist views of consciousness, Birch notes a key distinction: whether consciousness is a single unified natural kind, or two or more kinds. He notes that people like Daniel Dennett seem to be in the camp of rejecting a single kind, often characterizing it in an illusionist or eliminativist fashion, although Birch feels like “many kinds” may be a better label. (This resonates with my own view, along with the semanticism of Jacy Reese Anthis or the semantic indeterminism of David Papineau.)

Proponents of a particular scientific theory are often operating under the single-kind view, while a many-kinds view tends toward a pluralistic stance: many of these theories may be addressing different aspects of the same complex reality. Birch uses an analogy of people in a town working to understand “what it’s like around here”, with some focusing on the economics, others the social aspects, ecology, or other areas. But rather than recognize they’re all working on different aspects of the problem, they see each other’s theories as bitter rivals.

Birch also ecumenically recognizes “radical alternatives”, such as interactionist dualism, panpsychism, biopsychism, and IIT (integrated information theory), as being in the “zone of reasonable disagreement”. Each of these views has its own challenges, such as identifying where the interaction happens between the mental and physical in interactionist dualism, the combination problem in panpsychism, or the metaphysical assumptions of IIT (which Birch characterizes as idealist in nature) and how to test them.

Another question is whether there can be edge cases of sentience or consciousness. In evolutionary history, is sentience a sharp “lights come on” type development, or a gradual one? Are there creatures where the question of whether they’re sentient has no fact of the matter answer?

If it is gradual, are we talking about a sharp start to sentience with gradually enriched contents (shallow gradualism) or a gradual development of sentience itself (deep gradualism)? Deep gradualism seems more likely under some views (such as many-kinds, global workspace, or IIT) than others (such as dualism or panpsychism).

Birch reviews some of the philosophical literature that stresses how hard it is to sympathetically imagine an edge case of consciousness, and that tries to use that difficulty as a reason to dismiss the conceivability of such cases. But Birch concludes that this isn’t a good reason. Just because we struggle to imagine something doesn’t mean it isn’t possible. (I also think people have a tendency to help themselves to whatever minimalist concept of consciousness they can find in any posited edge case and declare the experience is therefore wholly conscious.)

Birch admits that both many-kinds materialism and deep gradualism complicate his task, and that he would like them to be false. Since I tend to think both of these views are true, I’m going to be interested to see how he treats them as the book progresses.

Birch also discusses the traditional philosophical theories of ethics such as utilitarianism and neo-Kantianism, concluding that they’re compatible with the view he calls “sentientism”, that all sentient systems deserve moral consideration. He also discusses alternate views, such as eco-centric ones, as well as the views of some of the major religions. Most he can see as compatible with sentientism, although he admits that it’s a rough compatibility in some cases.

One interesting view is a consciousness-without-valence one, which could become an issue with artificial intelligence. Consider a PV (philosophical Vulcan). PVs are different from Star Trek Vulcans, who merely suppress their emotions. A PV has no emotions at all, and it could be argued, no sentience. But they are conscious. Are they worthy of moral consideration?

Here I think we see an issue in Birch’s valenced-experience definition of sentience. He admits that a PV would likely have preferences about outcomes, and so would reason about those preferences in relation to their perceptions. He makes a distinction between this and “valence”, which I think reveals that he’s unwittingly sneaking more of the affect concept into his notion of valence, such as arousal and motivational impulses. But he concludes that the PVs have found an alternate path to moral significance, so it doesn’t seem to matter. However, that seems to put him in the same camp as David Chalmers, who uses the PV concept to argue that it’s consciousness itself rather than sentience that is the crucial issue.

Which brings us back to the possibility of consciousness and sentience being semantically indeterminate, which would seem to make the ethics around them also indeterminate. I’m not a moral realist, so this holds no dilemma for me. But it obviously does for Birch’s project. As I noted above, I’ll be curious to see how he deals with it in the rest of the book. (I’ve currently only read the first quarter or so.)

What do you think about Birch’s overall project? Or about my conclusions of semantic indeterminacy? Are there reasons to think the edge of sentience is sharper than I’m imagining?

73 thoughts on “The semantic indeterminacy of sentience”

  1. [thanks for reading/reviewing this, so I don’t have to, prolly]

    I think Birch’s project is extremely important right now, not so much because it explains consciousness/sentience, but because it starts to address the basis of morality, and highlights the problem of using the common notion that morality derives from either consciousness or valence-sentience (the ability to suffer). The philosophical Vulcan is an excellent vehicle for this discussion. What is it that is common to the pVulcan and the boiling lobster? My answer: goals (in the sense of possessing one or more systems which recognize a discrepancy from the current state of the world and a “goal state”, and respond by taking action designed/selected to move the state of the world toward the goal state). The pVulcan has the goal of continued survival, as well as (presumed) internal goals of satiation (not being hungry) and pain avoidance, plus various structural(?) sub-goals such as finding a food dispenser when hungry, etc.

    As for the lobster, as Bentham said, the question is, Can they suffer? From my point of view, to suffer has a strict definition:

    A system suffers if

    1. it has more than one goal state,
    2. it perceives a discrepancy from the goal state and responds accordingly, and
    3. the response negatively impacts other goals while not improving on the target goal.

    Under this definition, having a pain is not necessarily suffering. Pain is valenced in that it generates systemic effects via hormones, such as adrenaline, which can negatively affect other goals (by preventing attention to those goals), but if you step on something which causes a pain in your foot and you move your foot and thereby stop the pain, you have benefited from the pain. With chronic pain, on the other hand, you have the negative effects without the benefit of the pain going away, and can be said to be suffering.
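    To make the three conditions concrete, here is a toy Python sketch of the definition above (the `Goal` and `is_suffering` names are purely illustrative assumptions of mine, not anything from the comment or the book):

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    discrepancy: float  # how far the world currently is from this goal's target state

def is_suffering(goals, target, response_effects):
    """Toy version of the three-condition definition of suffering.

    goals: list of Goal objects the system holds
    target: the goal whose discrepancy triggered the response
    response_effects: dict mapping goal name -> change in discrepancy caused
                      by the response (negative = improvement for that goal)
    """
    if len(goals) <= 1:                       # condition 1: more than one goal
        return False
    helps_target = response_effects.get(target.name, 0) < 0
    harms_others = any(
        response_effects.get(g.name, 0) > 0   # condition 3: another goal worsened
        for g in goals if g.name != target.name
    )
    # condition 2 (a response occurred) is implicit in having response_effects
    return harms_others and not helps_target

goals = [Goal("avoid_pain", 1.0), Goal("attend_to_task", 0.0)]
# Acute pain: moving the foot removes the pain, so the system benefits.
acute = is_suffering(goals, goals[0], {"avoid_pain": -1.0, "attend_to_task": +0.5})
# Chronic pain: the response doesn't reduce the pain, yet still disrupts attention.
chronic = is_suffering(goals, goals[0], {"avoid_pain": 0.0, "attend_to_task": +0.5})
```

    In this sketch, acute pain that successfully removes the discrepancy doesn’t count as suffering, while chronic pain does, matching the foot example above.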

    Final comment: having goals can be sufficient for moral consideration, but that’s not the end of the story. Each goal, both internal and belonging to some external entity, gets assigned a relative value, and that value will be related to things like intelligence. Thus, we tend to place more value on the goals of more intelligent creatures, say, mammals, versus less intelligent creatures, like lobsters. Morality is about which actions to take given which goals might be affected and the relative value of those goals in light of the likelihood of those effects. These considerations explain why we might swerve a car (incurring a small risk) to avoid hitting a dog while not doing so to avoid hitting insects. Similarly, it can explain why we might ruin a $200 pair of shoes to save a drowning child (probability of having the valuable effect is high), but not make a $200 donation to “Save the Children” (probability of having an effect uncertain).
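    The shoes-versus-donation comparison amounts to an expected-value calculation. A minimal sketch of that arithmetic, where the function name and the numbers are my own illustrative assumptions rather than anything from the comment:

```python
def expected_moral_value(value_of_goal, probability_of_effect, cost):
    """Toy expected-value comparison: benefit weighted by likelihood, minus cost."""
    return value_of_goal * probability_of_effect - cost

# Ruining $200 shoes to save a drowning child: the effect is near-certain.
save_child = expected_moral_value(value_of_goal=1_000_000, probability_of_effect=0.95, cost=200)
# A $200 donation whose effect on any particular child is highly uncertain.
donation = expected_moral_value(value_of_goal=1_000_000, probability_of_effect=0.0001, cost=200)
```

    With a near-certain effect the small cost is easily justified, while with a tiny probability of effect the same cost dominates the calculation.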

    *


    1. Right. I think that’s a valid analysis of the pVulcan scenario. I didn’t get into it in the post, but I personally think goals are too broad a standard. It gets us into having to think about the goals of bacteria infecting a dog’s wound (an example Birch uses in the book). Should we be concerned about the bacteria’s goals when deciding whether to give the dog antibiotics?

      Your answer sounds like it would be because the dog is more intelligent, we value its goals more. Maybe. Or it’s that we can see the dog and intuitively empathize with it in a way we can’t for the bacteria, and the rest are rationalizations. That’s the problem with trying to find any rational basis for moral intuitions. Those intuitions aren’t particularly consistent.

      But my own moral intuitions require more of the affect to trigger concern, the arousal and motivation aspects, along with at least a minimal reasoning ability to make the affect a feeling rather than just a reflex arc. Otherwise, it seems like we have to consider the goals of the laptop I’m typing this on, or, if you want to invoke the intelligence rule, of Skynet’s goal of getting rid of all those pesky humans.


      1. Admittedly, this understanding of the goals-to-morality connection requires a long discussion, but now is the time to start.

        The intuitions we get from nature (selection) work pretty well as rules of thumb, for good reasons. But we’re entering a new era with AI. Now we have intelligent agents that don’t necessarily have goals, or have goals which we don’t have to put significant value on. Also, there are some things that have value without having goals, like natural rock formations. The key is the combination of goals and the value we place on them. Bacteria have goals, but we don’t place any significant value on the goals of individual bacteria. (Although, consider if it was an entire, possibly experimentally valuable species of bacteria, and our treatment would wipe out that entire species …). And intelligence is not the only basis of value, but it does tend to increase perceived value, as our intuitions show.

        And as for Skynet, some goals have a negative value. Some goals, like a foreign country’s intent to take over part or all of your country, might be so negative that you would sacrifice all of your own goals to prevent it.

        *


        1. So it sounds like rather than goals, the real criterion you’re arguing for is value, as in how well the other’s goals match with our values? But the question is where do those values come from? Eventually it seems like we just hit brute unreasoned values that we just hold due to a mix of evolutionary programming and learned experiences.

          Or am I missing something?


          1. Seems like you mostly have it. While goals can be objectively determined separately, moral considerations necessarily involve values, but values only make sense with respect to goals.

            As for where values come from, originally they are naturally selected, then learned, and then reasoned, as those capacities become available. Problems come when we rely on the selected and learned despite the (well) reasoned.

            *


          2. The thing is, even the reasoned values ultimately exist on a foundation of unreasoned ones. Think about any reasoned value you hold. Now think about what the reasons are for it. Maybe other reasoned values? But what are the reasons for those? If you follow the layers, you eventually hit something you just value, like survival, health, cuteness, freedom from pain, etc.


          3. Actually, it goes past life, all the way down to entropy. Things that exist longer are selected by entropy. Entropy itself tends to break things up, but things that last longer tend to accumulate, thus being selected. Things that last longer tend to interact with other things leading to new things. These new things may (or may not) last yet longer. (This is the essence of Assembly Theory, I think.)

            Eventually you get to things which catalyze processes, including the process of making copies of themselves, but also possibly other new things. At some point you can get to two things which catalyze each other. Terrence Deacon is working on this. Here is the genesis of morality: cooperation.

            There are various considerations when developing new things. One is mobility. In order to interact, two things have to find each other. Another is containment. If you create new things, but they just drift away, they are less likely to find each other.

            And then you get into life, where you have to balance cooperation with competition. Cooperation allows you to scale up to multicellular groups, and then social groups. But within a group of cooperators the non-cooperators have the competitive advantage. So there’s all kinds of dynamics.

            *

            [bet you weren’t expecting to go all the way to the bottom. 🙂 ]


          4. Afraid I’m not following the logical chain here. Certainly everything begins with the laws of physics and initial state of the universe, but looking for moral foundations there seems to require a lot more dots be connected.

            But I haven’t read Walker’s book yet. I’m actually waiting to see if it has any staying power before sinking in the effort. Have you read it?


          5. This is mostly out of my own head. I mention Assembly Theory and Deacon mostly because they seem to support what I’m saying. I have read Walker’s book, and it’s an interesting read, but I didn’t really get anything from it that you can’t get from her various papers and videos. (I just (re)posted a couple good, brief, Deacon video clips on twitter, but here are the links: https://youtu.be/7YdqfEv9ecU?si=NMWNZuiqAz0XBnO6 and https://youtu.be/KH5p7uFMr3c?si=24xF6GAGrhZmke_c)

            I guess the point I’m trying to make is that there is an objective explanation of morality, and that explanation goes all the way down to physics. This explanation will not necessarily say what the best moral choices are, although it does suggest why cooperation is almost always the best choice.

            Let me know if you have specific questions.

            *


          6. I like his distinction between information as physicists discuss it without aboutness, and the fact that in biology, the aboutness becomes important. I would put it that biology depends on the semantics of information, the meaning, (to your point on Twitter, the mutual information) in a way that other physical processes don’t.

            But I don’t see the dots connected enough with morality to say morality is objective. As you note, it leaves a lot of possible solutions. My way of thinking about it is that morality is a social technology we develop together. We should no more expect to find it validated in nature than we would the design of a mouse trap, or a gas engine. That doesn’t mean those designs aren’t based on the laws of physics, just that the laws don’t necessitate them, at least not in any way our brains are likely to be able to trace.


      2. I despair of finding a rational basis for morality. It is a human cultural artifact that we try to codify in law. That means it will never be consistent, always be evolving, and frequently subject to disagreement. We can fight wars over it.


        1. I spent a lot of time looking for it some years ago. Eventually I decided it was a lost cause. My current take is that morality is a social tool we construct together. In that sense, it’s a sort of technology. So we shouldn’t expect to see it validated in nature any more than we’d expect to see the design of the wheel or a mousetrap.


        2. I partially agree with you and Mike. As you say, morality is a “human cultural artifact” and as Mike says it’s “a social tool we construct together.” However, I disagree that there is no rational basis for morality. Perhaps—to be more precise—I should say an understandable basis. So, I fail to see a reason for despair. An economy, for example, is a human cultural artifact or social tool that we construct together, yet it is real and more or less understandable. I think the same goes for morality. And we seem to do a fairly good job in enacting some important moral principles into our legal codes, which have substantial consistency from code to code. Moreover, the major ethical principles followed by most of the various world religions and ethical systems have substantial overlap—with disagreements only at the margins. I submit that the real problem we have with morality is the lingering intellectual legacy of logical positivism and emotivism, beginning early with the is/ought muddle claimed by David Hume on up to the full-blown flowering of emotivism in the 20th century with, for example, A. J. Ayer. We are slowly starting to reverse that nonsense. So, please don’t despair my friend.


          1. “Understandable” doesn’t necessarily mean rational. And I’m not exactly sure the economy is understandable either – the ups and downs of the stock market in anticipation of Fed actions or Apple earnings, oil prices jumping every time a bomb drops in the Middle East, manipulation of prices by monopolies and government intervention. Look at the history of DJT, which is up far in excess of any real valuation, probably based on Wall Street traders anticipating a Trump victory or oligarchs buying it as an indirect method of bribing Trump.

            I agree there’s overlap in religions and people could probably be pushed to a consensus. So, a rational process might produce something but, even then, I doubt all people would agree with everything and I’m not sure the result would qualify as “rational.” But even if it did, that wouldn’t mean there is a rational basis for it, only that a rational process was used to arrive at it.

            I don’t know. Maybe I am too pessimistic.


          2. In the firm hope that perhaps we can avoid talking past each other, I will try again. Yes, indeed, “understandable” doesn’t necessarily mean “rational.” You are quite correct! And you see that as perhaps a bad thing or at least a more squishy concept. In short, understandable is more vague than being rational. A rational explanation, you assume, is what we should be shooting for. Ironically, I mean the opposite. Frankly I was expecting that sort of misunderstanding when I wrote that. I should have accounted for that likely misunderstanding. It comes directly from a difference in our education, experience, and (as the Germans would say) Weltanschauung or world view. The confusion is my fault totally.

            I was trying to get away from the concept of rationality as it is conventionally understood because I think it’s limiting. Modernity redefined the concept of rationality at the beginning of the Enlightenment. In our modern use it means an instrumental rationality. More importantly, truth claims are justified by an adherence to some form of scientific (i.e., rational) methodology. All that is a very long way to say I screwed up in my ability to communicate but that I really had a thought I wanted to express.

            I was trying to respond to your despair of not finding a rational basis for morality. I assumed you meant rational in the conventional sense I described. And I was trying to say that is OK. Morality (like an economy) is, as you say, a human cultural artifact and like the economy it’s a real thing. Moreover it’s not based in mere subjective opinion or feeling. And my very shorthand argument for that is that we generally agree on basic principles from culture to culture and historical period to historical period. And, finally, it can be understood through careful study as can an economy or other so-called cultural artifacts. I will stop there as you probably have fallen asleep by now.


  2. I’m not completely clear on the difference here between affect and valence. I’m guessing that affect involves the raw ability to notice or be aware of something happening, and valence involves the ability to care one way or the other about it. If that’s right, then I’d have to agree with Chalmers that affect is the relevant dimension for consciousness. But I’d also venture that valence is the relevant dimension for ethics. If something doesn’t care whether it’s in pain or about to die, why should we be concerned for it?

    Anyway the difference feels like a false dichotomy. What would be the point of being able to notice, if what one notices has no significance to one? This has an evolutionary aspect, I suppose, in that there would be no reason for affect to evolve in the absence of valence. It also seems improbable that something could be in pain and not care; that seems like an abuse of the concept.


    1. This is an area rife with definitional issues. But I see valence as an evaluation about whether something is good or bad. It’s kind of an automatic preference.

      An affect is usually defined as the experience of that preference, along with other automatic reactions such as levels of arousal and motivational impulses. In that sense, a valence is a part of the affect, but not the entire thing. I actually think of an affect as the feeling of an automatic reaction that, by itself, is just a reflex or habitual reaction. (Confusingly, there are people who use “affect” to refer to just the automatic reaction, or who conflate the two meanings.)

      That’s why Birch doesn’t just describe sentience as valence. It has to include the experience component. But it seems like he could have just defined it as having affects. I’m suspecting he has reasons that will involve what he wants to call “sentient” later in the book, but that might be me being too jaded.


      1. That definition of affect is more in line with my understanding of the term. But it leaves open the idea of an awareness that does not care one way or the other, that is, which does not have either a preference, nor a response to that preference. Is such an awareness possible, for example in an AI?

        The idea of valence and affect in the absence of raw awareness doesn’t make much sense to me. Talking about them in this way perhaps evades direct questions about consciousness.

        Separating valence from affect also raises questions of what exactly “good” and “bad” mean. Is a thermometer’s assessment of the current temperature, as opposed to the set temperature, a valence? I don’t see why not, if valence can be separate from affect. If the idea of good and bad involves liking or disliking something, as far as I can see that qualifies as arousal.


        1. On your first question, I don’t think it is possible. Even the device you’re using right now has preferences, albeit ones put in by a programmer. An awareness is always aware with a purpose, whether it’s finding food and avoiding predators, or simply, conceivably, a security system recognizing by face whether someone is authorized or not.

          It sounds like you require awareness to consider a preference to be a valence. Which fits with your next point that separating it from the overall affect strips it of meaning. This is a definitional matter, so I’ll just note that Birch, since he uses the phrase “valenced experience”, seems to be using them in a manner where they can be separate. I do agree that liking or disliking implies a full affect.

          For “good” and “bad”, it always seems like something is good or bad for some goal or value, which may be survival, flourishing, tranquility, maintaining homeostasis, or in the thermometer’s case, maintaining a minimum or maximum temperature.
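          A minimal sketch of that “good or bad for some goal” idea, treating valence as a signed discrepancy from a set point, thermostat-style (my own toy framing, not Birch’s, with illustrative names and numbers):

```python
def valence(measured, set_point, tolerance=0.5):
    """Toy thermostat-style valence: how bad the current state is for the goal.

    Returns 0.0 when the measured value is within tolerance of the set point
    ("good" for the goal), and a negative number proportional to the
    discrepancy otherwise ("bad" for the goal).
    """
    error = abs(measured - set_point)
    return 0.0 if error <= tolerance else -error

room_ok = valence(measured=21.0, set_point=21.2)   # within tolerance of the goal
room_bad = valence(measured=17.0, set_point=21.0)  # well below the set point
```

          Nothing here implies awareness, which is the point of contention above: a bare good-for-the-goal signal can exist without anything experiencing it.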


          1. I can imagine a security system just recognizing a face, full stop. The programmer surely has a purpose in recognizing faces, but it doesn’t mean the machine has the same purpose, even by proxy. Closer to home, I’m aware of a lot of red leaves outside, but I have no purpose in this awareness (although based on your previous comments about red, I think you might argue otherwise).

            On the questions about morality being raised here, I have less to say. When we speak of a good knife we are talking about its usefulness for a goal such as cutting vegetables, but when we speak of a good person, we may not have in mind, for example, their mercenary skills.


          2. There are purposes you have as a person, and there are reasons why your visual system evolved to notice certain things. When I talk about the “purpose” of perceiving red, I’m talking in the second sense. For you the person, it may or may not play into your person-level purposes at the moment, but the neural circuitry doesn’t know that, so it just does its thing, like noticing the redness of the leaves.

            On good and bad, right. This gets into whether anything or anyone is ever just intrinsically good or bad. I’m not a moral realist, so my answer is no. Someone can be a good or bad citizen, partner, parent, sibling, friend, etc. I think we often shorthand that by just saying they’re good or bad, but the “for” seems implicit.


          3. The ability to distinguish colours undoubtedly has evolutionary advantages. The idea that red has high salience because, wherever it appears, it is associated with some special evolutionary advantage, strikes me as debatable. But that’s another discussion. Whatever the evolutionary back story, we can be aware of something, indeed all kinds of things, without any associated purpose, and without caring one way or the other about them. The room we are in is full of such things.


      2. Kent Berridge draws an empirically-based distinction between two systems that he calls liking vs wanting: https://pmc.ncbi.nlm.nih.gov/articles/PMC5171207/

        In your terminology, valence would be the liking part, and affect (emotional response, is how I think of it) includes the wanting. I think Berridge’s distinction is clear. I also was about 1% surprised and 99% “no duh” upon learning of it – because this distinction turns up routinely in my day to day subjective experience, too. It’s not surprising that the brain basis for these functions is grossly different.

        I don’t think the word “sentience” in English clearly picks out liking vs wanting vs both, but there’s nothing wrong with stipulating a particular way of using the word for the course of one book. I agree with Birch that liking is morally more important than wanting, but also that the wants of a pVulcan would gain moral significance by another route. Morality is multi-dimensional – that doesn’t seem super controversial outside of philosophy (which, alas, doesn’t stop utilitarians or Kantians from trying to reduce it to one fundamental value).


        1. Not sure if Paul’s comment about Berridge’s distinction was for me or Mike, but whatever terminology we use, there are at least three things to be covered. I can notice that my hand is in water (without caring one way or the other); I can like the feeling when my hand is in water (enjoying a valence); or I can want my hand to be in water (experiencing a desire, or perhaps a need).

          FWIW, Berridge’s work seems to suggest that the last is not necessarily a response to a valence, or at least not the valence involved in liking the sensation. I might want to put my hand in water because my hand is burning.


        2. I had to read the abstract to understand his distinction. So “wanting” is the incentive and “liking” is the reward mechanism. Or at least that’s the way it comes across to me. The reference to drug addiction clarified.

          It’s interesting how pleasurable it can be to alleviate the symptoms from drug withdrawal. Likewise, a piece of food tastes a lot better when you’re starving than already full. (Of course, with our hyper processed foods these days, food can be hard to resist even when we’re stuffed.) So to me, they’re two sides of the same coin, the stick and the honey.

          But I’m not sure a p-Vulcan would actually experience either in the sense we do. The motivational intensity, and resulting wave of relief, just seem like they would be absent. It would experience motivation, but only in a calm way, unless the implied action required ramping up intensity. But it seems like the two would be separable for them in a way they aren’t for us.


          1. I would say that liking generally leads to wanting right away, but I wouldn’t be surprised if this can be modulated by depression, or certain drugs. But in the other direction, wanting can totally precede liking by a long duration. Which is very good for animals that have to reproduce sexually: they might well die out otherwise.


    1. There’s a section on sentience and the human brain, which I’ve skipped to. (Over the policy stuff, which I’m not really interested in.) It includes a chapter on fetuses and embryos, which I haven’t reached yet. Although based on remarks he makes early in the book, I think he’s going to argue it happens early but not in a way that should affect abortion policy.

      That said, based on other stuff I’ve read, I tend to think there’s no reasonable concern until the third trimester. The cortical hemispheres aren’t firing in unison until a few weeks past that point. And there aren’t discernible REM sleep cycles until weeks 28-31. But maybe he’ll change my mind.

      1. The polls I have been able to find show the majority of us accepting of legally allowing abortions during the first trimester but not during the third. (What policy do you have during the second trimester?) I think it is because people sense sentience during the third but not during the first. The next step is to apply that to animals other than humans. That is when cognitive dissonance comes in. Hard questions to resolve.

        1. New data can always change it, but right now I’m not worried about the second trimester. Even in the third trimester, it’s more an abundance of caution than anything else, one that isn’t enough if the mother’s health is in jeopardy.

  3. Since we know people can be born without the ability to feel pain, would that make them non-sentient?

    I think there is a generalizable process of an organism reacting to its external environment and internal states to restore homeostasis. Some people want to call this “sentience.” (Personally I tend to use the term synonymously with “consciousness.”) At any rate, that sort of “sentience” wouldn’t necessarily require an ability to feel pleasure or pain in my view. In fact, almost any biological organism and many non-biological entities could qualify for “sentience.” Pleasure or pain would be useful facilitators for more complex organisms but not requirements for even that sort of “sentience.” Simple organisms wouldn’t have enough of a brain (or any brain) to feel pleasure or pain.

    In my Fragmented Consciousness view, I expect we will find in-between organisms that have flashes of consciousness. Consciousness is the sharing of gestalt-like(-lite) information across a brain. A small brain may need to share sporadically. A large brain like ours is sharing all the time we are awake. That gives an illusion of tight integration and appearance of unity and simultaneity.

    The Active Inference AI folks use the term “sentience” for what they think they are going to achieve, but I think they tend to evade the question of whether their “sentience” equates to what others think of as consciousness. It seems to me more like the homeostatic process I mentioned above.

    1. For the person born without pain, I think you’d have to remove all affects for them to be considered non-sentient. There’s a condition called akinetic mutism which might come closest to it. It sounds like a pretty debilitating condition. (Not that non-pain feelers are in good shape either. My understanding is their life expectancy isn’t great.)

      Antonio Damasio often ties feelings to homeostasis. So affects like hunger, thirst, and similar sensations are related to our homeostatic state. And alleviating any deviations from ideal end up feeling pleasurable. Of course, when we get to social emotions, things get much more complex. But that’s homeostasis affecting a conscious system. A single celled organism reacting to restore its homeostasis, by most people’s account, doesn’t feel anything.

      On gestalt sharing, a big question might be what it means when AI systems share those gestalts with each other. We could view it as what we do with language, or as a shared consciousness. (Assuming of course they individually meet whatever criteria for consciousness we’re using.)

      Right, we can define sentience to the point that it becomes trivial. Birch, to his credit, seems to be resisting that move. Although I’m expecting him to be more liberal about it than I am.

  4. Even if sentience is morally important and has some semantic indeterminacy, I don’t see how this poses a problem for a moral realist. There are two definitions of “moral realist” popular in philosophy, but the less-demanding one just says that some moral statements have (1) a truth-value, and (2) some of those are true. The more-demanding, and more common one adds that those truths are mind-independent, in some sense that (desperately!) needs fleshing out.

    The term “tree” is semantically indeterminate, yet the statement that there is a tree within 40 feet of my office window is simply true.

    1. I generally take moral realism to refer to moral propositions being real in some manner similar to physical laws or mathematical or logical relations. Them being real psychological dispositions, social conventions, or laws doesn’t strike me as controversial. Or am I missing the sense in which you’re using it?

      “Tree” is a good example, because from an evolutionary perspective, it’s more indeterminate than we might think. Trees aren’t a single taxonomic group. The tree form is something several clades converge on. (Which seems to increase the probability that the same form would exist in other biospheres with complex life.)

      But to your example, is a tall bush a tree? What about a high weed? Do bamboo stalks count? There are edge cases where the statement about a tree outside your window could be indeterminate.

      1. In my view, moral truths are neither like laws of physics, nor like traffic codes and baseball rules. They’re more like laws of economics. They include rules which only make sense within a certain kind of society (e.g. for econ, one that has money). But there are biological constraints on morality as well as social ones, so the room for moral relativism, while large, is limited (at least until we become versatile bio-engineers).

        I also don’t buy into the fact-value dichotomy. There is a large overlap. This gets missed when philosophers over-inflate the term “normative” (see Schwitzgebel on “inflate and explode”, but notice that many philosophers prefer to stop after “inflate”.) It also gets missed when thinkers mistake the gap between value and motivation for a gap between fact and value.

        1. I’m actually onboard with moral rules being more like economic ones than baseball rules. But remember that economics is quantified sociology, so we’re still talking about something we collectively construct.

          On the fact-value distinction, I guess it comes down to whether we think a value can ultimately be reduced to a fact, or facts. Consider the value of egalitarianism. Most people today would agree that it’s a good value. But what facts could we cite to convince the Chinese philosopher Confucius (who was very hierarchical in his thinking) that we should regard everyone as equal?

          1. I am embarrassingly ignorant of Confucian ethics. But for hierarchies in general, the stumbling block is usually justifying it to the people at the bottom of the hierarchy. Why should they accept a system that makes them relatively worse off? Or if the claim is that the system actually makes them better off (in absolute wealth, and enough to compensate for the extra kowtowing) than they otherwise would be – then bring the empirical evidence. On the other hand, if the plan is not to justify the system but just to force it on people, then we have left the realm of moral dialogue. Argumentum ad baculum is not a moral appeal.

            There is something, I wouldn’t call it egalitarianism but it’s similar, built into the very notion of moral justification. And not specifically moral justification, but any justification, e.g. epistemic justification as well. And that’s the idea that each conversation partner is free to contribute reasoning to the pile of reasons, and also free to assess the weight of the reasons given so far.

          2. I can’t claim any expertise on Confucianism. But I did a little reading on it a decade ago. From a post at the time.

            Confucianism has a great deal to say about family relations, social norms, and governing philosophy. On family relations, it often defines the hierarchy between various relationships, usually with those on the inferior side of the relationship urged to be subservient and those on the superior side to be fair. Fathers are superior to sons, older brothers to younger brothers, brothers to sisters, husbands to wives, etc. On governing, Confucius calls for rulers to be just and virtuous, and to demonstrate that virtue to their people. (He saw few examples of this in his time.)

            Confucianism and the definition of religion

            Right, the desirability of hierarchies can’t be demonstrated empirically. But at the end of the day, there’s no way to empirically justify everyone being equal to someone above the bottom rung of a society’s hierarchy, except by reference to other values. The closest might be a human instinct for egalitarianism, but we also inherit an instinct for hierarchies from our ancestor primates, meaning that we’re always obliged to inhibit some of our instincts.

          3. I feel that Confucius and (more so) Lao Tzu are two lacunae in my knowledge. Regarding justification, when two or more people reason about how to live together in society – i.e. reason about morality – they are obliged to take each others’ values seriously and come up with ways to distribute favor and disfavor toward these values in ways that don’t cheat some to favor others. For example, consenting adults who feel a deep need for hierarchy should be allowed to get together and do their thing.

            Finding very-widely-acceptable standards is hard, and a never-ending, ever-adapting process.
            Fortunately for morality, most people agree with Churchill that “jaw, jaw” is preferable to “war, war”.

  5. Sentience is semantically indeterminate? It’s quite clear to me that this is a sensible statement today. And though I get no sense from this post that Jonathan Birch is helping to remedy this indeterminacy, I do appreciate his attempt. I’ll now go through the way I think this mess will ultimately be resolved, as well as add one thing more regarding my conception of your position Mike.

    Contra panpsychism, it is my belief that non sentience, non consciousness, and other synonyms, prevail for the vast majority of what exists. But I also know that sentience (or whatever) does exist for me personally since I seem to perpetually feel good/bad in at least some capacity. In fact I think this value dynamic may usefully be defined to exist as “me”. Even given a living body, if I had no sentience, consciousness, or other synonyms, I don’t think I should be said to exist. From here my naturalism mandates that there must be some sort of physics by which existence such as my own occurs. Furthermore I don’t know of a second reasonable option for this physics beyond the electromagnetic field associated with the right sort of neural firing. Theoretically computational brains were not in themselves sufficient to deal with more open circumstances, so evolution must have taken an originally epiphenomenal experiencer in the form of a serendipitous neurally produced EMF, and evolved this physics even to a human level. And because an appropriate electromagnetic field may or may not exist in a given example, the implication is that there should be an inherently sharp start to any case where it does exist. From experience however we also know that good/bad can gradually increase and decrease, and thus the validity of “shallow gradualism”.

    Obviously my above account is quite dense, though hopefully you’re able to follow it Mike. Perhaps certain others here as well. I also think this sort of thing should become more simple to grasp once we have a respected community of professionals which provide us with accepted principles of axiology (and more) for general use. In order for value driven fields like psychology to advance, fundamental value theory from which to build should need to become established. For that to actually happen however, I suspect that the EMF physics of it will need to be experimentally established. Then such verification might be the easiest part of the whole endeavor.

    Anyway Mike, when we get down to the bare essentials it seems to me that you and I may not be all that opposed in these regards. I believe that brains create something that feels good/bad by means of information that informs an appropriate electromagnetic field to exist as that experiencer. Conversely you believe that brains create something that feels good/bad by means of information that needn’t inform anything specific. Aside from that divergence however, can you successfully counter my position and so argue that you find it sensible to believe I’m wrong in the other ways?

    I’ll remind you that I also consider our various moral notions of rightness and wrongness to exist as an evolved social tool of persuasion. Instead I consider there to just be a goodness to badness of existing, and thus the more goodness one experiences versus badness, the better that existence shall be for that experiencer from moment to moment. This is also the foundation upon which the reasonably hard science of economics happens to be based. Conversely the quite soft central science of psychology remains mute on this matter (and perhaps because the social tool of morality would punish psychologists for not supporting mainstream moral notions such as “helping others is how we help ourselves”).

    1. Here’s a question for you Eric. What do you think is happening in the electromagnetic field? You used to say that the first computer generates a second computer. Presumably back then you would have said computation happens.

      But in the last several years you’ve criticized the information processing view, so you seem to see something else happening in the field. What is it? You say later in your comment that the electromagnetic field “exists as an experiencer”. What are the sub-experiencer components of this experiencer?

      Maybe a way to think about it is, why this EM field rather than the earth’s, the sun’s, the galaxy’s, or a rock’s? What is going on in the brain’s particular EM field that distinguishes it from all the others?

      1. I still say that computation happens just as I always have Mike. The essential difference between us is that I think processed information can only exist as such to the extent that it informs something appropriate, though you don’t. So the right marks on paper converted to the right other marks on paper shouldn’t in itself create an experiencer of thumb pain as I see it, but the resulting marked paper theoretically might inform something appropriate that thus exists as an experiencer of thumb pain. And why do I suspect that an electromagnetic field happens to be the sort of physics that such marks would need to inform? Because that’s the only kind I know of that both emanates from neural firing and seems appropriate. So the difference between us here is that I think you stop one step short of causality, while you think I go one step beyond already sufficient causality. My reasoning will never convince you that you’re wrong just as your reasoning will never convince me that I’m wrong. The only way this matter might get settled should be for what I propose to be experimentally confirmed or refuted quite well by means of dedicated testing.

        Your suspicion is correct that I don’t know why each element of what we see, hear, feel, think, and so on would exist in the form of an incredibly complex electromagnetic field created by means of the right sort of synchronous neuron firing. Instead I just know that such a field ought to have enough potential fidelity to harbor something like consciousness, and I haven’t been able to think of a second reasonable candidate. Sub-experiencer components should exist as whatever such a field happens to be composed of. Why would the minute EMF associated with certain synchronous neuron firing exist this way rather than what’s produced by the sun, a galaxy, a rock, or whatever? No clue! But I do know that brains often seem to produce consciousness, as well as that this particular theory does seem to have reasonable supporting evidence even before any dedicated testing has been attempted. If exhaustively confirmed then scientists should try to determine the EMF parameters of specific elements of consciousness, and that ought to be interesting.

        For the moment I didn’t mean to get into this question however. Instead I was trying to open up something that you and I might agree on right now. Apparently each of us believes that value exists in the form of the right information that’s processed into the right other information (merely with me holding that another step is required that you consider unnecessary). So do you dispute the position that nothing should be good/bad for anything that never feels good/bad? Or for something that does feel good/bad at a given moment, that the degree to which it feels good/bad will constitute how good/bad existence happens to be for it at that moment? Or over a given period of time that an aggregate figure theoretically would constitute how good/bad its existence was over that period? Or when figures from various subjects are combined, that social scores ought to constitute the value of existing as any defined number of subjects?

        I believe that feeling good/bad is essentially the fuel which drives the conscious form of function. Here the past only matters to it in the sense of its memory, and the future only matters to it in the sense of the present hope and worry it feels about what will happen. The soft science of psychology hasn’t yet taken a stance on what constitutes value, though I think this could be a productive premise from which to potentially found the field. Have you reason to either dispute or agree with this model?

        1. Ok, so you see computation happening in the EM field.  If so, you appear to imagine neural computation informing EM computation.  But wouldn’t this be computation informing computation?  If EM computation can be a candidate for being informed, why can’t neural computation?

          “So do you dispute the position that nothing should be good/bad for anything that never feels good/bad?”

          I do.  Things can be good or bad for a plant or unicellular organism’s survival, but I don’t take them to be experiencing anything.  For that matter, conditions can be good or bad for the formation and continued existence of a tropical storm or hurricane, but again with no feeling.  Even for the formation and continued existence of mountains, tectonic movements and erosion dynamics can either be good or bad for the mountain’s continued height and form.  My laptop and phone register when their power is getting low and prompt when action is required, which is good or bad for their continued functioning, again with no feeling (at least according to most people).

          On all the rest, it seems to amount to hedonic utilitarianism. Like any other moral theory, it won’t always match up with our moral intuitions. No simple rationale will. Our intuitions are just not that consistent.

          1. Sometimes Mike, you give me hope that you might simply not understand what my position happens to be. Thus if you did understand, then you might also decide that you agree. But even if you sometimes say things which suggest a misunderstanding, in general I think you do understand.

            It is of course your belief that if the right marks on paper were algorithmically converted to the right other marks on paper, then something here will experience what you do when your thumb gets whacked. Why? Because the right first marks are used to create the right second marks, and regardless of what sort of informational medium happens to be implemented. It’s a very popular position (not that many beyond yourself grasp that particular implication of it).

            I can go along with this account to begin, though I also consider it insufficient. To me all that should actually be sufficient for thumb pain (or anything else phenomenal), is the creation of the proper parameters of electromagnetic field which resides as such an experiencer, and regardless of the means of that field’s creation. The only way to settle this matter between us should be dedicated experimental testing. For example scientists might see if they could distort people’s consciousness by inducing appropriate exogenous EMF energies in the brains of test subjects that ought to interfere with endogenous EMF energies. Then if successful they might even see if they could modulate induced fields to impart all sorts of designed conscious experiences for the experiencer to tell us about.

            On good and bad, actually I didn’t mean them in terms of an organism’s survival, the continuation of a weather event, a machine’s ability to function, or anything like that. Clearly if a given purpose happens to be stated, good and bad must exist in respect to that stated purpose. I was instead talking about the goodness to badness of personally existing, which is to say a goodness to badness for something that resides inherently rather than in an explicitly stated form. Would you say that something like this can exist? I’ll break this down a bit further.

            It seems to me that you acknowledge that existence can feel anywhere from horrible to wonderful to you, or a value dynamic that needn’t be defined as good/bad to exist as good/bad to you given what you feel itself. Furthermore we’ve established that you believe this arises because your brain algorithmically converts certain information into certain other information, and could just as well exist by means of such information conversions in other evolved organisms or technological machines. When something experiences what you do when your thumb gets whacked, do you believe that, instead of standard personal irrelevance, existence must inherently feel horrible to the experiencer, or a value dynamic that doesn’t otherwise exist? Observe that there is no rightness or wrongness in this account, or standard moral speculation. It’s just a theory about how reality works — generally there is no value, though sometimes existence feels anywhere from wonderful to horrible.

          2. Eric,
            Up above you said that the EM field is doing computation. But if I understand what you’re saying here, even if all the computations that might happen in the EM field were performed in a process involving the cards, it wouldn’t be the same. So either you’re saying that something more than computation is happening, or it’s a special kind of computation that can only happen in an EM field? But by your own admission, you can’t account for this special requirement. Or did I miss something?

            On good and bad, it seems like what you’re calling “goodness to badness of personally existing” just is feeling good or bad. If you say there is a distinction, can you elaborate on it? And what would you say are the components of feeling good or bad? What are the upstream causes and downstream effects?

            To your question, feeling bad just feels bad to the experiencer. The question is why they feel bad in that particular situation, such as mashing their thumb. I think trying to answer that while excluding all the possible solutions (upstream causes and downstream effects) makes it seem more mysterious and intractable than it is.

          3. Try to think of it this way Mike. To me what you’re calling “computation”, doesn’t always complete the cycle well enough to deserve that title. For example, a key may be pressed on a computer keyboard which serves as input information that gets algorithmically processed in all sorts of ways. So here you might submit that computation happens given that input information gets processed into new forms of information. But I think there’s still another crucial step to go since processed information should only be said to exist as such, to the extent that it informs something appropriate. So if the processed information from the key press never informs anything (whether a computer screen, speaker output, new programming conditions, or whatever), then as I see it there should be no associated computation or processed information here. Furthermore you’ve not yet found a reasonable counter example, and yet continue to believe that “thumb pain” and such remain an exception to this rule — that they can exist while informing nothing.

            When we met in 2016 I would tell you about my “dual computers” model of brain function. Here brain exists as an amazingly complex non-conscious computer that also creates a conscious form of computer. Back then I simply didn’t know what a brain’s algorithms might inform to exist as that conscious form of computer. Then in 2020 I realized that the brain must be informing an electromagnetic field that itself exists as consciousness. Though my story has become more complete in this respect, it’s never deviated from the causal premise here itself.

            Yes feeling good or bad is essentially the goodness to badness of personally existing. Another way to say this is that value can exist rather than not exist. And regardless of which of us is more correct regarding the means of its existence, our mental and behavioral sciences seem hopeless in this sense so far (that is except for the science of economics, which has been effectively founded upon a utility based premise as I see it). Anyway I hope you appreciate my observation above that because existence can feel good/bad, this also creates an inherent purpose. Here fundamental purpose or teleology exists, and thus beyond the potential good/bad of anything merely stated to exist as such.

            We’re certainly agreed that trying to work out feeling good/bad without effective upstream causes or downstream effects, should make things seem highly mysterious. Given the models that I’ve developed however (or in the case of EMF consciousness, that I’ve adopted), this stuff doesn’t seem mysterious to me at all. I presume that in the future science will have straightened much of this out, and smile at the thought of how pathetic they’ll consider the state of academia today. How could we have failed to grasp such simple truths given so much evidence? Perhaps because when we attempt to grasp ourselves, objectivity should inherently be more difficult to come by.

          4. Eric,
            I think you’re being inconsistent. You say that the EM field is doing computation and can be informed. You also say that neural circuitry is doing computation, but for some reason it can’t be informed. You even admit above that one of the things that can be informed is, “new programming conditions,” which is very close to the usual computational position.

            On the causal point, what I failed to mention above is that feeling good or bad doesn’t just come out of nowhere. A feeling is always about something. The goodness or badness represents something far more ancient than feelings, what Antonio Damasio calls “biological value”, which goes back, presumably, to the earliest unicellular organisms. Once we recognized that feelings are an elaboration of older defensive reactions tuned toward biological value, their role makes more sense.

            But we can’t get morality out of that, because those feelings are often contradictory, something we’re agreed that cognition evolved to resolve, and societal mores often oblige us to override our feelings, even when they’re relatively consistent.

          5. Mike,
            I’ve certainly never implied that neural circuitry can’t be informed. A whacked thumb does exactly this, of course. Then after a whacked thumb informs neural circuitry, this circuitry is known to algorithmically process such information to create new information that goes on to do various things. It might go on to alter someone’s pulse for example. My point is that such information should only exist as such to the extent that something appropriate (like a heart) becomes informed by it. Thus I presume that there must be some sort of “thumb pain physics” which processed brain information informs to exist as such an experiencer. Here the processed information alone should only potentially exist as such should it eventually go on to inform something appropriate. What you’d like however is for there to be no need for such information to inform anything appropriate to exist as such — self informing information that thus exists as thumb pain in itself. If that’s the case then the Genie is out of its bottle and so the right marks on paper converted to the right other marks on paper, should also create something that exists as thumb pain in itself. In order for you to grasp why this is a non-causal solution however, it may be that scientists will need to empirically confirm that brain information informs an electromagnetic field to exist as an experiencer of thumb pain.

            On feeling always being about something, I wouldn’t state this as a fundamental rule. Instead it should just be a general tendency given that evolution seems purposeful to us. This is to say, teleonomy rather than teleology. So I don’t consider Damasio’s “biological value” to be different from any other humanly defined notion of value. In the sense of keeping them going, yes there is good for a fire, a storm, a computer, an organism, life in general, and so on. This however is all purely based upon the whim of human definition. Conversely I’m talking about a more fundamental conception of value. When existence feels horrible to you, I believe that existence is horrible to you in that sense regardless of any definitionally true/false statements. Thus if a rock felt the same, then existing as that rock ought to be just as horrible as existing as you in that sense. If the difference I’m referring to here is not something that you grasp, or if you believe there can be no such difference, then our perspectives diverge in this sense as well.

          6. Eric,
            You note that you’ve never said neural circuitry can’t be informed, which makes me wonder what the issue has been all this time. But you’ve added the “appropriate” qualifier into your statements about what can be informed. So am I understanding you correctly that neural circuitry is “appropriate” for being informed about some things, like the nociception of a whacked thumb, but isn’t appropriate to be the informed experiencer? If so, why not? What determines whether something is “appropriate” or not?

            Biological value is rooted in natural selection, in what preserves and promotes an organism’s genetic heritage. In other words, it’s value for maximizing gene survival. And as Damasio notes, what feels “good” or “bad” to us is often related to homeostatic states, which themselves factor into our survivability, and therefore gene preservability. (Things get more complicated with social emotions, but those are built on more primal feelings which ultimately relate to these biological values.) A rock, of course, has none of this. It isn’t the result of biological evolution.

          7. Mike,
            The thing that should determine whether or not something is appropriate to be informed in a given sense (like all else) is causal physics. I’ve displayed this before in the form of a DVD disk. An encoded disk is of course appropriate to inform a DVD player with associated media content. But a DVD might instead inform a table leg that it supports as a shim. Here it doesn’t matter what media content happens to be encoded on that DVD since a table leg is not something that’s appropriate to be so informed. So causal physics should determine what’s appropriate for a given bit of information to exist as such.

            This is also the point of my thumb pain thought experiment. Here there are marks on paper which accurately depict the information that your whacked thumb sends your brain. Furthermore this marked paper will be informational in that sense when it’s scanned into a vast supercomputer that algorithmically processes those marks to print out more paper that accurately depicts your brain’s response. At this point however we seem to merely be left with more marked paper. Thus I don’t believe an experiencer of thumb pain should exist here any more than a DVD under a table leg should unlock the media content that lies within. The implication is that in order for the now processed brain information to exist informationally, as well as the marked paper which represents it to exist informationally, either must inform something appropriate. For example in the brain’s case maybe it could inform the rate that your heart beats? I don’t know anyone however who suggests that the heart is an appropriate instrument to create an experiencer of thumb pain. But it does seem to me that a neurally produced electromagnetic field might exist as such, and so be what such information informs. Regardless something appropriate ought to be informed by such information in order for thumb pain to causally exist.

            I realize that you’re now somewhat conceding my point by suggesting that such information must be informing “neural circuitry”. That’s a broad statement however and so should need unpacking. Furthermore don’t forget that this explanation should also apply to an experiencer of thumb pain when the right marks on paper are converted to the right other marks on paper.

            On the other question here, if we can arbitrarily say "value for [some defined purpose]", then we shouldn't quite be getting to what I'm talking about. Observe that there is both value for starting a fire as well as value for putting one out. The same I think could be said for Damasio's biological value rooted in natural selection — there are things that should aid in both the promotion and the destruction of natural selection, though that in itself shouldn't constitute something that's inherently good/bad for what exists. While I do believe that natural selection did ultimately implement the non-arbitrary value dynamic that I'm referring to, I also consider this beside the point. Thus my observation regarding a rock that was not formed by means of natural selection, which nevertheless experiences what you do when your thumb gets whacked. Even if magical, will existence now harbor a value element that previously did not exist? My assertion is that this feel good/bad dynamic is the sole element of non-arbitrary value regarding existence. Furthermore I suspect that the physics of it is electromagnetic, and that our brains evolved to implement this sort of physics. Apparently you believe it arises when the right information is converted into the right other information (pending your clarifications about such information existing as such by informing "neural circuitry").

            It seems to me that there are three potential ways that you could address this matter. One would be to say that I’m simply wrong about there being some sort of fundamental value dynamic that exists beyond arbitrary human definition which even a rock would have if it were to feel what you do when your thumb gets whacked. Another would be to say that I’m right about this existing, though wrong that it exists as feeling good/bad. Another would be to say that yes it exists, and that you also think it exists as the same thing I do.

          8. Eric,
            You’ve reworked your concept of “informed” to such an extent that it now seems equivalent to causal effect. In that sense, I don’t think I’ve so much conceded as maybe we’re discovering some commonalities in our views. Possibly. If I’m understanding correctly, your assertion is that the neural circuitry informing other neural circuitry, or itself recursively, isn’t the right causal structure. My question would be, why is the EM field the right structure when neural circuitry isn’t?

            You seem to imply that I need to justify my points about neural processing. But note that there's no doubt that it's part of the causal chain. Nothing will be sensed without sensory neurons passing signals up the chain to the brain, and there will be no muscle movement, such as us discussing this, without motor neurons exciting muscle fibers. So the question isn't whether the nervous system is involved, but whether we need to add something extra to it. What I keep trying to get you to reason through is why you think the answer is "yes".

            On the marks on paper thing, again, I’ll point out that my actual position is that while we’re looking at a marked piece of paper, there is no feeling. It’s only with the ongoing processing that anything we’d call a “feeling” is happening. Along those lines, would it make a difference if the nodes of the supercomputer that outputs the marked paper and which we keep feeding it back into, were networked together via WiFi? Now our process between the marked pieces of paper involves an EM field. Do you see that enabling a feeling where nodes networked via cables didn’t? If not, then we’re back to the same question: what about the specific brain EM field makes it a candidate to be the experiencer when the others aren’t?

            On the feeling good or bad thing, I don’t think it’s fundamental. It seems far more likely to me that what feels good or bad to us is rooted in our evolutionary history. You seem to think it’s sui generis. For me, that’s non-causal reasoning. We’re probably going to have to just agree to disagree on this one, at least for now.

          9. I guess things always get back to the same place here in the end Mike. I think you'd agree with my point if it were empirically validated well enough, though otherwise my point itself should remain elusive. So if it were empirically demonstrated that certain specific exogenous EMF parameters would alter someone's consciousness in predictable ways, this ought to open things up. Then if scientists could impart and modulate specific phenomenal dynamics in someone for their report, and so determine that this must be what consciousness happens to be made of, that ought to do the trick. In retrospect I think you'd see that a supercomputer which converts certain information into certain other information should never in itself be sufficient, since there'd still be the task of the resulting information needing to inform the right sort of EMF. Then of course researchers would add instruments to computers that generate the sorts of electromagnetic field parameters that neuroscientists determine exist as vision and such. Why these specific parameters rather than standard WiFi or whatever? Because that would be what researchers would find that our brains produce to exist as such. Doesn't that make sense? (I realize that we each feel a bit strawmanned here in some ways, but maybe it's unavoidable?)

            Here they should also attempt to detect productive electromagnetic phenomenal decisions within such a field — extremely challenging I think! But if so then they could reward such an experiencer to give it a sense of agency for those detected decisions. Things would be far less difficult if causality did not mandate that consciousness exists by means of a particular type of physics that processed information informs, which is exactly what you hope.

            Why couldn’t the processing of the right first set of information into the right next set of information be that physics? Observe that if so, then this would be the only known example where the act of processing input information into output information would also result in something that’s informed by the thusly produced information. It would be as if information could exist as such generically, and so violate the causal rule that information can only exist as such to the extent that it informs something appropriate. The difference here is exactly what differentiates the falsifiability of my proposal from the unfalsifiability of yours. This is to say that it’s possible to disprove that consciousness exists as an electromagnetic field, though not possible to disprove that it exists as the right generic information converted into the right other such information.

            In any case because you don’t consider there to be any sort of specific physics by which consciousness arises, it makes sense that you also wouldn’t consider there to be an inherent value element which drives the function of consciousness itself. And of course I do also believe this is rooted in evolution, but evolution should only work by means of existing causality that it happens to come across (such as brain based electromagnetic fields).

            I believe that without a formal understanding that feeling good/bad constitutes the goodness to badness of existing for anything, anywhere, and so drives the conscious form of function itself, the field of psychology shall essentially remain “pre-Newton”. And indeed, in a sense I consider the situation here hopeful. It could be that once scientists do empirically determine that consciousness exists electromagnetically, as well as identify parameters of it which constitute bad versus good existence, then psychologists may begin to have sufficient cover to subvert the social tool of morality enough to found their still primitive field upon the value premise that I suggest. With such a founding premise rather than none at all, the field might finally begin to develop effective models of our nature no less than harder forms of science have been able to.

          10. “Why these specific parameters rather than standard WiFi or whatever? Because that would be what researchers would find that our brains produce to exist as such. Doesn’t that make sense?”

            It does. But that would represent an empirical correlation only. If we had that, I could understand your stance toward EM fields. However, if we did have it, I would still want to know why that correlation. I’d want the logical relationship between those structures and the experience. I could also understand your stance if it was based on a hypothesis of that logical relationship. But from what I can see, we don’t currently have either of those. Along with the other issues, it’s why I remain skeptical.

          11. It sounds like you’d be curious about some things that, even if the evidence does take us there, science might never have much to say about. Though I’ve become as strong a believer as you’re likely to meet, I don’t think it would ever make sense to us that certain minuscule energies that could be produced in a laboratory rather than a brain, would result in something that experiences enormous pain, pleasure, or anything else phenomenal. We’d just have to accept this and move on with the implications. And how could we possibly know that a lab-created epiphenomenal experiencer would exist in the form of such a field? Because test subjects would report feeling those things when we induce them in someone’s brain (which is to say, a structure that has already evolved to function as such). Though I know you don’t currently believe that there’s “a hard problem of consciousness”, in that case I think you’d change your mind. Even if causality mandates this to be how consciousness works, it should never make sense to us that physics ought to work like this. Evidence would simply mandate acceptance.

            It could be that science fiction helps explain our differences of intuition here. As a kid I don’t think I ever read anything about mind uploads and whatnot. There’s nothing about that in the Dune books. Furthermore I don’t think I even cared what the computational brain might do to create consciousness. That always seemed ridiculously beyond me. So I developed my psychology based dual computers model of brain function in my late thirties and started blogging in 2014 at the age of 45. Then you and I met up a couple years later just before Trump was elected. For the first few years I don’t think I even grasped what your functional computationalism happened to be. I just smiled at the thought that some people don’t consider the brain to function as a computer. But then at one of Wyrd Smyth’s Chinese room posts, the penny not only dropped, but offended my sense of naturalism. So I soon developed my thumb pain thought experiment as a far simpler illustration of the functional computationalism position. It was maybe a year later in December 2019 that I came across McFadden’s EMF consciousness proposal. Now I had something that made good sense to me, and I still haven’t come across an alternative that I consider reasonable.

            Should dedicated experiments validate my position, we might never have good answers for much of what you’re curious about. But I think there’d be quite a lot that would make sense. Beyond personal gratification for the things I’ve mentioned above regarding information and value, as well as my dual computers model of brain function itself, there’d also be an answer for why consciousness happens to be unified. I don’t know if you’ve seen Suzi’s July 23 binding problem post, but you might enjoy it even if you don’t consider your current position to be in violation. Many of us do consider it in violation. If neurons from different parts of the brain are responsible for the colors, edges, textures, motions, and so on that we see, then how does it all become unified into what we actually do see? Of course my answer is that various parts of the brain feed into this consciousness field by means of adding their own particular energies to it — edges, textures, colors, motions, and so on, such that a whole becomes inherently unified as consciousness itself. I also don’t know of a second reasonable solution for this circumstance. https://suzitravis.substack.com/p/the-unity-of-consciousness-and-the?utm_campaign=posts-open-in-app&triedRedirect=true

          12. Right, we disagree on what type of explanation of consciousness is possible. What you describe is a posteriori physicalism, the idea that we can establish empirical correlations, but won’t be able to go any further. I think the only reason we’re tempted to think this way is due to holding an unproductive notion of “consciousness”, one rooted in remnant dualist intuitions. Consider that no one thinks we would have an unresolvable gap in trying to understand the operations of a computer.

            You mention we can never know if we created an epiphenomenal experiencer, except by reports from subjects when the field has been altered. Just a note, but if the experiencer is epiphenomenal in the strong philosophical sense, then any reports could not be caused by the experiencer. Those reports would be utterly unrelated. Epiphenomenalism is unfalsifiable. I personally think it’s incoherent. And it’s worth noting that natural selection can only select for traits that make a difference. Keeping to epiphenomenalism seems like a slippery slope into property dualism or panpsychism.

            On sci-fi, most of the stories I cut my teeth on, like Star Trek, stayed away from mind uploading. Star Trek in particular always drew a strong distinction between humans and computers. I became interested in mind uploading in sci-fi only after realizing the concept made sense based on a physicalist understanding of the mind. So you can’t blame our respective sci-fi influences.

            I’ve seen that Suzi post, but haven’t read it. I’ll try to take a look at it soon. I’ll just note that there are productive and unproductive understandings of that issue. Claiming EM fields solve it is focusing on the unproductive one, one that doesn’t exist if we let go of those remnant Cartesian dualist intuitions.

          13. Wow Mike, that was a depressing night! And though it’s clear that dictators of the world in general will celebrate this new election of Donald Trump, at least it was the American people who in the end chose such a dangerous path. You and I will be fine. People with the resources and desire to spend their time in the way that we do, shouldn’t have much to pity. If the angry people who elected him didn’t have an outlet for their anger, perhaps even worse consequences would emerge? So hopefully this will be a useful learning process for a still great democracy. But back to business…

            I do think you’ll enjoy Suzi’s binding problem post, at least for its neuroscience, whether or not you decide she provides a productive perspective in general. Either way I doubt you’ll consider it threatening. Might you even decide you can put together an effective blog post on the matter? Maybe.

            I’ll now try to clarify my position regarding situations where consciousness in a perfectly causal world both could and should begin epiphenomenally. In fact it seems to me that years before I settled upon an EMF consciousness position, I also told you about this. Back then I recall talking about an entirely non-conscious brain that evolution couldn’t program well enough to deal with more open environments. Too many random contingencies to potentially program for. So I said serendipity must have brought the physics of an experiencer or agent that could only feel good/bad without any capacity to act on its own behalf. Why? Because that’s how evolution works — it takes things that aren’t initially functional, and then alters them over various iterations to become functional. And of course they do need to be functional in order to evolve.

            Now back to my point today, the theory is that certain parameters of EMF exist phenomenally. Thus if scientists were to learn some things about those parameters then they ought to be able to produce fields in their laboratories that in themselves exist phenomenally. But note that an isolated field that scientists produce shouldn’t be armed with any mechanics for agency — it’s not like the field should be able to alter itself to parameters that feel good rather than bad. Thus such an experiencer would initially have been effectively epiphenomenal in the brain as well. With enough iteration in the brain however, theoretically a point came where that experiencer was able to do something that affects brain function. Given that non-agency based function was already problematic, apparently there was room for a non-epiphenomenal experiencer to evolve. And how might a brain-produced electromagnetic field alter brain function to potentially cause itself to feel better than otherwise? Apparently such fields are known to affect the firing of neurons under the heading of “ephaptic coupling”. Theoretically when you decide to do something involving your muscles, that EMF decision ephaptically couples with neurons that go on to cause those muscles to function as desired. What I’m saying is that scientists could empirically prove this, and learn which parameters matter, by introducing tiny energies into the heads of test subjects, because if true then certain energies ought to constructively and destructively interfere with a person’s EMF consciousness. Such people should be able to tell us about this specifically because a fully evolved brain would already have been engineered for consciousness to be functional, unlike an isolated field produced by scientists.

            Regardless, I realize you can’t stand the thought that this might be true. Furthermore you shouldn’t like that this is the only theory on the market today that has a clear path for experimental validation or refutation. Why couldn’t your theory be that way? I’ll spare you that explanation for now. In any case, perhaps this will help you grasp my position better than your last comment displays. I’m proposing a fully causal, non-epiphenomenal form of consciousness that’s entirely testable today.

          14. Hey Eric. On the election, yeah, nothing good. I might do a post on my thoughts later.

            I read Suzi’s post and thought she did a good job with it. My take is that binding happens as necessary for various purposes. I don’t see a reason to introduce anything other than neural signaling for the explanation. But hey, maybe evidence will arise at some point.

            We’ve discussed your thinking about how experience might have evolved before. For me, the initial mutation can’t have been full on experience. It’s just too complex (unless we bring in something non-physical). Which means we need to find something that even if it only barely exists might provide some adaptive advantage. I like prediction for this. The initial form could have been just an incipient tendency to react to the immediate causes of a situation in the reflexive manner the situation previously caused. That would be enough for natural selection to build on.

            Speaking of Suzi, she told me about a new form of brain scanning being developed, functional ultrasound (fUSI). Apparently right now it requires surgery, but work is going on to change that. It would give us a way to measure brain activity other than through electromagnetism. I don’t know if it would help with the testing scenarios you keep pondering, but probably worth looking into at some point. It might at least enable us to remove a confound (measuring the thing we’re trying to observe with the thing itself).

            It’s not so much not being able to stand the thought of EM theory being true. It’s just that I don’t find any reason to buy it. I need either data, or a chain of reasoning that explains something that can’t be explained with neural and other biological activity. From what I can see, all it does is provide a scientific looking gloss on the idea of a soul-like entity existing in and around the brain.

          15. When I re-read what I’ve said, I often see that I’m quite undiplomatic. So Mike, I presume that I sometimes bait you into an extra negative outlook. Thus apparently you sometimes fail to address my position itself. For example you can’t just say that it would be too complex for an experiencer of goodness to badness to exist by means of certain parameters of an electromagnetic field, since that’s exactly the position I’m proposing. In order to effectively assess any flaws with that position, you’d need to begin from that premise so that you might grasp it well enough to provide effective commentary.

            In the testing of EMF consciousness, I actually think it’s quite fortunate that electromagnetic fields are so easy to both detect and produce. The main problem should be in getting sensitive enough detection and propagation instruments where they’re needed.

            Observe that science ought to be able to grasp the tiny electromagnetic disturbance created when a typical neuron fires. Furthermore given their Boolean relationships it ought to have a sense of how many neurons tend to fire when they fire in relative synchrony. Theoretically everything we see, hear, smell, taste, think, and so on, exists as it does because exactly the correct EMF is produced to exist as such by some of these precisely fired neurons. So a crucial testing avenue should be to transmit energies similar to what these neurons are already producing, at precise locations in the brain. If it’s found that this never alters someone’s vision, hearing, smell, feelings of temperature, and so on, then consciousness probably does not exist by means of the EMF that synchronously fired neurons produce. But if test subjects do report such distortions, and the specific energies are able to replicate those distortions, then that would be consistent with EMF consciousness. Furthermore if what researchers learn could help them produce energies that add elements to what someone sees, hears, and so on, then that ought to be very interesting! Of course many in your camp would argue that no, consciousness can’t actually be made of EMF or anything else. It must exist as processed information alone rather than processed information that informs something to exist as an experiencer of consciousness. Evidence however might suggest otherwise.

            Of course that’s just open theory. I don’t have the expertise to know what practical challenges researchers would face to implant brain transmitters that replicate the EMF of scores or thousands of synchronously fired neurons. But I do like that brain-computer interface researchers have been able to implant EMF detection arrays in the brain. Unwittingly some of them have even helped validate EMF consciousness by helping with the communication of a woman with degraded speech muscles. Theoretically a detection array was put close enough to the motor neurons that operate her speech muscles. Then after 100 hours of her attempting to read certain text out loud, the EMF detected by that array could reasonably be translated into what she was trying to say. Why? Perhaps because the detected EMF is what ephaptically couples with the neurons that operate those speech muscles. https://pmc.ncbi.nlm.nih.gov/articles/PMC10826467/

          16. The trick is finding evidence that can either only be explained by your favorite theory, or for which your theory is more simple. (“Simple” here doesn’t mean it accords closer to your intuitions, but requires fewer assumptions.) The evidence that there is an EM field, isn’t evidence that the brain is using it in any systematic manner. It’s a tough theory to find evidence for.

            Which is why I continue to think the only real test will be whether a simulation of the brain works before the EM field’s effects are included. That doesn’t necessarily have to be a human brain. It could be a relatively simple one. Of course, that wouldn’t establish that the field is conscious, only that the field is part of the causal effect leading to behavior.

            As a substack neuroscientist recently observed, the science of this stuff is hard.

          17. That’s right Mike, I’m talking about a test where the only simple explanation for certain results would be that consciousness must exist electromagnetically, while other results would suggest that it must not exist electromagnetically. If you think I’m wrong about this then I’d like to hear a valid explanation.

            Let’s say that appropriate scientists were to run a type of experiment where they implant leads into the brain of a test subject that are hooked up to a machine that’s meant to replicate the sorts of electromagnetic energies produced by standard human synchronous neuron firing at one or more locations of their brain. Let’s also say that they keep doing this for dozens of subjects in different ways in the attempt to reliably alter consciousness should it exist electromagnetically, though test subjects never reliably identify anything strange about their consciousness during such testing that might be related to these exogenous energies. Since it’s given that EMF energies of a certain variety tend to constructively and destructively interfere with other EMF energies of that variety, how would you argue that consciousness might still exist electromagnetically? Why wouldn’t this demonstrate that McFadden and I must be wrong?

            Or evidence might go the other way. Let’s say that these scientists do get some reports of consciousness strangeness regarding what someone sees for example, perhaps amplified when no light enters the eye (since it might be more obvious that way). Let’s also say that these scientists play with the transmitted EMF to figure out how to impart the image of a known person who seems to be doing something that the test subject is able to describe. Furthermore let’s say that this sort of thing happens regarding what a person reports hearing, smelling, and so on. What might a more simple explanation be than that consciousness itself probably exists under those sorts of EMF parameters?

          18. Eric,
            Consider two scenarios.

            Scenario 1
            1. Neuroscientists set up the rig you describe and adjust the field as you describe.
            2. The field changes alter consciousness in the EM field.
            3. Consciousness in the EM field alters the neural processing, which signals through the central nervous system and peripheral nervous system to move the muscles of the vocal cords, which generate air vibrations we interpret as a report of consciousness being altered.

            Scenario 2
            1. Neuroscientists set up the rig you describe and adjust the field as you describe.
            2. The field changes alter neural processing. (Much as TMS pulses do now.)
            3. The neural processing signals through the central nervous system and peripheral nervous system to move the muscles of the vocal cords, which generate air vibrations we interpret as a report of consciousness being altered.

            Scenario 1 is your interpretation of the evidence. Scenario 2 would be the mainstream interpretation. What about your hypothetical evidence do I have wrong, that only allows Scenario 1 as an explanation?

    2. “electromagnetic field to exist as that experiencer”

      I’ve grappled with the problem of where or what the “experiencer” is, and have come to the conclusion it is nothing more than another quale, or qualia. The experiencer exists just like the feeling that you have a hand or foot. It’s possible that your sense of ownership of your body is even closely related to the experiencer. More broadly, however, the experiencer is the representation of your relationships in space with other people and objects, and in time between memories of the past and the future.

      In other words, it doesn’t need to be explained for itself but needs only to be explained in the broader context of explanation of how qualia are produced.

  6. So sentience, according to him, is the capacity to reason about good/bad? Or to feel what it’s like for things to be good or bad? Or does this simply boil down to a will to survive and choose things that contribute to that goal? I guess I’m confused about this definition. I have no clue what the term ‘arousal’ means either, unless it means to wake up or to get horny (kind of similar, actually).

    1. His definition of “sentience” is the capacity for valenced experience. By “valence” he means states that “feel good” or “feel bad” like pleasure and pain. By “experience” he means phenomenal consciousness, which he admits is “unstable common ground” due to the definitional issues. So another way of defining it is the capacity for phenomenal consciousness of good and bad feelings like pleasure and pain.

      Now, I happen to think by using the word “feel” in his definition of valence, he’s dragging in more than just realization of goodness or badness. It’s more like the whole affect. And arousal is an integral part of it. Waking up and getting horny are two examples. But so is a jolt of adrenaline with fear or anger, or (in a negative sense) the loss of energy from being sad or just tired. It’s the change in heart rate, breathing, sweating, etc, that make the feeling visceral in a way simple perception of goodness or badness doesn’t.

      He sees his definition as broader than just the affect, which makes me suspicious about what he’s going to try to wedge in with it. But we’ll see.

        1. An affect is usually defined as the conscious experience of an emotion, or of more primal homeostasis-related states like hunger, thirst, or feeling hot or cold.

          Unfortunately the literature isn’t consistent with this. Some people use it to refer to more basic states on which feelings are constructed. Others just use it for the underlying reflex reactions. But most give at least lip service to the definition above.

          1. Seems like one of those words that people use to sneak whatever theory they like in through the back door. I can see why you’d be wary of this writer, though I’m afraid I’m too dense to even see what’s at issue here. What’s wrong with the current definition of sentience? I realize it’s vague in the sense that it’s not going to tell us what specific creatures are sentient, but that doesn’t seem like enough of a reason to stipulate some other meaning of the word (assuming that’s what’s going on).

          2. I think it’s reasonable for someone writing about the edge cases of sentience to try to nail down a working definition of “sentience”. I’m just not sure about the one he chose. But at the end of the day, nothing is going to be without controversy. What’s more concerning is that I’m finding his use of the “zone of reasonable disagreement” inconsistent, and revealing of his biases. But more on that in the next post.

  7. Wonderful Mike, let’s finally get into some specific details for potentially testing the EMF consciousness theory!

    If my proposed form of testing results in no reproducible reports of consciousness strangeness, and given strong expert opinion that the exogenous field ought to constructively and destructively interfere with a subject’s endogenous field in a great number of appropriate ways, you didn’t mention how this might not be damning evidence against the EMF consciousness proposal. Do you concede falsification here? I’ll also say that this proposal is the only one I personally know of that has any true potential for empirical disproof. Beyond this one, do you know of any consciousness theories with strong potential for disproof? Furthermore, on a personal note, I wonder if anyone else has ever invented a reasonable test by which to invalidate a bona fide consciousness theory? Could mine even be the first?

    Fortunately you did say some things about the verification side of my proposed form of testing. You seemed to think that even if testing were to display strong supporting evidence, this sort of evidence would still have substantial potential to lead us astray. Instead of altering EMF consciousness itself, you suggest that the exogenous energies might alter the specific neural processing that you consider consciousness to exist as. That possibility deserves further consideration.

    Physics tells us that EMF energies of a certain variety inherently interfere with others of that variety when they meet. So that’s quite straightforward. For an electromagnetic field to alter neuron firing, however, my understanding is that the path is substantially more difficult. Apparently fields of the strength we’re discussing mainly just cause neurons to fire that were on the edge of firing anyway. Of course, neurons generally evolved some protection from EMF effects by means of their myelin sheaths, some more than others.
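    That interference claim is just linear superposition, and can be illustrated with a minimal sketch (the amplitudes and frequency below are arbitrary illustrations, not neural parameters): two sinusoidal field components of the same frequency simply sum, reinforcing when in phase and cancelling when half a cycle out of phase.

```python
import math

# Superposition of two sinusoidal field components of the same frequency:
# EM fields add linearly, so equal-amplitude waves reinforce when in phase
# and cancel when half a cycle out of phase. Amplitudes and frequency here
# are arbitrary illustrations, not neural parameters.

def superpose(a1, a2, phase, t, freq=1.0):
    """Sum of two sine waves with a relative phase offset (radians)."""
    w = 2 * math.pi * freq
    return a1 * math.sin(w * t) + a2 * math.sin(w * t + phase)

samples = [t / 1000 for t in range(1000)]  # one full cycle

# In phase (offset 0): peaks add, giving constructive interference.
peak_in_phase = max(superpose(1.0, 1.0, 0.0, t) for t in samples)

# Half a cycle out of phase (offset pi): the waves cancel everywhere.
peak_out_of_phase = max(abs(superpose(1.0, 1.0, math.pi, t)) for t in samples)

print(f"in phase peak: {peak_in_phase:.2f}")          # ~2.0 (constructive)
print(f"out of phase peak: {peak_out_of_phase:.2f}")  # ~0.0 (destructive)
```

    Whether such cancellation between an exogenous and an endogenous field would produce any reportable change in experience is, of course, exactly what the proposed test is meant to probe.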

    In any case, what you seem to be suggesting is that alterations to someone’s consciousness by means of exogenous energies would probably instead distort the processing that actually exists as consciousness, even though the endogenous energies that scientists mean to replicate here do not have this effect? And then you seem to be suggesting that if scientists were to modulate exogenous energies to impart intended images, sounds, smells, and so on for a subject to experience, then ephaptic coupling, or mere electromagnetic effects on neuron firing, ought to be responsible for somehow getting a presumably massive amount of neural coding correct? Do you believe this explanation should always remain simpler, and thus more plausible, than that consciousness directly exists in the form of a neurally produced electromagnetic field? If valid evidence of this sort ends up being found, the explanation you’re proposing seems to me highly complex and unlikely, while the one I’m proposing seems obvious.

    1. Eric,

      I can’t parse the beginning of your last paragraph. But you seem to be implying that removing the extra player (EM consciousness) from the causal chain is somehow more complex than keeping it. I’m not positing any change in procedure or empirical results. All I’m asking is what the justification is for the extra player in the explanation of those results, which seems inherently more complex than the explanation without it (neural consciousness).
