Does consciousness require biology?

Ned Block has a new paper out, for which he shared a time-limited link on Bluesky. He argues in the paper that the “meat neutral” computational functionalism inherent in many theories of consciousness neglects what he sees as a compelling alternative: that the subcomputational biological realizers underlying computational processes in the brain are necessary for consciousness.

Block makes a distinction between computational and subcomputational processes. The computational ones fulfill certain causal roles, but those roles are built from subcomputational realizers. He admits this invites the objection that the underlying realizers are themselves computational, but argues that if we keep going down, eventually we get to something non-computational, at least unless we want to go with a variation of the “it from bit” hypothesis.

Block focuses part of his discussion on the possibility that the chemical part of the brain’s electrochemical processing might be a candidate for one of these crucial realizers. He notes that chemical synapses enable neural inhibition, but admits that there are also electrical synapses that do inhibition. He also discusses learning as a possibility, something that in organic brains requires chemical synapses.

In the conclusion, Block argues that if biology is crucial, then a breakthrough is needed on why it is, and that failing to recognize that a breakthrough is necessary may not be conducive to finding it.

My reaction to this paper is similar to Hakwan Lau’s, which could be summed up as, “Where’s the mechanism?” Kudos to Block that he’s actually looking for one rather than just handwaving about vague possibilities. But as to his concluding remark, what makes a breakthrough of the type he’s looking for necessary? What crucial problem would it solve? How would we know when we’ve found it?

Block admits that part of the problem is, who says “subcomputational” processes aren’t themselves computational? This leads to the old argument about what “computation” means. If we go with selective propagation of effects within a tight range of energy levels, computation seems widespread in biology. If we strictly limit it to what contemporary commercial computers do, then nothing in biology may qualify.

In broader functional terms, it seems like any role realizer is itself going to be fulfilling some lower-level causal role. So the real argument here may be more about where the crucial layer of functionality resides. And of course the question is, crucial to what? If we go with what is crucial for behavior, then what is the mechanism for propagating its effects? If it still goes through the mechanisms we already know about, then what makes any specific realizers crucial if alternate realizers can contribute to the same mechanistic role?

Block argues that studying what consciousness does is much easier than studying what it is. This assumes there’s a difference, that there is anything distinct to “is” that’s not encapsulated in “does.” And to Lau’s criticisms, if we go with what is necessary to produce the nebulous “phenomenal experience,” how does a scientist like him test that?

In the end, I think those who think biology is crucial to consciousness need to identify what the crucial mechanisms might be, mechanisms that can’t be implemented in technology. Or perhaps more plausibly, that can’t be implemented with current technology. When it comes to future technology, artificial life may make this debate moot.

Block muses that theories that focus on computational properties will tend to favor AI consciousness over that of relatively simple animals, like insects. But others that favor subcomputational properties may do the reverse. This may be getting at the underlying motivation so many have for this type of biocentrism. It’s instinctively easier to find affinity with other evolved systems than with engineered ones, at least currently.

But given that neuroscientists study computation in ant brains, Block’s distinction seems to underestimate the sophistication of systems in relatively simple animals. It’s the type of computation, rather than computation in and of itself, that I think matters for most of us.

Block, to his credit, is trying to find the mechanisms. But he’s demonstrating how hard it is. And why many of us don’t anticipate there being anything there to find. But maybe we’ll turn out to be wrong. If so, I’ll be happy someone kept making the attempt.

What do you think? Are there mechanisms you see that only biology can fulfill? If so, what might they be? And would there be any block to future technology implementing them?

96 thoughts on “Does consciousness require biology?”

  1. While I’m not convinced that biology is necessary, I do see aspects of biology as important.

    Ned Block is, in my opinion, making a mistake by assuming that it is all computation (and sub-computation). I’m more inclined to place importance on pre-computation. You compute with data. But data does not exist by itself. The pre-computation is in the construction of data, which is prior to any possibility of computation.

    With AI, the data used is constructed by humans via our design of input sensors. This results in a rigidity of the kinds of data available. The advantage that biology has is that it constructs its own data and can adjust the kind of data to fit its current needs.


    1. It seems like biology gets its data from its sensory apparatuses. Those are evolved rather than designed, that is, they’re “designed” by natural selection. But I can’t choose how my lower level visual systems present things to my reasoning centers. I can, to a limited degree, train myself on how my perceptions are constructed, but in many ways I’m just as limited by my input mechanisms as AI.

      However, where I think your critique has more bite is that my worldview is completely constructed from my innate dispositions and from what I’ve learned via my senses over a lifetime. Symbolic communication enables me to import a lot, but it will still be in terms of what I’ve picked up from my inbuilt apparatuses. By contrast, a self-driving car requires access to vast databases over standard protocols to do its navigation, navigation which is still more limited than what someone who’s been driving for a couple of years has built themselves. Large language models require vast global databases of content to present the illusion of a worldview that doesn’t hold up to sustained scrutiny.

      So I agree with you to an extent, but my view is more nuanced, that it’s more of a spectrum than a sharp delineation.


  2. Trying to prove that minds require biology is equivalent to trying to prove that a computer cannot emulate a mind, which is analogous to proving that you can’t build an airplane out of Lego. I don’t see how you could prove it. Failing to get your Lego plane off the runway isn’t proof. You can only prove the converse, i.e. successfully emulating a mind using a computer. But even then, the naysayers might say “Ah! But you can’t emulate a human mind…”


    1. On the one hand, I agree. All we’ll ever be able to really test is whether an AI can give the vast majority of us the impression of being a fellow conscious being. If we succeed, there will always be those who doubt that impression, even if it’s their uploaded grandma asking how their day went. On the other hand, I would think sustained failure to make that impression over a long enough period would cause us to be more skeptical. Granted, it would never rule out someone succeeding further down the road.


  3. A couple of points before I read the paper.

    Most of the neurons in the cortex have chemical synapses. Electrical synapses seem to get involved only when speed is of the essence. That, also, might mean that messages through electrical synapses carry more urgent but less complex information.

    Learning does not necessarily require synapses, although maybe the UAL type of learning only occurs with true neurons with plasticity. There is a lot of evidence for forms of memory in single cells.

    Let me offer an extended passage of something I’ve been working on.

    The “computation” of inputs that a neuron makes in deciding to fire is far from understood, but it is complex and could vary depending upon the type and state of the neuron at the time. To perform any sort of temporal processing, however, neurons require some form of memory to hold asynchronous inputs from many neurons. Temporally, inputs arriving close in time stack together and push towards firing, but inputs too far apart dissipate in effect. Temporal encoding is frequently used to record the strength of a stimulus, with more frequent firing representing a stronger stimulus. Complex learning and memories, such as we find in humans and other vertebrates, require brains and specific structures, such as the hippocampus; however, some form of memory is required at the neuron level itself, and we can find evidence of memory-like processes in organisms or organs without a brain, challenging the idea that memory requires a centralized nervous system.

    These phenomena are often called non-neural memory, cellular memory, or behavioral plasticity without neurons. Even plants exhibit memory-like responses. The sensitive plant will stop folding its leaves when repeatedly dropped, indicating habituation. Other plants show priming effects where they respond faster or more strongly after an initial exposure. Slime molds were trained to expect periodic pulses of cold and slowed their movement in anticipation. In slime molds, memory is believed to be stored in oscillatory patterns of protoplasmic streaming or chemical gradients in the cytoplasm.

    Many of these examples can be explained with biochemical feedback loops, metabolic memory, epigenetic changes, and cytoskeletal rearrangements. Many memories in single cells are encoded in the states of protein networks. DNA methylation or histone modification in yeast and bacteria can persist across cell divisions, giving cells a “memory” of past environments. The cell’s metabolic state can carry information about past conditions and bias future responses. In slime molds, microtubule dynamics store spatial and temporal information, guiding locomotion. Changes in ion concentrations can create short-term memory of stimulation, for example, in paramecia.

    Other examples are not as easily explained. In 1906, Herbert Spencer Jennings experimented with the single-celled protozoan Stentor roeseli, which is found in slow-moving bodies of water. When he squirted them with an irritating red dye, they used their cilia to spit water at the pipette. After that failed, they would contract sharply back, bend, and hide. When they came out again and encountered the dye a second time, the protozoan would immediately contract and, when that failed, they would swim away. This showed that the organism had effectively learned from its initial experience with the dye and pursued different behaviors on subsequent encounters. Jennings’s experiments were reproduced in 2019. In the 1960s Beatrice Gelber, in some of the only research on associative learning in single cells, showed that a paramecium could learn by training to associate an uncoated metal wire with food. In 2024, Nikolay Kukushkin and Thomas J. Carew at New York University showed that human kidney cells remember patterns of chemical signals presented in regularly spaced intervals. The emerging view from biology is that forms of memory can be found in all cellular life.

    end quote
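
    To make the temporal summation described in the quoted passage concrete, here is a minimal leaky integrate-and-fire sketch in Python. It is purely illustrative; the threshold, leak rate, and weights are invented numbers, not parameters from the passage or from Block’s paper.

    ```python
    # Toy leaky integrate-and-fire neuron, illustrative only, with made-up constants.
    # Inputs arriving close together in time stack toward the firing threshold;
    # inputs spaced far apart leak away before they can sum.

    def simulate(spike_times_ms, threshold=1.0, leak_per_ms=0.05, weight=0.4, t_max=100):
        potential = 0.0
        fired_at = []
        spikes = set(spike_times_ms)
        for t in range(t_max):
            potential *= (1.0 - leak_per_ms)   # membrane potential decays each millisecond
            if t in spikes:
                potential += weight            # each incoming spike adds a fixed bump
            if potential >= threshold:
                fired_at.append(t)             # threshold crossed: the neuron fires
                potential = 0.0                # reset after firing
        return fired_at

    print(simulate([10, 12, 14]))   # clustered inputs stack together and trigger a firing
    print(simulate([10, 40, 70]))   # widely spaced inputs dissipate; no firing
    ```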


    1. Thanks. There’s no doubt that life has a lot of low level sophistication. It began as a molecular enterprise and that’s still where its foundations lie. People debate whether nanomachinery will ever be possible. To me, life has been doing it for billions of years. It doesn’t mean some of the more outlandish things people talk about with nanotech can happen (like predatory swarms), but nature has already been showing us with proteins that it can be done.

      And I’m sure Block would say these are exactly the type of realizers he was talking about. But as I covered in the post, the question is, for any particular capability, how much of that lower level organization is crucial? A muscle requires that low level machinery to do what it does, but we’ve had machines for centuries that replace animal muscle power, machines that are far less sophisticated.


      1. I would argue it is a matter of how the muscle does it, not that it does it. How does the brain compute? Why does it need consciousness (if it does)?

        At any rate, consciousness, when it evolved, had to evolve from an existing biological substrate of organisms and capabilities. I liked that Block ended his diagram on the origins of neurons at bilaterians, but he forgot about excitable membranes, cellular memory, and oscillatory patterns found in single early cells and organisms as simple as slime molds.

        Neurons and their oscillatory interactions built on that base by projecting dendrites and axons that could extend their range and connections. Chemical synapses evolved from vesicle-based secretion systems found in single cell organisms and early multicellular life. The chemical synapses in neurons that use neurotransmitters like glutamate, GABA, and acetylcholine evolved from this existing secretion machinery. It certainly is possible that the neurotransmitters themselves play a key role in consciousness. Psychedelics with chemical structures like those of neurotransmitters can dramatically alter perception by attaching to or interfering with the receptors in the dendrites.


        1. Right, but in the case of psychedelics, we know they affect the chance of activation of downstream neurons. Basically the computational mechanics are altered. And glutamate and the other neurotransmitters and neuromodulators have their effects through the way that downstream neurons respond to them. But if we have another connection weighting mechanism with similar upstream and downstream effects in an ANN, what prevents them from having similar overall effects?
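
          As a rough illustration of that point about alternate weighting mechanisms, here is a toy artificial neuron with a separate multiplicative “modulation” factor. The gain mechanism and all the numbers are invented for this sketch; it isn’t a claim about how any real neuromodulator or any particular ANN framework works.

          ```python
          # Toy artificial neuron with a modulatory gain, meant only to illustrate that
          # an alternate weighting mechanism can shift downstream activation, loosely
          # analogous to a modulator changing a neuron's responsiveness to the same inputs.
          import math

          def activation(inputs, weights, modulatory_gain=1.0, bias=-1.0):
              # The gain scales the summed drive before the squashing function.
              drive = modulatory_gain * sum(i * w for i, w in zip(inputs, weights)) + bias
              return 1.0 / (1.0 + math.exp(-drive))   # probability-like output in (0, 1)

          inputs, weights = [0.6, 0.9], [0.8, 0.7]
          print(activation(inputs, weights, modulatory_gain=1.0))  # baseline response
          print(activation(inputs, weights, modulatory_gain=2.0))  # modulated: same inputs, stronger response
          ```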

          And if those molecules have effects other than through the standard mechanisms, then what are the alternate mechanisms? I don’t rule out that those alternate mechanisms might exist. But I personally need a reason to accept them as a postulate. This is the struggle I always have with biological naturalism. It strikes me as more of an aspiration than anything.


          1. I’m not sure we know as much about how psychedelics work as you might think, but, of course, the computational mechanics are altered. We would expect altered perception and altered firings to go together.


          2. I’m not saying we have a complete understanding, but we do know they alter the dynamics of neural firing. It seems like anything which is going to affect experience has to go through that layer.


  4. [back to philosophy … woot!]

    I see you picked up on this: “Block argues that studying what consciousness does is much easier than studying what it is. This assumes there’s a difference, …”. This is the source of the frequent comment “a simulation of rain doesn’t make the computer wet”, and your observation is the correct response.

    I also picked up on Block’s framing of Functionalists seeing computation in complex systems (like LLMs) but not simpler systems, like insects. I commented on Bluesky that simpler systems can have simpler computations, but I like your observation that the computations in those simpler systems might not be all that simple.

    [I spent maybe 20 minutes writing something to add, clicked in the middle to change a word, hit backspace twice, and the whole thing went away. I’ll try again when I’m done crying]

    *


    1. [There hasn’t been much happening lately to spur me. Admittedly, being somewhat burnt out on it, I haven’t been looking that hard. I debated whether to do this post but decided I wanted to get it off my chest.]

      Yeah, the “simulation of rain isn’t wet” thing. For someone outside of the simulation, expecting it to be wet is a category error. But we would expect it to be wet for a simulated being in the same simulation. And some forms of simulated rain are wet. The milk used on the soundstage for Singin’ in the Rain counts as a simulation, and the actors got very wet. Which is to say that wetness is a functional thing, but whether it exists for any particular observer depends on its relation to the substance.

      I need to look up any discussion you had. I’ve also been burnt out on social media, so it was actually dumb luck that I saw Block’s post.

      [Sorry. If it makes you feel any better I ran into the same issues editing the post. I’m becoming progressively more unhappy with WordPress and its block editor. But I’m also not wild about the alternatives.]


  5. “In the end, I think those who think biology is crucial to consciousness need to identify what the crucial mechanisms might be, mechanisms that can’t be implemented in technology. ”

    I am struggling to understand the difference between biology and technology, if both are just causal mechanisms. What exactly is this “biology” that people want to invoke? I think we need to get to the bottom of that question, before we start asking what consciousness might have to do with “biology.” Speculation about “subcomputational processes” seems beside the point, but maybe I’m missing something.


    1. Sounds like we’re on a similar page on this one. I do think technologists have a tendency to underestimate the sophistication of evolved systems. The toolboxes in those systems have a four-billion-year head start on human engineering. So the initial thoughts in the 1950s that transistors were just like neurons turned out to be…optimistic. Even the artificial neurons used in current machine learning remain far simpler than the organic variety. The question is how much it matters. My guess is more than the technologists want it to, but less than the biopsychists expect.


      1. It may look like we’re on the same page, but in my experience we’re seldom in the same book! :-). I’m not saying that there may be no such thing as “biology” apart from highly complex technology. The point I’m making is only that the turn to “biology” as an explanation for anything is an unexamined one. To gesture towards biology and then bring up subcomputation is to let the real question slip past us.

        Were transistors initially thought to be neuron-like? A transistor is a solid-state version of a triode, so perhaps people thought the same of triodes in the days of Univac. On the other hand, a triode is an electronic version of a valve (hence the name “valve”), so perhaps people thought the same of mechanical valves in the days of steam plumbing. Someone once observed that we tend to model our theories of biological operation on the popular technologies of the day. These days it’s neural networks. Somehow the current technological substrate always feels adequate to the task, at least until the next one comes along.

        Looking down the road, will we eventually come across a technology that is adequate to explain biology? It’s natural to be optimistic. At the same time, there is something about biology — about life — that feels like a “natural kind,” and unlike many, I’m not yet ready to wave away the mysteries of “elan vital.” Perhaps, after the neural-network and “systems thinking” models have gone their way, and quantum technology has become a candidate for our model of biology, the mysteries of quantum mechanics and elan vital will simply dissolve into one another, and that will be the only answer we get.


        1. Oh, I had no illusions that any agreement would be more than narrow. But it made sense to me that, as a consistent panpsychist, you wouldn’t think there’s anything particular about biology. Although your comment about vitalism makes me think I’m less familiar with your views than I thought. (Not unusual.)

          On the transistor, I remember a quote from John von Neumann along those lines, but given the time frame, maybe it was about vacuum tubes or something. And I’m not sure about the context. But I do know early computer scientists vastly underestimated the difficulty of reproducing the brain’s operations. Reportedly, AI was originally supposed to be solved by some graduate students over a summer. I suspect there’s still some of that going on with the current AI engineers.

          We do tend to model biology in terms of our current technology, but what that saying always misses is that each generation’s analogies get closer. (See Asimov’s relativity of wrong quote.) Early nervous system analogies talked in terms of fluids and animal spirits, but without an understanding of electricity, it amounted to a useful understanding of the dynamics at the time. Later a telegraph analogy was used for the nervous system, which was less wrong than before while still limited. The design of early electronic computers was reportedly inspired, at least in part, by the brain research happening at the time, but everyone understands the views taken from it were overly simplified.

          And I do think the classic computationalists were too preoccupied with how technological computers worked. But I’d argue it was a useful step. The connectionist theories seem like a closer approximation. It won’t shock me if newer models that are closer yet take over at some point. But the ones that came before won’t be completely wrong.


          1. The panpsychist part of me suspects that agency is a fundamental feature of the universe. The vitalist part of me notes that biology is where agency becomes conspicuous.


          2. Sure, the agency could become conspicuous in machines. Some think it already has. However, the conspicuousness of agency in biology looks to be a “bottom up” kind of thing. It’s not like Adam was molded from clay and God breathed agency into the lump. The fundamental agency would have found its expression in the form of Adam organically. With machines it’s more “top down.” Building lumps of clay and then “breathing agency” into them is actually more or less what we do. Does that count as fundamental agency, or some kind of Franken-agency?


          3. I usually describe it as evolved vs engineered, but bottom up and top down work too. Of course, the top down architecture itself is a result of, and inspired by, bottom up systems. So it could be seen as a continuation of bottom up processes, evolution moving on to the next tier. And that’s before we blur the line with genetic engineering.

            A lot of people are certainly worried it will be Franken-agency. Although our agency is oriented around survival and preserving our genetic legacy. Machine agency will be oriented around what its design goals are. As long as we don’t make those goals the same as a living system, the danger isn’t eliminated, but it seems far less fraught.


          4. For me the very concept of agency suggests a “living system,” that is, a system whose concerns are those of life. Its “agency” is exactly the response to conditions that would otherwise constitute life. But this is not necessarily to claim “life” for machine agency; rather it’s to say that if a machine has agency, it cannot help but act “as if” alive. That tendency in action is the whole crux of its agency. If it acts with no self-interest, with no participation, if it’s just a servo-mechanism for someone else’s interest, then it’s not an “agent” in the sense I have in mind; it’s no more than a machine.

            In this “as if alive,” there’s room for an organic or “evolved” version, and an engineered or “designed” version. (The quotes are there because design certainly engages evolution, and whether evolution engages design is, for me, still an open question.) If we manage to engineer agency into a machine, the agency will have an uncanny “Franken” quality, which may or may not justify fear. For some reason I’m thinking of the kid in the animated film “Toy Story” who likes to attach doll heads to tanks.


    2. The biological crucial mechanisms that can’t be implemented in technology might just be part of some performances of life (biology is about living entities). Among these are evolution and emotions, which can bring an entry point for an evolutionary nature of self-consciousness. See The Need for an Evolutionary Perspective in Philosophy and in Psychology. (I agree that the relations between phenomenal and self-consciousness need to be clarified. But looking at them at the human level can be a starting point.)


    3. FWIW, the active inference folks (Friston, et al.) might suggest that the significant biological feature is homeostasis, more or less. The processes need the purpose of self-organization, self-maintenance. For myself, I recognize the role for homeostasis, in that a goal of some sort is needed, but I think the goal could be other than self-organization.

      *

      [reasons available on request]


      1. I think the mention of a “goal” here is key. You can give a technology a goal, but the technology never has a goal, if you know what I mean. Biology may be said to have a goal in this very different way. It’s a difference we can observe, and it might be the definitive difference between biology and technology — whatever that signifies. I make no further comment, except to say that dismissing crucial observations for theoretical reasons seems like poor practice.


  6. This seems to me to be the crux of the argument:

    “In any case, the main argument of this paper, presented in the section on roles versus realizers, is that when deciding whether machines or animals are conscious, we have to extrapolate from the only creatures we know to be conscious: namely, us.”

    Consciousness in organisms is involved with information, but the information it processes is biological, its output and results are biological, and how it works is biological. It is about biological information – where the organism is and how it feels. Inputs come from sensory neurons on biological circuits. Actions are generated from motor neurons. I’m good with calling this “computation” but that doesn’t mean it works anything like our current digital computers; however, I leave open the possibility that some non-biological mechanism could employ or implement “consciousness,” even though I am dubious of the possibility and of its utility.

    Another extended quote:

    Classical and non-classical computations that produce the same result are functionally equivalent. Hence, we can expect a robot able to produce the same result as a human being, but that does not require that the robot uses the same method as the brain. Consciousness is the simulation generated by the living brain that is integral to its computational processes. It is the means and manner of computation, not the computation itself. The computation can be performed on many types of hardware, but the hardware of the brain uses consciousness to do it. If we understand scientifically how the brain does this, we may eventually understand how to build an artificial device with the same capability (if we should ever have good reason to do so). Consciousness may not be limited to biological organisms, but it is likely limited to some subset of the physical forces, waves, and particles that the brain uses to produce it.


    1. “Consciousness is the simulation generated by the living brain that is integral to its computational processes. It is the means and manner of computation, not the computation itself. The computation can be performed on many types of hardware, but the hardware of the brain uses consciousness to do it.”

      Are you equating biology and consciousness? If not, I’m not clear on how you’re using the word here. If you are, then aside from an evolved lineage with molecular processes, what distinguishes biology from other systems, current or future?

      When it comes to consciousness in machines, my stance is always, tell me what is meant by “consciousness” in any particular context and then I’ll judge. I do agree that giving machines all the properties of living minds is possible, but may never be particularly useful, except in some experimental studies. Most of us want tool minds, not slave ones. (If we build a race of slave minds, we’ll likely get what we deserve.)

      For artificial companionship, I could see us skirting the boundary somewhat close. But ultimately we’d want these companions much more attentive and conscientious than actual living minds could ever be, without their own self actualizing needs.


      1. “Are you equating biology and consciousness?”

        No. “Biology” isn’t even in the quote. I’m talking about the biological brain being effectively a computer that generates subjective experience during its computations.

        My guess is it computes using large-scale dynamic patterns that arise from the chaotic (deterministic) interactions of neurons. To what extent interactions of this sort can be replicated outside the brain is an open question.

        quote from paper

        What role do large-scale dynamic patterns play in consciousness?

        Another candidate for the biological basis of consciousness, proposed by Godfrey-Smith [17,44], is the presence of large-scale dynamic brain patterns. This view fits well with the electrochemical mechanisms featured in the text because rhythmic activity in the chemical soup outside of neurons is important to driving brain waves, and we know that anesthesia and sleep involve changes in large-scale dynamics.

        end quote

        Block seems to dismiss this idea on some fairly flimsy basis in the next paragraphs, but I think this is close to the answer he seeks.

        There is empirical correlation between consciousness and complex brain patterns: sensory processing and collective activity of neurons in traveling waves and vortices. Activity occurs at a variety of speeds, and patterns can be localized or more global. When this type of activity begins to wane in the brain and simple delta waves take over, the brain is headed towards sleep or unconsciousness.


        1. My question about biology was based on the text further up in your comment where you emphasized it repeatedly.

          On large scale dynamics, could be. But for me, the details of what those dynamics are composed of seem more interesting. That’s why I like global workspace and related type theories. They’re trying to get at what’s behind the dynamics. They could be wrong of course, and I’m sure they are in many details, but when looked at in terms of attention the evidence seems more solid. But then a lot of it is, once we stop worrying about the c-word.


          1. I was a little puzzled about what you were referring to.

            I am arguing consciousness in organisms is involved with biological information. What other kind of information would it have?

            You’re not thinking of it as primarily about information about the “world,” are you?

            It is information about the “world,” but only secondarily through the reactions of its sensory neurons and their interactions with the brain’s intrinsic patterns. The “world” is the brain’s simulation from the biological information it’s been given.


          2. But wouldn’t that be true for everything? The only information the processor in the device you’re using to read this has is what’s come to it through its I/O systems. It never gets access to anything except through those mechanisms. (Aside from whatever info it has innately, just as with organisms.) It seems like every system, biological or technological, has its Markov blanket, its boundary through which it interacts with the world.


          3. Correct, that would be true for anything that could be considered a system. But here’s the difference that biology makes.

            The information of biology isn’t of the same sort as the information of a contemporary processor. Biology is primarily analog; computers are digital. Brains operate near chaos with higher-order spatial and temporal patterns emerging during consciousness and cognitive activity. That is probably how we would expect a brain to process analog and chaotic input information. Computers show no higher-order spatial and temporal patterns.

            So, we could possibly conclude that higher-order spatial and temporal patterns, which empirically correlate with consciousness, are directly involved in conscious experience.


          4. There are analog computers: https://en.wikipedia.org/wiki/Analog_computer

            They’re rarely used anymore because analog processes can be reproduced in a digital system with sufficient capacity and resolution. Digital adds quantization noise, but it only has to be less than the analog system’s own variance noise, the differences inherent in multiple runs of analog processes. It’s why we can listen to music and watch movies that were originally recorded with analog media.
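
            To put rough numbers on that point, here’s a back-of-the-envelope sketch. The analog noise floor is an assumed, made-up figure; the point is just that quantization noise shrinks with each added bit until it falls below whatever variance the analog system already has.

            ```python
            # Back-of-the-envelope comparison: quantization noise for an n-bit digital
            # representation versus an assumed analog noise floor. All numbers invented
            # purely for illustration.

            def quantization_noise_rms(full_scale, bits):
                step = full_scale / (2 ** bits)   # size of one quantization level
                return step / (12 ** 0.5)         # RMS error of uniform quantization: step / sqrt(12)

            full_scale = 1.0          # signal range, arbitrary units
            analog_noise_rms = 1e-3   # assumed run-to-run variance of the analog system

            for bits in (8, 12, 16):
                q = quantization_noise_rms(full_scale, bits)
                comparison = "below" if q < analog_noise_rms else "above"
                print(f"{bits:2d} bits: quantization noise ~{q:.2e} ({comparison} the assumed analog floor)")
            ```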

            My question for the higher order patterns is the same I had for Block: what’s the causal mechanism or mechanisms? I’m not satisfied with just broad correlations. I want a full causal / structural account without any gaps (or at least the minimal number). Until we have that, I don’t think we have understanding.


          5. Are you asking what are the causal mechanisms for the patterns? Or, how do the high order patterns relate to consciousness?

            The patterns are much like the patterns in Conway’s Game of Life. Simple rules – or in the case of the brain maybe not so simple – generate complex order in its spatial and temporal execution. Where is the causality in the Game of Life? You could say it is in the rules or the mechanism that applies the rules; however, there is no way to derive the patterns from the rules except by executing the rules. The brain self-generates much of its activity but it is constantly being perturbed by information from the organism when it is conscious.
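
            Just to make the Game of Life comparison concrete, here is a minimal sketch of the rules; the only way to see what patterns they produce (like the glider below) is to run them.

            ```python
            # Minimal Conway's Game of Life. The rules are tiny, but the patterns they
            # generate can only be discovered by executing them.
            from collections import Counter

            def step(live_cells):
                # Count live neighbors of every cell adjacent to a live cell.
                counts = Counter((x + dx, y + dy)
                                 for (x, y) in live_cells
                                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                                 if (dx, dy) != (0, 0))
                # A cell is live next generation with exactly 3 live neighbors,
                # or 2 live neighbors if it is already live.
                return {cell for cell, n in counts.items()
                        if n == 3 or (n == 2 and cell in live_cells)}

            # A "glider": five live cells whose pattern travels across the grid.
            cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
            for generation in range(4):
                print(sorted(cells))
                cells = step(cells)
            ```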

            Patterns of activity are associated with learning, memory, visual processing, even activity in the motor cortex. In the primary visual cortex of awake animals, stimulus-evoked responses from moving objects create traveling wave patterns across retinotopic maps. Experiments show that when viewing moving images or natural movies, waves propagate to match stimulus motion in a way that anticipates or predicts the future location of the object before the stimulus reaches the cortex in that location. Do we conclude those waves predicting the motion directly relate to our conscious perception of the moving object? I would think it is something that should be considered. For that matter, the retinotopic map looks somewhat like the stage for the visual aspect of the brain’s simulation.

            So, I could imagine these patterns are “felt” by neurons and are a critical part of memory and learning. By “feeling,” I mean that it generates conscious experience, that neurons – like the irritable single cell organisms with excitable membranes – feel their environment of connected neurons and react. These reactions of neurons and clusters of them are the basic stuff of qualia. As patterns spread across the brain, the information in the patterns is shared with different interests. A visual image in the visual cortex triggers the recognition of an object and then later the word for the object in different parts of the brain.

            So, why do neurons need to feel? They have to feel for the simulation to be convincing. If the simulation wasn’t convincing, then it wouldn’t provide any evolutionary value. If we didn’t believe in our pain, we wouldn’t take any action to relieve it.


          6. I’m more interested in the causal chain for behavior we usually take as indicative of conscious experience. So someone describing the red apple they’re imagining. Or someone recognizing a face that they saw fifteen minutes earlier.

            To me, the patterns you’re describing count as indicators of those types of experiences. But they don’t give us insight into how they come about and affect memory and behavior. I know a lot of the Chalmers of the world will say I’m not focusing on “real consciousness,” whatever that means. But I take myself to be focusing on what can meaningfully be studied.

            Talking about neurons feeling anything seems like a category mistake to me, mixing up different levels of description. But I’m a reductionist. I see feelings as complex processes which are composed of particular patterns of neural activity. Of course, we can play around with the definition of “feel” until we have something that can plausibly be attributed to neurons or other cells, but then, as with any time we take a deflated view of a concept, we need to ask what else it could be attributed to.


          7. My original “felt” came with quotes.

            Feeling is actually a reaction if you’re looking for a causal chain, and it’s a reaction that can trigger other reactions. I’m actually suggesting that the “irritability” found in single cells is the foundational element of consciousness. Except it gets massively scaled up in neurons.

            Have you ever looked at the avoidance behavior of a paramecium? When it senses contact with an object, the cell membranes open their gates, ions flow in, and the cilia reverse their motion to cause the organism to back up. As the membranes reset, the organism reorients and swims in a different direction. It behaves similarly with other stimuli. It is amazing and intelligent behavior in a single cell, but it is also easy to think of it as a nanomachine.
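
            Loosely, that avoidance reflex can even be caricatured as a tiny state machine, which is one way to cash out the “nanomachine” framing. The states and triggers here are invented simplifications, not a model of the actual membrane or cilia dynamics.

            ```python
            # The paramecium avoidance reflex, caricatured as a state machine.
            # Purely illustrative; real ion-channel and cilia dynamics are far richer.

            TRANSITIONS = {
                ("swimming_forward", "contact"): "backing_up",     # gates open, cilia reverse
                ("backing_up", "membrane_reset"): "reorienting",   # ion balance restored
                ("reorienting", "turn_complete"): "swimming_forward",
            }

            def respond(state, stimulus):
                # Unknown stimuli leave the state unchanged.
                return TRANSITIONS.get((state, stimulus), state)

            state = "swimming_forward"
            for stimulus in ["contact", "membrane_reset", "turn_complete", "contact"]:
                state = respond(state, stimulus)
                print(stimulus, "->", state)
            ```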

            Now scale up that nanomachine to a neuron with connections to hundreds of other neurons that enable it to obtain information from and provide information to other neurons. What’s more, it can retain information for variable lengths of time. Now bundle billions of these nanomachines together and have them reacting to each other and to sensory information. Those reactions are the feeling of consciousness.


          8. Sorry, I missed or forgot the quotes. And certainly I agree that the core of a feeling is a reaction. And in a living system, that reaction is composed of the reactions of all its components down to the cellular and even protein level. But two points.

            One is that I’m not seeing any reason why the overall reaction can’t have alternate components that produce the same overall functional role. In other words, what makes the lower level reactions a crucial realizer? And what is the mechanism of that crucial realization?

            Second, back to my point above, if we just define feeling as those reactions, then what about those reactions distinguishes them from the reactions of the device you’re using to read this?

            My own view is feelings, in the sense we commonly use the word, need more. There’s the reaction, and there’s the interpretation of the reaction. Note that in us these interpretations aren’t themselves consciously constructed. They’re pre-conscious. But they arise to the reasoning centers as impulses toward certain actions, ones which we have to decide on inhibiting or indulging in. It’s that relation, I think, which is required for what we usually mean by “feeling.”

            But as always, I see consciousness as in the eye of the beholder. I can’t say your version is necessarily wrong. I can only examine how well it relates to what most of us intuitively mean by the term.


          9. Yeah, I only put the quotes on my first usage so you’re forgiven (joke) for missing them elsewhere.

            The reactions in brain are organized into the higher order patterns from the collective activity of neurons that compute the brain’s simulation of its body and the world. The reactions themselves have causal impact in the organism, both causing reactions and being caused by interactions with other neurons. Consciousness doesn’t construct anything. It is the simulation made real for the organism so it fulfills its biological imperative: survive, thrive, and multiply.


          10. Right, this sounds like the interoceptive loop. The brain’s reactions to certain sensory input trigger reactions around the body, such as increased heart rate, blood pressure, breathing rate, muscle contractions, etc., which all come back as interoceptive input. It’s why we use the same word for the sensation of touching an ice cube and the conscious experience of an emotion. A large part of the experience of the emotion is the interoceptive feeling generated by it.

            But it’s not clear to me why a machine couldn’t have that. Not that I see it as a particularly productive way to architect one.


          11. BTW, did you see this?

            https://medicalxpress.com/news/2025-10-century-brain-oscillations-emerge.html

            “We recorded gamma activity from mice who were detecting the visual stimulus and then played it back into the brain of other mice. And when we did that, it tricked the mice into thinking they had detected a stimulus,” says Cardin.

            “Together, the findings indicate that gamma activity in the cortex supports the integration of visual information and is involved in the behavioral responses that emerge from that integration”.

            However, I wouldn’t focus exclusively on gamma waves.


  7. [done crying]

    I just finished Christof Koch’s book “The Feeling of Life Itself” which is a really good read. He describes the field of consciousness studies from the neuroscience perspective and then explains Integrated Information Theory in a very non-mathematical manner. I bring this up because his take shares a difficulty similar to those requiring some biological (meat) physicality for consciousness. Specifically, they share the p-zombie problem.

    Here’s the problem: it’s conceivable that a machine that computationally duplicates the function of a meat brain would not thereby duplicate the consciousness, but this is only true if consciousness is epiphenomenal and has no impact on anything that happens physically, like reporting a “feeling”. Both IIT and Seth’s biological requirement would say the machine is not conscious, but then they would have to explain what the machine is talking about when it says it sees a red rose, smells cut grass, and just generally has qualia. You could call it machine consciousness, but that’s the kind of consciousness I think is worth talking about. (Note: LLMs are not necessarily conscious for reporting these feelings. They definitely do not duplicate the computation going on in the brain.)

    *


    1. One thing that occurred to me as I was typing the post, was that we could make biology crucial by definitional fiat, such as Koch’s definition of, “the feeling of life itself.” By definition that excludes the feeling of machines, to the extent they might eventually have any.

      Koch does explicitly accept that IIT allows for the possibility of zombies, which to me makes the theory’s claim to being scientific tenuous. But I know Seth doesn’t buy into zombies. (Or at least he didn’t in his book.) I suspect he’d draw the line at machines ever being able to have the relevant capabilities. So he seems more willing to stick his neck out, which I think keeps him more securely in the science camp.


        1. I tend to agree. There may be some that become relevant later depending on what we’re trying to do with the machines, such as molecular machinery, but for what Block is discussing, I’m not anticipating any. I could be wrong.


        1. LOL.

          Like your CPU’s feeling of joy when it clears its registers.

          If consciousness is integral to the method of how a brain computes, it wouldn’t be epiphenomenal even if some other form of hardware could do the same computations without consciousness.


          1. On what basis are consciousness and certain types of computations the same?

            That should be something you need to prove rather than assume.

            What types of computations are they anyway? What defines the category?


        1. I can guess, of course, the answer more or less. Computations that functionally implement something that “looks” like consciousness using various human behaviors as a guide.

          But the computations in biology are governed by evolutionary logic and serve ultimately to perpetuate the genome by managing the actions of the biological organism. The computations for something that mimics a human being would have to be different in category, I would think.


          1. Ok, this is my own theory, but it’s the best one I know of, so …

            The main computation is a pattern recognition, but in addition you need a communication of that recognition and an interpretation of that communication. (Actually, the interpretation part is a necessary part of communication.)

            What you say about computations in biology is correct. And the computations a current LLM uses to mimic human speech are nothing like the computations a human uses to produce similar speech. The consciousness of a computer running an LLM is just different. But AIs can be created that use computations much more like humans’, and soon they will be, I think (I see the path, which means someone else must also). They will never be exactly like humans just because of all the molecular complexity involved, but they will be close enough and differ only in ways that my consciousness computations differ from yours.


          2. “The main computation is a pattern recognition, but in addition you need a communication of that recognition and an interpretation of that communication.”

            Agree to an extent with it, but my take would be somewhat different. Locally, simultaneously in multiple places, clusters of neurons are trying to find patterns that match their input. They use memory of prior input to attempt to find a short-cut to the matching pattern. Once a cluster finds a match or coalesces on a pattern, it generates output to other clusters. The patterns are spatial and temporal but also may have extra-dimensional properties similar to quasi-crystals. I would imagine the patterns are constructed by rules that we may be able to correlate with qualia.

            If you want my speculation, I think eventually we will be able to match patterns to cognition sufficiently that we will be able to read “thoughts” possibly even of animals. We know AI can already do this to a limited extent.


    1. Biological systems are actually kind of a mishmash. The action potential is usually described as binary, either firing or not. But the process that determines whether it fires is a messy analog one.

      There are analog computers: https://en.wikipedia.org/wiki/Analog_computer

      They’re rarely used anymore because analog processes can be reproduced in a digital system with sufficient capacity and resolution. Digital adds quantization noise, but it only has to be less than the analog system’s own variance noise, the differences inherent in multiple runs of analog processes. It’s why we can listen to music and watch movies that were originally recorded with analog media.


      1. I guess I’m a die-hard functionalist. I’m reading now “The Master and His Emissary” by Iain McGilchrist about our right and left brain hemispheres. It turns out that many or most of our actions appear to be initiated subcortically, subconsciously, and we become conscious of them only milliseconds after the fact, feeling that our thoughts initiated the action. Incidentally, humans aren’t the only animals with two brain hemispheres. Also, note that they aren’t symmetrical or redundant in their functionalities. Bottom line: I don’t think “what’s it like to be me” should be a show stopper in our trek to AGI.


        1. I’m a stone cold functionalist myself, and totally agree on the “what it’s like” phrase.

          From what I’ve read, dual hemispheres are common (universal?) among bilateral animals. Interestingly, not all animals have the connections between them that we do. Bird brain hemispheres, for example, don’t appear to have any correlate of the corpus callosum connecting the two sides of their pallium.


  8. Living beings exist for their own sake and have the will to do so; this is why they develop a sense of self-awareness. This pre-reflective form of self-consciousness corresponds to the subjective dimension of experience, and this constitutes phenomenal consciousness.
    The existence of living organisms depends on their actions, with their survival intertwined with physiological processes giving rise to agency and consciousness.
    You could argue that the functional role of consciousness is to provide the organism with just enough information about its ongoing experience to enable it to easily obtain as much information as it needs for its purposes.
    Artificial intelligence does not arise on its own, but is created; it has no will of its own and pursues no purposes of its own; rather, its computations serve purposes imposed from outside. It doesn’t care about its own well-being either.


    1. I would argue that nothing living arises on its own. The will of any organism revolves around impulses programmed into it by its genes. With artificial intelligence, the source of those impulses is different.

      I do agree with part of what you’re saying. We are crucially concerned with our own survival and the survival of anything related to our genetic legacy. It’s why most people would quickly sacrifice themselves for their children.

      Machine impulses revolve around their designed purpose. They’re not automatically going to be survival machines like us, and I think we’d be very unwise to make them so. We want tool minds, not slave ones. If we do build slave minds, we’ll likely get what we deserve.


  9. [1]So the real argument here may be more about where the crucial layer of functionality resides. [2]And of course the question is, crucial to what? [3]If we go with what is crucial for behavior, then what is the mechanism for propagating its effects?

    For 1 and 2, yes. But on 3, take a particular sensation, e.g. pain. Pain isn’t a behavior, it’s an underlying cause of a bunch of stereotypical behaviors (as well as occasional other behavior).

    Most of our words and concepts are like this, devoted to the causes of observables. The referent of a word is usually the thing (or a few things, in the case of “jade”) that explains the features that we have noticed co-occurring. For pains, that would seem to be certain patterns of neural activity.

    “Jaguar” refers to a set of animals whose traits are best explained by evolution, and that species is ultimately defined through its evolutionary ancestry. If someone brings an animal back from Tau Ceti that looks and acts remarkably like a jaguar, it’s still not a jaguar.

    My view is that some aspects of consciousness are probably best explained/understood functionally, but some are not. Lumping “consciousness” into one bucket doesn’t help, if different aspects of it have important differences.


    1. Right, but pain evolved for a host of reasons, mostly around motivating an organism to take particular actions. Of course, you or I could be in pain but resist taking any action, even wincing or expressing discomfort, because our prefrontal cortex is inhibiting the responses for some reason. Although it likely would still manifest as increased perspiration, muscle tension, or other physiological signs.

      The problem is, other than these outside behavioral consequences, how do we measure any conscious experience? We’re always stuck with either self report (behavior), or physiological correlations previously established by self report.

      The temptation here is to focus on some putative ineffable undefinable aspects of experience which supposedly have no causal effects. This is the epiphenomenal rabbit hole. But it never seems to make any evolutionary sense. If experience leads to no change in behavior, then what does natural selection select against?

      It seems like anything that could be selected can be implemented artificially, at least in principle.


      1. Let me reiterate some of what I’ve posted before I leave this topic. Thanks, by the way, for posting on the paper; I find myself surprisingly in agreement with a lot of Block’s arguments.

        Evolution, in its way, perhaps has played a cruel trick on us. The brain can only provide us with simulations of reality based on what it can model from its sense organs and its intrinsic patterns and reactions. Implicitly, we believe the simulated realities to be real because we feel our existence in our neurons firing. If we did not treat the simulation as real, it would not provide any evolutionary value. If we didn’t believe in our pain, we wouldn’t take any action to relieve it. If we did not believe in our hunger, we would not find food. Our neurons must feel to make the brain’s simulations believable. Consciousness is the simulation made real for the organism, so it fulfills its biological imperative: survive, thrive, and multiply.


      2. We can identify things and processes not just by casual observation, but also by looking under the hood. We can X-ray them, or do genetic testing. We can try to see how they work. We can look at their history (that’s ultimately how we tell a Tau Ceti “jaguar” from the real thing). And so on.

        If you want to be a thoroughgoing naturalist, you have to posit that linguistic meaning (e.g. “jaguar” refers to any jaguar) is also a natural phenomenon. Likewise cognitive meaning (e.g. you were just thinking of jaguars). How does that work? Well, for starters, the world contains clusters of objects and processes – things that have the same or similar explanation. We invent words in order to share cognitive attention toward one or another cluster, with other humans. These are the clusters that actually exist (and make an impression on us) in our shared environment at the time of coining the word. That’s why the Tau Ceti “jaguars” don’t count – they arrived too late, and they have a different explanation than the ones we know and love.

        Of course, we could abandon some of our ways of talking and adopt new ones. We could say, forget pain, what matters is fain! Where fain is defined functionally. You could certainly argue for that. But I think if you say, we’re already there, you’re cheating.


        1. Not sure I understand what you’re trying to say with the jaguars. But you and I usually mean different things by “function” here. I think you usually mean something like teleofunctionalism, while I mean causal functionalism, or structural realism. In that sense, for any instance of pain, there are upstream causes and downstream effects. Would you agree that if we had that mapping, we’d have pain mapped? If not, then what would be missing?

          And if we have that mapping, is there anything that would prevent us from implementing a similar structure in technology, even if with different components? If we reproduced the effects from similar causes, what would be the mechanism for any difference in the components to matter?


          1. When I say “functionalism” I mean behavioral-science functionalism. I.e., the inputs and outputs of the system that matter to mentality, on such a functionalist view, are those that are in the domain of behavioral science. (If one digs down to microphysics, or even to substrate, one has gone too far.) Now, a lot of these inputs and outputs are characterized by behavioral scientists in biological terms. And biology is chock full of teleonomics. So that is probably why you’re seeing “teleofunctionalism”. And I think this behavioral-science version is the closest thing to a standard way to understand “functionalism” in philosophy of mind.

            Your structural realism definition is unusually broad. I think some philosophers of science would be very surprised to find that they are automatically “functionalists” in phil mind because they are structural realists in phil sci.

            Moreover, I don’t think you yourself are content to settle for such weak sauce functionalism. Nobody would be sympathetic to the unfolding argument just because they were structural realists. (“The main point the authors make is that any output that can be produced from a recurrent feedback network, can also be produced from an unfolded feedforward network with ϕ (phi), the metric IIT uses to supposedly measure the amount of consciousness present, equal to zero.” I.e., a whole different structure!)


          2. I think my understanding of functionalism is the one most common among self-described functionalists. The one you describe sounds more like historical behaviorism. Functionalism began as a reaction to the weaknesses of both behaviorism and type identity theory.

            From David Chalmers, who has his own issues with functionalism but does a decent job describing it:

            These problems were finessed by what has become known as functionalism, which was developed by David Lewis (1966) and most thoroughly by David Armstrong (1968). On this view, a mental state is defined wholly by its causal role: that is, in terms of the kinds of stimulation that tend to produce it, the kind of behavior it tends to produce, and the way it interacts with other mental states. This view made mental states fully internal and able to stand in the right kind of causal relation to behavior, answering the first objection, and it allowed mental states to be defined in terms of their interaction with each other, answering the second objection.

            Chalmers, David J., The Conscious Mind: In Search of a Fundamental Theory (Philosophy of Mind) (pp. 14–15). Oxford University Press. Kindle Edition.

            …and…

            Functionalism can be seen as a version of structuralism, where the emphasis is put more squarely on causal roles and causal powers.

            Chalmers, David J., Reality+: Virtual Worlds and the Problems of Philosophy (p. 429). W. W. Norton & Company. Kindle Edition.

            The key in equating them is to understand that a causal role is basically a type of structural relation.
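            As a rough sketch of what I mean (the class names and numbers below are made up purely for illustration, not from Chalmers or Block), two very different realizers can fill exactly the same causal role, which is all a role-based definition looks at:

                # Minimal sketch: one causal role (stimulus -> response profile),
                # two different implementations. Details are illustrative only.

                class BiologicalRealizer:
                    """Stands in for an organic process filling the role."""
                    def respond(self, stimulus: float) -> float:
                        return max(0.0, stimulus - 1.0)  # graded response above a threshold

                class SiliconRealizer:
                    """A differently built process filling the same role."""
                    def respond(self, stimulus: float) -> float:
                        return stimulus - 1.0 if stimulus > 1.0 else 0.0

                def causal_profile(realizer, stimuli):
                    """Characterize a state purely by what it does with inputs."""
                    return [realizer.respond(s) for s in stimuli]

                stimuli = [0.5, 1.0, 2.0, 3.0]
                assert causal_profile(BiologicalRealizer(), stimuli) == causal_profile(SiliconRealizer(), stimuli)

            If the role is all that defines the state, nothing in this comparison favors one realizer over the other.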

            What specifically about this understanding of functionalism would you say undercuts the unfolding argument? I know IIT advocates often talk about the causal structure being crucial, but they seem determined to ignore that causal structures can be implemented at different levels.


          3. I’m not attributing behaviorism to you. Everything I said is perfectly consistent and congruent with your quotes from Chalmers. I even agree, technically, with Chalmers that functionalism is a form of structuralism. It’s just not compatible with all forms of structuralism. Such as mine.

            A generic structuralist – one who thinks that the structure of interactions is all there is to reality, but is not committed to any narrower class of interactions as defining mentality – would not accept the unfolding argument. That argument compares two different structures, and says that if we attribute consciousness to one, we have to attribute it to the other. Why? On what basis must we do so?

            Maybe I got this wrong, but I seem to remember from earlier conversations that you think functionalism is just structural realism (maybe together with physicalism), applied to the mind. If that were true, then any Ontic Structural Realist would automatically be committed to functionalism.

            Causal diagrams, and structural equation modeling, map causal flows from outside the system of interest, through its internal workings, and back outside. But in standard functionalism, the external causes and effects of interest aren’t specified in micro-physical terms. And neither are the internal workings of the brain, unless of course the microscopic details are shown to be crucial to achieving the pattern of responses that are being studied. Similarly for substrate. Do you think I’m unfairly restricting the scope of “functionalism” by these statements?


          4. “unless of course the microscopic details are shown to be crucial to achieving the pattern of responses that are being studied.”

            I think this is where you’re characterizing a view narrower than the one most functionalists take. Yes, the details under discussion have to make a difference, both in behavior or other survival terms, and to the system itself, in its own modeling of itself. And this is in principle, not tied to any particular study.

            So a functionalist is open to the possibility that dynamics at a lower level could be relevant. But we want to know how they are relevant, the mechanism by which they become relevant. Basically the same rules apply. What is its causal role for observable behavior, including what the system can account for itself? If the system itself can’t account for it, not even in principle, then in what way is it relevant?

            Crucially, unlike behaviorism, functionalism doesn’t ignore internal states. But those states have to be relevant, although the relevance can be just for other internal states.

            Hope that helps.


          5. I don’t think we disagree about what functionalism is, then. The person’s self-understanding will reveal itself in behavior, which makes it count for functionalists under my definition, as do any internal states strictly necessary for such a self-understanding.


          6. Our discussions have implicitly assumed that self-consciousness is a performance of our human minds. “Being able to think about our own entity” has always been an implicit and unavoidable background in our various positions, and I feel that point should be made explicit to introduce the functionality of self-consciousness as a subject. That possible functionality addresses many perspectives (self, conscious self, anticipation, language, free will, anxiety management, …). Analysing today’s human self-consciousness is very complex, and perhaps looking at its evolutionary origin in primate evolution can be an exploitable entry point (https://philpapers.org/rec/CHRSCR-3). More is to come on that, where a lot remains to be done.


          7. Upon further thought, I think even a traditional functionalist should reject the unfolding argument! Functionalism allows consideration of the causal flows between internal mental states of the organism. That’s its biggest advantage (IMO) over behaviorism. The unfolding argument requires functionalists to limit their use of these internal patterns; to declare a whole swath of differences in patterns (recurrent vs unfolded) irrelevant.


          8. I missed this comment before my last reply, although I think I ended up addressing it, at least to some extent.

            However, in at least one of my posts about the unfolding argument, I did point out that there are functionalist reasons for rejecting it, in practice if not in principle. Recurrent processing is essentially looping, which any programmer knows allows for a lot more functionality in a given amount of code. Recurrence allows for a lot more processing to happen in a given neural substrate. It’s why the cerebrum can do what it does with 16 billion neurons, while the cerebellum needs 69 billion for its role. But speed is essential in muscle coordination, so it’s a good tradeoff for the cerebellum. The cerebrum seems to be for longer term planning.

            So there are functional reasons for recurrent processing. They’re just not reasons the typical proponents of IIT want to avail themselves of.
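            To make the looping point concrete, here’s a toy sketch (the arithmetic and step count are arbitrary, purely illustrative): the same input-output mapping written once as a reusable loop and once unfolded into straight-line steps, which is roughly the move the unfolding argument makes.

                # Minimal sketch: a "recurrent" computation as a loop, and the same
                # computation "unfolded" into feedforward, straight-line steps.

                def step(state, x):
                    """One processing step: blend the current state with a new input."""
                    return 0.5 * state + x

                def recurrent(inputs):
                    """Reuse the same step in a loop (compact, like recurrent circuitry)."""
                    state = 0.0
                    for x in inputs:
                        state = step(state, x)
                    return state

                def unfolded(x1, x2, x3):
                    """The same mapping written out step by step (no loop, more 'hardware')."""
                    s1 = step(0.0, x1)
                    s2 = step(s1, x2)
                    s3 = step(s2, x3)
                    return s3

                assert recurrent([1.0, 2.0, 3.0]) == unfolded(1.0, 2.0, 3.0)  # same output, different structure

            The loop version handles any number of inputs with the same small piece of code; the unfolded version has to grow with the input length, which is the efficiency tradeoff I’m pointing at.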


  10. Following Block’s preference to look at what consciousness does rather than what consciousness is brings up concerns that we have been implicitly living with during our discussions. One is relative to the type of consciousness we were using: was it Phenomenal Consciousness (PC) or Self-Consciousness (SC)? The former is about experience (the “what it is like” question) and per se highlights few evolutionary advantages. SC is different. It can be defined as “the capability to represent one’s own entity as existing in the environment like conspecifics are represented, with the ability to think about that representation”. Such a formulation brings together social cognition, self-representation, and reflexivity/reflectivity, with associated functionalities and evolutionary advantages. Relating consciousness to SC or to PC makes the impact of biology on consciousness significantly different. (Regarding the postulate of Pre-Reflective Self-Consciousness, I feel it should soon become part of an evolutionary approach to SC. See https://philpapers.org/rec/MENRAA-4.) Should we think about reformulating some positions that came up in our discussions with a better identification of what we mean by consciousness?


    1. Block did distinguish between what consciousness is and what it does, but he didn’t really express a preference for does. He’s the one who originally made the phenomenal vs access consciousness distinction, and seems to have spent his career since then looking for evidence that phenomenal consciousness exists in some way independent of access consciousness.

      But I completely agree that everyone needs to be more precise about what they mean by “consciousness.” Unfortunately, in my experience, that very assertion is contentious. I often wonder if there’s any real value to studying consciousness scientifically. We can study self report, memory, and a number of other capabilities, but which is “conscious” and which isn’t seems to instantly get us stuck in a definitional quagmire, with people talking past each other and often not even realizing it.


  11. Just an observation, and I’m not exactly sure myself what to make of it.

    Life is an open system. Technology tends toward closed (or nearly closed) systems. Life is constantly exchanging matter with its environment: food, water, gases (oxygen and CO2 in our case). The last thing a technologist would want is water in the computer.

    Even when we look at the lowest levels in biology, the action of excitable membranes results in an exchange of ions with the environment. The avoidance behavior of the paramecium is produced by the influx of Ca2+ ions through channels in the cellular membranes. When neurons fire, they exchange matter with their environment.

    Technology can derive energy and information from the environment, but it doesn’t usually exchange matter in the way biology does. The gasoline in a car provides energy but it doesn’t become incorporated into the engine block.

    Maybe the philosophers, especially the panpsychists, could make something of this observation.


    1. It’s certainly a difference between life and current technology. But I think as nanotech and genetic engineering progress, and eventually blend into each other, that will change. Imagine a future car that’s able to replace its own components and repair wear and tear by “eating” the right materials. Or a bridge that is constantly using photosynthesis, pulling carbon out of the environment, to keep its structure repaired and robust.

      And we’ve talked about self replicating interstellar probes, probes that have to be able to not only repair themselves, but build new ones. A number of thinkers have noted that we could regard them as a new form of life. (Some fret about what happens if they get out of control.)


        1. I think it definitely requires an open system, at least open in some fashion. Energy and information, at a minimum, have to come across the boundary. (Unless we want to imagine a totally isolated conscious system; but what would it be conscious of?) So at least photons and electrons have to cross, as with the device you’re using right now.

          Are hadrons (protons, neutrons, etc.) specifically required? For me, as always, the question is: what’s the mechanism?


          1. You may be thinking of an isolated system as a closed system. Closed systems allow energy and information to pass but not matter. Robots would be closed systems even with “eyes” and “ears.” A cyborg might be an open system.

            “What’s the mechanism?”

            I’m not sure what sort of answer you are looking for. Could you give me an example of an answer, even if you can only think of bad ones?

            It seems like you are assuming consciousness is something other than matter, energy, and forces; therefore, an explanation needs to explain how it comes from them. If McFadden or Eric says consciousness is the EM field, what more needs to be explained to provide a “mechanism?”


          2. I think in terms of capabilities. So when I ask about what the mechanism is, I’m asking what about the proposed dependency is necessary for the type of behavior, the type of abilities, that lead us to infer a conscious entity.

            For example, Block cites the chemical aspects of neural processing. He looks at inhibition, but has to admit there are other ways to inhibit other neurons. He then discusses learning. Chemical synapses definitely seem crucial to the way brains do learning, at least long term learning. But for the dependency he’s looking for he has to go further. He has to establish that the chemical component is the only way any system ever could do that type of learning, or at least a way that is completely inaccessible to current technology.

            For a more positive case, consider the proposition that a neural-like structure is crucial. That actually may well be the case. Most of the recent success with AI has been with neural structures. They seem to have left the old symbolic strategies in the dust. Of course virtually all of it has been with virtual neurons, so while the neural-like structure is crucial, it’s not established that the biological way is the only way to do it.
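            To be concrete about what I mean by a virtual neuron, here’s a minimal sketch (the weights and inputs are arbitrary, purely illustrative): just a weighted sum and a squashing function, realized in ordinary arithmetic rather than in membranes and synapses.

                import math

                # Minimal sketch of a "virtual neuron": a weighted sum of inputs plus a bias,
                # passed through a sigmoid to give a firing-rate-like output.
                def virtual_neuron(inputs, weights, bias):
                    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
                    return 1.0 / (1.0 + math.exp(-activation))

                print(virtual_neuron([0.2, 0.9], [1.5, -0.4], bias=0.1))  # roughly 0.51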

            And of course I talked, somewhere in this thread, about functional reasons why recurrence is important. But functional reasons pose no barrier to virtual recurrence, or to other mechanisms that are able to accomplish the same causal role.


          3. It’s perfectly clear that a non-biological machine could have the capabilities for some of us to infer a conscious entity. So, if that is your criterion for consciousness, I would agree it could be non-biological.

            But I would argue a machine could meet that criterion without requiring consciousness. Can you explain why it would require subjective experience? What would conscious experience do or provide for such an entity that is required for its capabilities?

            If the capabilities are the same as consciousness, then the argument is tautological.


          4. So you buy philosophical zombies? Or at least behavioral ones? If so, how could subjective experience ever evolve? It seems like natural selection can only select on behavior or other adaptive or maladaptive traits.

            I think in a healthy / fully functional system the capabilities and the experience go together. (They’re clearly separable in a damaged or sick system, like in the case of locked in patients.) I’m a functionalist. I expect capabilities and their support structures to be the whole story. It’s not clear to me what it even means to say it wouldn’t be. Is that tautological, in the sense of being true by definition? Maybe it should be. Yet many still insist it’s false.


          5. Let’s build a mechanical arm, one that has the capabilities of a human arm.

            We hook up some cables and pulleys to steel rods of various sizes and one or more electric motors. We cover it with a plastic that looks like skin. We operate it with a small controller and have it pick up an object and lift it. It has the capabilities of an arm. It has “armness.” Most people would call it an “arm.” People tend to reuse words rather than invent new ones.

            But the mechanical arm tells us almost nothing about how a human arm works. Aside from sharing some basic mechanical principles with the mechanical arm, the human arm works with muscles and nerve firings in a coordinated fashion to perform grasping and lifting. We don’t have motors in our arms unless you now want to extend the meaning of “motor” (an electric one, no less) to include muscles. Do “motor” actions in an organism really mean we have mechanical motors, like the tiny electric one in the mechanical arm, all over our anatomy?

            Is the electric motor in the mechanical arm the same as a muscle because both are defined WHOLLY by the same causal role? That’s the problem: they aren’t wholly defined by their causal roles. They are defined by their causal roles AND how they are implemented – their materials and processes. What would make mental states any different in this regard?


          6. Muscles could be said to be motors, in the sense that both convert a type of energy (electrical vs electrochemical) into motive force. And the nerves that connect to muscles are called “motor neurons.” Your engineered arm has pulleys and cables. The biological version has tendons and joints.

            Of course, what you describe doesn’t have all the capabilities of a human arm. The sensors throughout the internal components and on the surface (skin) are missing. If we add all that and attach it to a human being (say, Steve Austin the Six Million Dollar Man) then he has an arm there, functionally equivalent for many purposes.

            But it’s not completely functionally equivalent. Steve’s arm isn’t self repairing. (At least it wasn’t in the old show.) And it tended to glitch when he was in microgravity, and other situations convenient to the 1970s TV show. It can’t receive much energy from his body.

            Why wouldn’t minds be the same? I think they would. If we want a system that can model itself in a world, have preferences about the state of those models, be able to simulate its actions in the past or future, or be able to reproduce a number of other capabilities we associate with minds, I don’t think there are any barriers in principle to a machine eventually being able to do that.

            On the other hand, if we want it to replace a brain in a biological body, receive energy from that body, provide homeostasis management, and a host of other functions, then that’s technology still far in the genetic/nano engineering future.


          7. I guess you do want to make “muscles” motors. Or, is it “motors” muscles if they are the same?

            No doubt some bumpkin in his first biology class thought motor actions in an organism meant it had real motors, but most people don’t. My example doesn’t include everything a real arm does, but it includes more than enough for most people to call it an “arm,” just as a small subset of the “capabilities” of a brain could probably entice people into calling a machine conscious. Google “Blake Lemoine.”

            But the real problem is with the phrase “wholly defined.” Why are mental states WHOLLY defined by their causal effects? What else, if anything, is wholly defined by its causal effects? I suspect it is a remnant of substance dualism where the mind substance is simply replaced by an amorphous set of interacting capabilities without physical existence. Substrate independence follows from it. A “capability” doesn’t have a physical existence. It is something abstracted by our minds from observations of what something does, but usually it is only a subset of what something does. And it certainly does not wholly define what something is.

            “On the other hand, if we want it to replace a brain in a biological body, receive energy from that body, provide homeostasis management, and a host of other functions…”

            Critical to causal roles is a degree of compatibility of materials. An electric motor needs a battery or a cord. It won’t run on glucose. Prosthetics that directly connect to the nervous system only interact at discrete points and in limited ways. Even then, their integration usually requires weeks and months of specialized training of the actual nervous system to make them work.


          8. “What else, if anything, is wholly defined by its causal effects?”

            What isn’t? You just pointed out that we’d use the word “arm” for an appendage that was sufficiently similar in its causal profile. The same applies for mouse traps, music players, and cars. David Chalmers, in his book Reality+, points out that even solidity is a functional role, one of resisting penetration. Even a chemical element can be simulated and fulfill the same roles within the simulation that a particular count of protons does in the outside world.

            There’s no substance dualism with functionalism. If there’s a dualism, it’s the one between causal roles and specific implementations, between the relations with other entities vs the relations within. It’s the same dualism that allows each of us to use the same WordPress site even if we’re using radically different hardware.

            I actually think positing requirements for consciousness with no identifiable mechanism is much more driven by remnant intuitions from classic interactionist dualism. But I learned a long time ago that hitting anyone but self described dualists with that label isn’t persuasive.


          9. “mouse traps, music players, and cars”

            They all have physicality in themselves; hence, not WHOLLY defined by causal effects.

            The closest things might be gravity, electromagnetism, and other forces, but that would put “mental states” on par with other fundamental (or near so) aspects of physics. But even the forces have associated particles.

            Functionalism does a sleight of hand to hide its dualist roots. Dualism’s big problem is explaining how mind can have an influence or control on the physical body. Functionalism slickly redefines the “mental” as something wholly defined by its effects to eliminate the problem and frame itself as a form of physicalism.


          10. “hence, not WHOLLY defined by causal effects.”

            What non-causal, or more broadly, non-structural or non-relational aspects would you say they have? I’m a structural realist, so I’d find any specific identification of these types of properties very interesting.

            “And, if you want to say, we only know the physical by its effects, then you need to explain why rocks, mouse traps, music players, and cars aren’t conscious.”

            Why would I need to explain that? Functionalism doesn’t say every effect is conscious, only particular types of functionality, of structure and relations. It no more says that every cause and effect is conscious than saying Tetris is computation means every computation is Tetris.


          11. The problem is that the first part of Chalmers’s definition of functionalism, “a mental state is defined wholly by its causal role,” tells us zilch if everything is defined by its causal role.

            If everything (including even the “solidity” of matter) that is defined by its causal role also counts as a function, then functionalism becomes a universal explanatory tool that can explain everything. If it explains everything, it doesn’t explain anything and, in particular, it doesn’t explain what distinguishes the physicality that results in a mental state from the physicality that doesn’t produce a mental state. It provides no criteria for selecting which effects are conscious and which are not.

            Apart from that, the chain of causal effects in an organism is vastly different from the chain of effects in a machine when we look at the microlevel. The only way we can regard a muscle to be functionally the same as a motor is by inventing a higher-level, abstract function (or capability) we might call a “mechanical power source.” But a “power source” is simply a concept. It has no physicality whatsoever (apart from whatever physicality it inhabits in our brains) until it is implemented with a particular chain of causality.

            But it is in some set of higher-level functions without physicality that you want to find the effects you think are conscious. That’s why I think functionalism, at least as you are applying it, is hiding its non-physicality.

