The substitution argument

A preprint titled “Falsification and consciousness” came up a few times in my feeds.  The paper argues that all the major scientific theories of consciousness are either already falsified or unfalsifiable.  One neuroscientist, Ryota Kanai, calls it a mathematical proof of the hard problem.  Based on that description, it was hard to resist looking at it.

This paper could be seen as building on an argument from a paper I shared last year.  The “unfolding argument” attacked the Integrated Information Theory (IIT) of consciousness, focusing on IIT’s requirement for feedback processing, and pointed out that the same input-output relation could always be obtained from an “unfolded” feedforward version of the neural network in question.  The argument was that this requirement made IIT unfalsifiable and therefore unscientific.  (Some IIT researchers later responded.)  A toy sketch of the unfolding idea follows.
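
Here’s a minimal sketch of my own (not from either paper): a tiny recurrent network, and a feedforward “unrolled” version built from frozen copies of the same weights.  One has a feedback loop and one doesn’t, but their input-output behavior is identical.

```python
# Toy sketch of the unfolding argument (my illustration, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 2))    # input weights
W_rec = rng.normal(size=(4, 4))   # recurrent (feedback) weights
W_out = rng.normal(size=(1, 4))   # readout weights

def recurrent_net(xs):
    """One set of units updated over time; h feeds back into itself."""
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

def unfolded_net(xs):
    """A feedforward chain: one distinct layer per timestep, each a frozen
    copy of the recurrent weights, with activity flowing strictly forward."""
    layers = [(W_in.copy(), W_rec.copy()) for _ in xs]
    h = np.zeros(4)
    for (Wi, Wr), x in zip(layers, xs):
        h = np.tanh(Wi @ x + Wr @ h)
    return W_out @ h

xs = rng.normal(size=(5, 2))      # an arbitrary input sequence
assert np.allclose(recurrent_net(xs), unfolded_net(xs))  # same input-output relation
```

As I understand IIT, a purely feedforward system like the unrolled version has zero Φ, so the two networks would differ in predicted consciousness while being behaviorally indistinguishable.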

The current paper gets into a mathematical formalism, but the gist is that the authors distinguish between physical systems like a brain, our observations of those systems, and the predictions and inferences we make about the conscious experience happening in those systems.

In the paper, an inference is what an experimenter infers about conscious experience based on behavior, such as a report from a test subject.  A prediction, on the other hand, is what the theory predicts about conscious experience based on the result of a brain scan or similar observation.  The authors note that for many theories, inferred and predicted experience are treated as independent variables, albeit ones that hopefully converge in experiments.

But, the authors point out, the output of such a system can always be produced by another physical system with different internal structures and processes, a substitute architecture, leading to the same inferences but not the same predictions.  Therefore, they argue, all such theories are already falsified.  They emphasize that this affects not just IIT, but other scientific theories such as Global Workspace Theories (GWT) or Higher Order Thought Theories (HOTT).  They call this the substitution argument.
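
In rough notation (my own paraphrase, much simplified from the paper’s formalism): observing a physical system yields data o, split into internal observations o_i (brain scans and the like) and report-type observations o_r.  The theory and the experimenter then map these to experiences:

\[
\mathrm{pred}(o_i) = e_p, \qquad \mathrm{inf}(o_r) = e_r
\]

A theory is falsified whenever e_p ≠ e_r for some system.  The substitution move is to build a system with the same report-generating outputs, so inf yields the same e_r, but with different internals, so pred yields a different e_p, manufacturing exactly that mismatch.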

The authors also take on theories that posit a strict dependence between inference and prediction, such as behaviorism, and access-only interpretations of GWT, HOTT, and others.  These theories or interpretations are typically framed strictly in terms of behavioral dispositions or access dynamics.  The issue, it seems, is the lack of an explicit accounting for phenomenal experience.  Here, it’s argued, the theories, while not already falsified, are unfalsifiable.

If the authors are right, then scientific theories of consciousness start to look hopeless.  In their concluding remarks, they argue that theories could try to avoid the strict independence that leads to falsification, or the strict dependence that leads to unfalsifiability, by trying for something they call “lenient dependency”, although it’s unclear what this would look like.  Or they could assume that physics is not causally closed, a move few scientists would likely be enthusiastic about.

However, I think the paper’s argument is developed under an implicit assumption: that a theory claims to be the one and only identity explanation for consciousness.  IIT, as advocated for by Christof Koch, falls in this category.  Koch argues that only systems with a high Φ (phi) are conscious, and that every system with a high Φ is conscious, even if it doesn’t appear to be.

Conversely, Koch argues that although a Von Neumann type computer (like the device you’re using right now), or an unfolded network, can in principle reproduce all the behavior of a conscious system, the fact that it has a low Φ means it will not be conscious, that it will in fact be a behavioral zombie (a weaker version of a philosophical zombie).  Under the paper’s argument, such a system would falsify IIT.  I’m sure Koch and other IITers would disagree, simply arguing for the zombie scenario.

I think the paper’s argument has less force for theories that make no claim to universality, that aim only to explain consciousness in human or vertebrate brains.  And most advocates of GWT, HOTT, and other similar theories are functionalists, that is, people who see consciousness as functionality.  In principle, functionality can always be reproduced with alternate mechanisms.

To a functionalist, these theories aren’t about finding the one and only way to produce consciousness, just how it is produced in the systems we can currently study.  A functionalist will usually be open to the same functionality being produced by alternate means, such as a sufficiently fast Von Neumann machine.

There is the charge of unfalsifiability leveled at behaviorist and access theories.  But this is a long-standing criticism of these theories, and it assumes that something other than a functional explanation is required for phenomenal experience.  (A notion which itself seems unfalsifiable.)

All of which is to say, it’s not clear any of these theories are taking hits here they haven’t already taken.  Some will see added credibility due to the formalism, but the formalism only makes the criticisms more precise.  You still have to buy the underlying assumptions for them to have force.

Unless of course I’m missing something?

75 thoughts on “The substitution argument”

  1. To quote Mary Elizabeth Winstead in 10 Cloverfield Lane: Oh, come on! It’s bad enough when someone embraces the conclusion of a reductio ad absurdum argument – in this case, the idea that a Giant Look-Up Table must be conscious because it performs the same behaviors. But to then use that thesis as a premise in an argument against other views – well at the very least, it isn’t a very good move dialectically.

    I’m having a little trouble with your explanation of a “prediction” because you use “what the theory predicts” in the explanans. I assume you mean things like: the subject will take the apple and refuse the orange. And such.

    1. In the paper’s formalism, the theory makes a prediction about conscious experience based on observations of the system itself, such as brain scans. Experimenters make an inference about conscious experience based on other observables, such as subject report. As I noted in the post, ideally these converge. It’s a problem if they don’t.

      The idea is that if the theory is presented with the same behavioral output but with a different system, then it will make incorrect predictions. As I noted, this is really only a problem if that theory claims universality. If we accept that with a new type of system, we may need either a new theory, or adjust the current theory, then most of the problem seems to go away.

      There are some theories though that are rigid about this, as I noted in discussing IIT. But it’s a long-standing issue with IIT.

  2. I got bogged down about half-way through this paper, so I’ll withhold comment on it until I get it done. Meanwhile, you should check out this paper, which is in a similar vein: https://www.tandfonline.com/doi/full/10.1080/17588928.2020.1772214. I think it makes a better, clearer case for the criteria for a theory of consciousness, and why the current ones, as currently articulated, don’t meet enough of them, allowing room for clarifications of the theories which could fix this.

    I note that the proponents of the Information Closure Theory of consciousness claim [by tweet] to meet the criteria in the linked paper, but I need to review that theory to double check.

    Which theory can be found here: https://arxiv.org/abs/1909.13045

    *
    [ahem]

    1. I don’t see how information closure explains subjective experience. It may be a valid description of how the brain works, but it doesn’t really deliver an explanation for consciousness. It doesn’t make a compelling argument for why information closure is unique to conscious entities.

      The criteria paper you reference – isn’t that the same point addressed here in another post?

      https://selfawarepatterns.com/2020/07/15/hard-criteria-for-theories-of-consciousness/

    2. Incidentally, I still am not sure I completely follow the information closure argument, but it sounds a lot like the idea that the brain generates consciousness primarily from its own endogenous activity. In other words, consciousness is like an emulation of the external (and maybe internal) world, and somewhat insulated from it. I am definitely coming around to that viewpoint and have written about it.

      https://broadspeculations.com/2020/08/04/brain-as-emulator/

      1. For a minute there I was all “Wait, wuh? How could I miss that?” I was getting a little worried about early-onset … . But then I looked up my only comment there, which was “ I think I will hold significant discussion until you have read/commented on two other recent works:”. I withheld comment, and significant thought, and so, memory. Sorry.

        *
        [okay, maybe early-onset …]

        1. Ha. No worries. I actually started reading a paper the other day, with a strong feeling that it seemed familiar, before realizing I had done a post on it already several months earlier. Hard to keep it all straight.

  3. Theory and experimentation perform a kind of “leap frog” act. Sometimes theories guide research wonderfully well and other times they do not. So, when theories do not suggest directions to pursue, experiments do. Experiments often produce data that theories struggle to explain, and thus it goes. Each aspect of science leads for a while, causing the other aspect to grow to catch up. Then they switch roles.

    Sounds like it is time for more consciousness research data, as the theories seem to be mired in problems.

    1. As I note in the post, I don’t think the issues raised by this paper introduce any new difficulties.

      The greatest difficulty I see is that consciousness remains a muddled concept. Because of that, everyone can look at the same empirical data and disagree about what it means. I often think that trying to study consciousness scientifically is like trying to study love. You can’t really do it. You might be able to study various physiological, psychological, and social factors that lead people to talk about it in certain ways, but you won’t find it as an objective thing in nature.

  4. “A functionalist will usually be open to the same functionality being produced by alternate means, such as a sufficiently fast Von Neumann machine.”

    The functionality that we are trying to explain is phenomenal consciousness. No quality or quantity of observable, measurable behavior can demonstrate subjective experience.

    1. That’s the conventional wisdom, but I’m increasingly skeptical of it. We can learn about the workings of a system to an arbitrarily precise degree, until we get to the point where we know why it says, “I’m having a subjective experience.”

      True, we can never have that system’s experience, because we can never be that system. But my laptop can never be an iPhone. It can never be in the same informational state as the iPhone. It can have the state of an iPhone as a subset of its overall state, in the form of emulation software, but that still isn’t being the iPhone.

      1. Learning about the workings of a system and duplicating its functions by alternate means are somewhat different things. Of course, we can learn about the workings of the brain and develop theories about how it is working and how it is generating consciousness. But simply duplicating some arbitrarily selected set of functions to the point that the outputs seem similar to the outputs of a conscious being doesn’t really provide insight into how the brain is generating consciousness.

        1. James,
          I’m somewhat on board with Philosopher Eric’s rendition that the physical system we call a brain generates a completely new system, one that emerges from the physics and chemistry of the brain. Although I would not use the analogy of a dual computer or one being conscious and the other not. I would eliminate the biases we hold and express them as systems. This could be a common vocabulary that could be agreed upon in order to reach a consensus of sorts.

          I’m inclined to think that what emerges as a separate and distinct system is what we refer to as the Cartesian Me. If a physical material world is the substrate for all systems, then the Cartesian Me would not be an illusion; it would be a physical system, one that emerges from the brain and one that is quantum.

          This explication fits very well with Relational Quantum Mechanics.

          Peace

          1. I agree it is a physical system. The question is what kind and how does it work?

            I think the workings of the brain certainly exhibit wave-like properties. This can be seen through the oscillatory patterns that are present in all brains, apparently even down to arthropods. What’s amazing is that the frequencies even seem to be conserved at least across mammalian species and seem to have a natural logarithmic relationship.

            https://www.researchgate.net/publication/228514491_Natural_logarithmic_relationship_between_brain_oscillators

            The question would be whether these wave-like properties are essential to generating consciousness and, if they are, whether they work through electromagnetism, quantum mechanisms, or some other way.
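
            As a rough worked version of what that logarithmic relationship means (my own toy numbers, not from the paper): if neighboring band center frequencies form a geometric progression,

            \[ f_{n+1} \approx e \cdot f_n \quad\Rightarrow\quad \ln f_n \approx \ln f_0 + n \]

            then starting from, say, 2 Hz, successive bands would sit near 2, 5.4, 14.8, and 40 Hz, and their natural logs fall on a straight line.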

          2. James,
            Well, we already know that quantum systems exist, and that for physics there is an impenetrable boundary which separates the quantum realm from the classical world as far as meaningful research is concerned.

            Unless or until we are able to resolve the measurement problem, there will be no way to empirically determine this with any degree of certainty. This quantum/classical boundary is the physical dimension, a physical intersection between the quantum Cartesian Me and the brain. There is no reason to assume that a classical/quantum boundary is limited to particle physics. The hard problem of physics is directly correlated with the hard problem of consciousness; solve one and the other will fall like a domino.

            But then again, if this “new” knowledge does not improve the probability of our own survival….. might as well follow the path of least resistance and go Zombie.

            Peace

        2. As a functionalist, I don’t think there is any consciousness separate and apart from the functionality. I think if we reproduced the right parts of that functionality, however we did it, we would reproduce consciousness. In my view, the conviction that there is more comes from the hold intuitive dualism has on us.

          1. To me, it’s not that there are many kinds of motors that can deliver the same RPMs and torque, because absolutely there are. It’s that you can’t replace any motor with a laptop simulating a motor (no matter how good that simulation is). 😉

          2. For a motor, that’s true, although a simulated motor might be fine for a simulated car.

            But a simulated navigation system might be fine to replace the original, depending on the scope of the simulation.

          3. Virtual cars would be fine in a virtual reality, but there is still the problem within the VR that embedded numerical simulations are in a different domain. The VR laptop still can’t provide VR RPMs and VR torque for the VR car. (Not without radically altering the physics of that VR from what we consider true.)

            A navigation system, OTOH, is an abstract information system, which is a different ballgame.

          4. Assuming the VR enforces the rules of physics as we know them, for the same reason a (as far as we know) physical laptop can’t provide RPMs and torque for a (as far as we know) physical car.

          5. If it is a theory of consciousness, then what’s inside matters. In a sense, it is all that matters. It is a theory of consciousness we are talking about.

            This gets back to a critique of the criteria paper. A theory that explains consciousness as something other than or more than biological activity needs to be able to explain how the theory works in biological organisms. If the theory can then be implemented in a non-biological form and the form manifests the outward functional indicators of consciousness, it might be believable. Simply imitating the functional indicators by themselves isn’t sufficient without a theory behind the mechanism with some plausibility as to how it works in biological brains. It would be like a theory of cars that explains them with manifolds, carburetors, and pistons.

          6. To add to this to elaborate on another comment I made below.

            The function of consciousness is to enable locomotion, to learn about and interact with the world, and ultimately to find food and reproduce. Many non-biological systems might be able to move about and learn about and interact with the world, but consciousness is how this capability is implemented in biological organisms. To implement the consciousness mechanism in something other than a biological entity requires an understanding of the internal implementation methods and a way of implementing the same mechanism on some other substrate.

          7. It’s worth noting that we have very limited access to our own internals. So there are the introspectable internals, which are a subset of the total internals. The same introspectable internals might exist in systems with very different underlying mechanisms. Yet if you ask both systems for their state of mind, they will respond the same. Our inference would then be that the introspectable internals are the same. Even if the introspectable internals are in fact different, but not in a way that changes report, each system wouldn’t know the difference, and neither would we.

            Any non-biological intelligence that we’re tempted to regard as conscious will likely have been developed with a lot of insights from biology. But even if it weren’t, that wouldn’t mean it wasn’t conscious, just that its consciousness would be different. To say otherwise might just be biological chauvinism.

            That said, consciousness exists in the eye of the beholder, and I suspect whether people see a non-biological system as conscious will involve how much like us they perceive it to be. A system with the same spatiotemporal intelligence we have, but with non-biological affects, may not feel very conscious to us.

          8. “that wouldn’t mean it wasn’t conscious”

            Not a strong argument in my view. Any system that moves, learns, and interacts is going to appear conscious. If we had a plausible theory that could be validated in biological organisms, the system possessed the essential components of the theory, and the system also had the functional behaviors of consciousness, then I would say it is likely conscious whether biological or not.

            Limited access to our internals has very little to do with formulating a theory in my view. I wouldn’t expect to understand how the brain works by introspection.

          9. The only reason consciousness as a topic arises is because of our access to our own experience and what we infer about similar experiences in other systems by observing them. What other standard is there for assessing consciousness in another system, particularly one very different from us? (Note: saying similar internals would be begging the question.)

            Many of the strongest convictions people have about consciousness come from introspection. So understanding its limitations is crucial. Those limitations substantially reduce the scope of your assertion above that only the internals matter.

          10. We have ideas about how consciousness works.

            We can probably agree it has a lot to do with neurons and connections in the brain. I think, and you may not agree, it has to do wave-like oscillatory behavior of networked neurons. I think it may, in addition, have to do with EM fields generated by the oscillatory neurons.

            At any rate, whatever theory you have, eventually the theory needs to relate back to basic physical behavior in the brain. In any candidate theory of consciousness, there must exist some way of moving from the physical behavior in the brain to consciousness even if some details aren’t understood. Of course, we have access to our own experience and that is why the topic arises. But that is not the only source of knowledge about consciousness. We don’t have to go solely by external appearance and behaviors. We can take the system apart. Any given theory could be evaluated against other theories based upon all available evidence – EEG, MEG, fMRI, experimental research, neurochemistry, everything available.

            The theory, depending on its details, may or may not predict that consciousness could exist outside of living organisms. If it does predict it could exist outside of living organisms, then the theory should suggest how it could be implemented in non-biological systems. If the implementation seemed to produce a system that met functional criteria for consciousness, it would be confirmatory evidence for the theory. Until such a theory exists I see no reason to believe that non-biological systems are conscious.

            Pockett’s EM field theory, for example, suggests that certain spatiotemporal EM wave-forms are consciousness; therefore, a system of any sort with sensory inputs and movement controls that generated those wave-forms should be conscious. Of course, there are practical difficulties in actually performing that experiment at this time, but there is at least theoretically a path to doing the type of verification that I am suggesting.

            Without some theory tying back to how the brain actually works we would be in the absurd position of thinking furbies were conscious. But maybe you do think furbies are conscious. On purely functional grounds, I see no reason to think them not conscious even if they are somewhat more limited than humans. They move, they learn, they interact with their environment. What more could we ask of a conscious creature?

            https://en.wikipedia.org/wiki/Furby

          11. We see consciousness very differently James. Along with many other people who are interested in consciousness, I think you’re still looking for the ghost in the machine, maybe a naturalistic version, but one nonetheless. And there are plenty of theories out there positing things like that. It’s what most identity theories, such as IIT and, from what you tell me, Pockett’s theory, are aiming to explain.

            I don’t expect us to find any ghost, either spiritual, electromagnetic, quantum, or otherwise. I’m a functionalist, so I think it’ll be in functionality, and only in functionality. That’s been true for everything else so far in biology, and everything I see in neuroscience seems like a continuation of it. I won’t be shocked if the biology throws us some zingers, but I doubt they will resemble our traditional beliefs about the mind being something extra to the brain. But hey, maybe the evidence will eventually prove me wrong.

            Cute toy. But are you really going to tell me, even if you knew nothing about it, that you’d be tempted to see it as conscious after watching it for more than a few seconds? It doesn’t appear to even be at worm level intelligence.

            I think to trigger our intuition of consciousness, it will need to at least show some ability to navigate its environment and some global operant learning. I don’t know of anything like that yet, but the most primal version of it might not be that far away. As I always say, before aiming for human level intelligence, we should first aim to just reproduce crab level intelligence.

          12. BTW, you start out the last paragraph talking about consciousness and end it talking about intelligence. I have little doubt we can create non-biological, unconscious, but intelligent systems.

          13. I see consciousness as a type of intelligence, but not all intelligence is necessarily conscious, so I’d actually agree with your point. The difference is I don’t believe in zombies.

          14. My intuition would be that non-biological systems and systems without brains are not conscious. I can rely on more than observation of behavior to determine that. I can take the system apart or x-ray or apply other forms of measurement. If the system doesn’t have a brain, I don’t think it is conscious. So my intuition differs from yours and I don’t feel constrained by artificial constraints such as those in the Turing test or Chinese room.

            So I could x-ray the furby and declare it definitely non-conscious. Since I bought it on Amazon, I probably wouldn’t need to go the extra step.

            Just looking at its observable behavior and knowing nothing of its origin, I can’t see how the furby can definitely be declared non-conscious, even if its consciousness may be limited. I know other entities – cats, dogs, etc. – are limited in various ways but I think of them as conscious. I also know of humans who are paralyzed or have lost their ability to speak and I think of them as conscious. Some of these people may be able to do little more than wink or grimace in response. I think of them as conscious because they appear awake and have a brain. I doubt there is a clear set of measurable criteria to the behavior of an entity that could be applied across the board and match my intuitions about what is conscious.

          15. One thing to consider is the lineage of your knowledge about consciousness being associated with biological brains. Would you have that association if you’d never observed the behavior of systems that have brains?

            But in the end, you’re right. It comes down to intuitions, and consciousness is in the eye of the beholder.

            However, intuitions can also change. It was once intuitive to see consciousness in rivers, volcanoes, earthquakes, and the weather. Today we understand these forces well enough that we no longer see volition in them. And it’s worth noting that, prior to the scientific revolution, the most popular intuition associated consciousness with the physical heart.

            On the furby and the comparison with paralyzed or injured humans, we have the examples of fully functional humans to leave open the possibility that unresponsive ones may still have the same internal functionality. I haven’t seen any furbys navigating around their environment, learning new things about it, etc, so I’m not inclined to see these limited ones as potentially harboring unseen cognition. I’m not saying a naive observer might not initially think there’s something conscious there, only that spending any amount of time with it would, for most observers, cause that impression to quickly wither.

          16. Regarding lineage and your question: probably not. But part of that lineage also is an understanding that humans can create imitations, very good imitations in fact. So nothing in the circuit board or a chip provides me with a compelling reason to believe it is conscious no matter what sort of behavior it can manifest when attached to various actuators. The history of automatons using a variety of gimmicks and tricks goes back a long way, and with advancing technology there is no reason to believe that greater refinements will not occur.

            If I understand functionalism right (and I may not), then mental states arise in a system as a part of the relationships of the components of the system. Therefore, anything that reproduced the relationships in a similar system would be conscious. To me, this still is an inadequate theory without further elaboration unless the theory can specify the components and the relationships in the physical brain that give rise to consciousness. If it could do that, then it could implement those relationships in silicon, for example, and demonstrate conscious behavior to my satisfaction in a non-biological system. However, I haven’t seen anything that is anywhere near to doing that. It seems like you are just wanting to take a shortcut and say that once we have created something with enough tricks that we can fool people then it must be conscious.

          17. “then mental states arise in a system as a part of the relationships of the components of the system. Therefore, anything that reproduced the relationships in a similar system would be conscious.”

            That actually sounds more like an identity perspective rather than a functional one. And using language like consciousness “arising” is pretty discordant with the functionalist view, except perhaps in a metaphorical sense.

            To a functionalist, what we call “consciousness” is a set of functionality, of capabilities, including such things as reflexes and fixed action patterns, sensory discrimination and associations, attention, memory, prediction, action selection, scenario simulations, and the system monitoring aspects of its own processing to assess its own performance and make additional predictions.

            Individually, none of these things seem that special. (Although the difficulty of producing them quickly escalates.) But when you put them all together, there’s a tendency to view the result as something separate and apart, something magical that can’t be explained by those components. Yet remove too many of those components, and our impression of that separate whole tends to disappear, although our intuitions in this area are amorphous and inconsistent.

            Put another way, what you’re calling a shortcut, I call the path, because that’s all there is, a large bag of integrated tricks. The conviction that there is something more arguably only exists due to vestigial dualism.

          18. “Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.”

            https://plato.stanford.edu/entries/functionalism/#WhaFun

            I’m not sure what I wrote is that different.

            One way (and to me the most logical way) to discuss your view would be to stop writing about consciousness and papers that write about consciousness. Every time you do, you are calling it out as a phenomenon that is in one way or another apart from or emergent from the variety of capabilities that the brain or nervous system has. You could simply argue that any paper mentioning it is already on the wrong track. “Falsification and Consciousness”, “Hard criteria for empirical theories of consciousness” – they are discussing something that really doesn’t even exist, so they’re a waste of time. I could see that viewpoint.

            Buzsaki’s books that I have been reading a lot of recently hardly even mention consciousness.

            Where does subjective experience fall into your list of capabilities? That is something I associate with consciousness – the sense of being like something in the Nagel manner. If that is irrelevant or doesn’t need explanation, then a functional reproduction of the capabilities of the brain is certainly possible. And whether it is or is not conscious wouldn’t matter.

          19. “I’m not sure what I wrote is that different.”

            The key phrase from the SEP quote is: “not on its internal constitution, but solely on its function, or the role it plays”. My interpretation of what you wrote (emphasis added): “mental states arise in a system as a part of the relationships of the components of the system”, is that it’s more focused on constitution. But maybe I misread it?

            A lot of people do see functionalism as denying that consciousness exists. I have to admit I’m sometimes tempted to take that line and run with it. But I’m interested in it for the same reason most people are, to understand our experience. And saying functionalism denies consciousness is really just begging the question of what consciousness is.

            In my view, subjective experience is the whole of those functions from the perspective of the system itself. As far as I can tell, any aspect of subjective phenomenal experience can be mapped back to functionality (even if we don’t yet know how the functionality is produced). If every aspect can be mapped back to a function, then the whole of the experience is effectively mapped to functionality.

            People often respond to this with things like love, awe, or other amorphous concepts. As a concept, these things are too ambiguous to be mapped, but looking at any specific example, we can identify the specific components for the mapping. (My experience though is people usually get exasperated at this point, although I’ve never had anyone provide an actual logical issue with it.)

            The temptation is to view the whole as something separate. This is natural since experience from the inside is of the whole, not the parts. And our access to the internals is very limited, leaving vast layers of functionality we can’t introspect. The result is an impression of an explanatory gap. Subjectively, we’ll never be able to completely close it. Objectively, there’s nothing in principle preventing us from getting a full accounting. But it will never feel like a full accounting. But then, I don’t feel like a collection of atoms, cells, or neurons, despite the evidence.

          20. Relationships of the components of the system = “causal relations to sensory stimulations, other mental states, and behavior”

            If there is subjective experience, then it needs to be explained. It’s fine if the explanation is that it is just some sort of mapping to other things but there isn’t a theory until there is an explanation for how the mapping is done. And a non-biological organism would need to have built into it a similar mapping for me to think it is conscious.

            The argument for consciousness being a system of some sort doesn’t necessarily mean it is apart from its components. It means that the components that constitute it have a boundary with controlled inputs and outputs and that significant stuff happens inside the boundary independently from stuff outside the boundary. I think primarily what happens inside the boundary is a filtering and simplification of the inputs. That may be what qualia actually are. The roles of the outputs are the entraining of neurons during learning, refinements of perceptions of the environment by combining information from multiple senses, and ultimately the initiation of actions. Mental states don’t exist simply to map a bunch of internal component states but for initiation of actions.

          21. The word “causal” and the broader context of the SEP quote are important. The main thing to remember is that functionalism is more about what the system and its components do, and less about what they are. So an identity theorist might accept something like “consciousness is recurrent processing”. A functionalist wants to know what the causal role of that recurrent processing is to sensory states, report, behavior, etc, that is, what it does.

            I think “subjective experience” overall won’t have one simple explanation, just as “life” doesn’t have any one simple explanation. The discrimination of red, on the other hand, will, similar to how we have theories of heredity, protein synthesis, respiration, etc.

            I agree that filtering is part of qualia, but also discrimination, categorization, and affective reactions, most of which typically happens pre-consciously, so we end up only being aware of, say, the vividness of a red flower. I also agree mental states are for initiation of actions, but we could get just actions from reflexes. What mental states add is environmental and temporal predictions for planning actions.

          22. “What mental states add is environmental and temporal predictions for planning actions”.

            You seem to be wedded to the “sit back and analyze” mode of consciousness. It maps stuff, visualizes stuff, plans stuff but doesn’t actively do anything. Everything that is done is just like reflexes.

          23. Much of that planning is for what we’ll do in the next few seconds. And it translates into action selection: which reflexes or habits to allow and which to inhibit. I don’t see that as “sit back and analyze” mode. Of course, sometimes we actually do sit back and analyze when we plan further out.

        3. I really like your way of putting this point and hope to remember it and use it myself. Particularly the “arbitrarily selected set of functions,” which to my mind highlights the contrast between the functionalist approach vs how we usually classify and understand things. We usually start with a set of exemplars: this is gold, that is gold, the other is gold – and then we try to figure out why all these things are so similar (aha, atomic number 79!) We usually don’t start with a list of necessary and sufficient properties or functions.

  5. I’m not sure something as complex as the study of consciousness can be boiled down into a handful of logical expressions that say anything meaningful. (My eyes glazed a bit starting in Section 3, so I’ve just skimmed the rest so far.)

    (FWIW, there appears to be a typo in the paragraph following equation (2). They use obs(o) where I’m pretty sure they mean pred(o). They repeat that typo in the fifth paragraph, “Generally, inf and obs will make use…” Pretty sure they mean pred.)

    I’m generally okay with their analysis in Section 2, but they seem to assume what they label as o_i and o_r are disjoint — that predictions never include the same data that inference does. I question that, especially since both pred(o) and inf(o) lead to elements in E. Why can’t predictions include that data, especially since it can include non-report observations?

    If the implication is that physically observed states of the system are significantly different from data that leads to inference, then I’m not sure one can reason from elements in O. There is an implied partition, O_i and O_r, that may undermine their reasoning.

    I have to take a closer look at Section 3, but AIUI, given two systems, S_1 and S_2, an o_r-substitution is when pred(S_1) ≠ pred(S_2) (as the authors put it, their intersection is the null set), but inf(S_1) = inf(S_2).

    For example, a real human and a simulated human might report the same conscious states, but the systems implementing them would make different predictions… presumably about actual states of the respective systems, but this is one place it starts to seem muddled to me. I think what they’re trying to say is that the two systems have different states (different inner functions), but would report identical experience (have the same outer function).

    That’s the functional argument in a nutshell — implementation is irrelevant; only results (outputs given inputs) matter.
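
    For instance (a toy of my own, echoing the Giant Look-Up Table from the first comment): over a finite input domain, a live computation and a precomputed table have identical outputs for all inputs; only the internals differ.

    ```python
    # Toy version of "only outputs given inputs matter" (my sketch).
    def computed(n: int) -> int:
        return n * n                        # live machinery doing work

    TABLE = {n: n * n for n in range(256)}  # look-up table: internals are just storage

    def looked_up(n: int) -> int:
        return TABLE[n]                     # no computation, only retrieval

    # Identical input-output behavior over the whole domain:
    assert all(computed(n) == looked_up(n) for n in range(256))
    ```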

    Until we advance to having experimental data, it’s all guesswork based on assumptions. I don’t see this paper as having moved the needle. (But I need to give the latter part a chance.)

    1. Not wishing to parse through all that again, I’ll take your word on their typos. 🙂

      The main thing to remember about the pred function and related variables, is it isn’t the system itself making the predictions, but the theory of consciousness making predictions based on observations of the system internals (o_i). So when another system has different internals and we try to apply the same theory, it will make different predictions.

      I agree that the paper doesn’t move the needle. It has assumptions that theorists simply don’t have to comply with.

      1. Yes, it’s the theory that makes the predictions based on obs(p) plus analysis of the generated observational data. (A theory may also build on other theories.) I would think a theory could also include inferential data.

        I’m fine with the idea that pred(o_i) = e_p; that seems solid, especially with e_p being a set with more than one element. But I think inf(o_r) needs some unpacking. In cases of reporting, it seems o_r is e_r. If the only data is my self-report, there isn’t much inference going on.

        In cases of “no-report” o_r I’m not sure how it’s different data than o_i. Aren’t observations of “no-report” data pretty much the same as observations of any system internal?

        I also want to unpack their notion of independence between those, but I need to read more carefully before I comment. I will note that in Section 3.4.4 I’m underwhelmed by their examples which are all abstract information systems (ANN, UTM, Universal Intelligences). Since this is all about consciousness, and those examples aren’t necessarily conscious, I was hoping for an example including the human brain.

        1. No-report paradigms always map back to previous reports. So, maybe it’s been established that a certain stimulus always generates an affirmative report, and then a person is scanned while receiving the stimulus without the report requirement. Or maybe it’s been established that a certain reaction in the eye is always consistent with a stimulus that generates a report, and again, subjects are then tested with just that eye reaction for affirmation of conscious perception. So it’s still distinct from observing the internals of the system, which experiments are usually trying to correlate it with.

          On examples, the problem is that other living systems typically accepted as conscious (mammals or birds) have a lot of commonalities. You could look at an octopus. If we accept them as conscious, they probably have the best chance of having an alternate architecture for it.

          1. “So it’s still distinct from observing the internals of the system, which experiments are usually trying to correlate it with.”

            If it’s “always consistent with a stimulus” then it’s a fixed physiological response of the system, which, at least to me, puts it in the same category as any other physiological response. It’s become an objective datum, rather than a subjective one, which is where I think the dividing line is.

            I think the whole “predicted” and “inferred” thing needs better unpacking.

            What I was hoping for was an example involving a living system compared to an artificial one.

            In such a case (per their Figure 4), presumably (as shown) e_r and e’_r would be identical. Both systems would report the same thing (“I see a cat”). But given that P and P’ are completely different systems, then certainly O and O’ will differ.

            Which means their Figure 4 (and general assertion) is wrong. It cannot be the case that o_r and o’_r are the same. Unless, of course, o_r in both cases is the subject simply saying, “I see a cat,” but then inf(o) doesn’t do anything.

            So I wanted them to provide some examples of what they think o_i and o_r might be for a living system, especially compared to an artificial one that seeks to be conscious.

            It’s also occurred to me that P and O and E might be uncountably infinite sets, which might also have an impact on their analysis.

    2. “That’s the functional argument in a nutshell — implementation is irrelevant; only results (outputs given inputs) matter”.

      Yep. And my argument is consciousness is implementation. It is implementation in subjective experience. It isn’t really a function but a way of doing a function in biological organisms only as far as we know now.

  6. Mike,
    I think your inability to conceive of mind being a quantum system is constrained by your interpretation of quantum physics being wave function. If one is willing to eschew wave function and go with Relational Quantum Mechanics, then we have a substrate in which to posit that another system emerges from the brain functions as a separate and distinct system, and that system would be quantum.

    According to RQM, it’s not a ghost in the machine (the brain), it’s a completely new and distinct physical system that emerges from the physical system of the brain. But like I said, if you believe the Copenhagen interpretation of quantum mechanics then this rendition will not work. So it depends solely upon which pair of glasses one chooses to view the world, Copenhagen interpretation or RQM. It really is that simple. Copenhagen leaves us in the lurch, eternally stuck in the quagmire and RQM provides a framework in which we can intellectually move forward.

    Peace

    1. I’ve never ruled out an explanation as a quantum system, but I wonder why the compelling need to invoke quantum effects. EM moves at the speed of light, has characteristics that would allow information encoding, and can actually be detected in the brain. It isn’t a ghost. It is an actual physical force.

      I’ve gone back and forth in my own mind about whether to consider it a separate system.

      1. James,
        Just briefly; quantum systems interact directly with classical systems and that information highway, or intersection of information transfer goes in both directions. Information moves from the quantum realm to the classical realm and inversely, information from the classical world moves to the quantum world.

        Classical information has causal power in the quantum realm just as quantum information has causal power in the classical world. This is exactly what we see taking place in the classical system we call a brain and the hypothetical quantum system we call mind. We see information moving in both directions, and along with that information we see the causal power that comes with that information.

        Peace

    2. Lee,
      It’s not a matter of not being able to conceive of the mind being a quantum system, but a lack of evidence pointing in that direction.

      I’m not clear, even after reading your response to James, on how RQM makes a difference for the mind. I don’t see why we need quantum physics for causal interactions. (Aside from the fact that all classical physics are actually emergent from quantum physics.)

      1. Mike,
        (Aside from the fact that all classical physics are actually emergent from quantum physics.)

        Right. We know that quantum systems exist, so just run the scenario in reverse and one can rationally posit that a quantum system like the mind can likewise emerge from the classical system of the brain. There is nothing to restrict that as a viable thesis other than our own biases.

        Another factor to consider is qualia, Mike. If qualia is a real phenomenon and not an illusion, then one has to account for the phenomenon. That would require positing that mind, the physical system that experiences qualia, is a separate and distinct system. I know you don’t see it that way but Eric has relentlessly insisted it be the case.

        Fundamentalists run into the road block of illusionism; the only way to avoid that pitfall is to posit a separate and distinct system. We affectionately call that system the Cartesian Me. It is an autonomous quantum system that has causal power on the physical world through the anatomy of the biology that supports that system.

        I’m really surprised that brainy acts like Dennett and Frankish haven’t picked up on the idea.

        Peace

          1. James,

            “How does a quantum system scale up to match the size of the brain?”

            You are looking at this in reverse. The quantum system of mind would not scale up to match the size of the brain; the quantum system of mind emerges or scales up from the physiology of the brain. It’s an opposing dynamic of how classical systems originally emerge from a quantum system, only in reverse. Opposing dynamics, i.e. polarity, are consistent across the board with systems; they are a feature responsible for structure and form. So looking at it this way makes perfect scientific sense.

            Other than that, I don’t think it’s a question we can answer, since we know absolutely nothing about the quantum world other than that it exists. I think we are confronted with the same problem with mind being a quantum system. In the quantum world as we know it there is nothing with which to correlate scale or size other than the analogue of a field or something like that. But I think even that analogue would eventually break down. I think the Cartesian affect, by which I mean a sensation of disconnect from our bodies, provides a clue for positing a separate system that is distinct, one that emerges from the brain.

            The standard model of the Cartesian affect is considered to be substance dualism by most people. It’s this psychical disconnect that drives the intense divisions between idealism and materialism in their attempt to reconcile that psychical disconnect as such. The psychical disconnect is real, it’s what underwrites subjective experience. It’s an experience of “I know not what”, only that I am having it.

            Idealism is hopelessly lost, being nothing more than a religion, whereas science has the clear advantage here, but only if one is willing to posit an architecture for the mind that makes scientific sense. The fundamentalist approach will not bridge the gap and as it stands is incoherent. But postulating that mind is a quantum system clearly offers a viable path forward. Like I told Mike, I really do not know why some of the brainy act professionals in neuroscience research have not considered the notion themselves??!! It’s not like positing a quantum system is a heretical act; it’s a common sense, pragmatic assumption. What do you think??

            Peace

          2. Wouldn’t the quantum system need to maintain coherence across the entire brain?

            Generally we talk about the classical world emerging from the quantum because all of the probabilities at the quantum level average out so we end up with more predictable behavior. How does this go in reverse? What is predictable generates something that isn’t.

            If you are arguing that the brain and its properties emerge from the quantum world (which doesn’t seem like you are) then what is it about the brain that makes it different from other matter?

          3. “Like I told Mike, I really do not know why some of the brainy act professionals in neuroscience research have not considered the notion themselves??!!”

            As I noted above Lee, the evidence just doesn’t point in that direction, at least not currently.

          4. “Wouldn’t the quantum system need to maintain coherence across the entire brain?”

            Absolutely. The brain is a unified system made up of the aggregate of its parts, with each part performing a specific function.

            “If you are arguing that the brain and its properties emerge from the quantum world.”

            I’m not.

            Not quite sure I understand your second paragraph. But if I do understand it correctly, the answer becomes: I don’t know. The only response I have is: why wouldn’t it go in reverse? There is nothing within the natural world that says it could not or should not happen.

            Peace

          5. But the coherence issue is what makes this implausible. It is the reason that McFadden, who is an expert on quantum biology, thinks the quantum explanation doesn’t work. All other quantum biological phenomena take place on extremely small scales. It seems like you would need to hypothesize some enormously large (on a quantum scale) phenomenon to make the theory work. What’s more it would need to work continuously and consistently for the life of every brain. And there isn’t anything like that.

          6. Mike,

            “…the evidence just doesn’t point in that direction, at least not currently.”

            Don’t lay that straw man answer on me. It’s not a matter of evidence Mike, it’s a matter of prejudice, and you know that. Prejudice is a brute fact of subjective experience. If it doesn’t currently point in that direction, then make the bold epistemic move and point it in that direction, and then see where the chips fall.

            We don’t have a clue about what goes on in the quantum world other than some made up story about wave function and wave function collapse. The story was invented by a bunch of “Dudes” who were partying it up in Copenhagen a hundred years ago.

            James,
            McFadden is still hung up on wave function. Of course it won’t work under that model, I already acknowledged that. It will only work if one eschews wave function and works with Relational Quantum Mechanics.

            Peace

          7. James,
            I don’t know if you got my last response because I had it as an addendum to a response to Mike. Your objections to mind being a quantum system are based upon the Copenhagen interpretation of quantum mechanics with all of the associated catch phrases like superposition, coherence and wave function collapse, etc.

            One cannot coherently posit that mind is a quantum system if that postulate is grounded in the Copenhagen interpretation, and conversely, McFadden builds his biological models based upon wave function. One has to eschew the Copenhagen interpretation and go with Relational Quantum Mechanics. The notion of mind being a quantum system fits perfectly with the RQM architecture.

            But then again, referencing your quote:

            “The picture that emerges for me is that of a brain that primarily is generating its own model of the world. Into the model, it allows as little information as it needs to function. The brain prefers to operate in a low-energy homeostatic state as much as possible. The brain in effect might prefer to be a zombie.”

            Fundamentally, this is the “why” of why people are resistant to new ideas that lead to change. There has to be a payoff for a survival advantage. No payoff, then: “I’m not interested”. So at the end of the day, if mind is actually a quantum system: “So what”, right?

            Peace

          8. It seems to me from my limited understanding that RQM says that different observers can have different views of the wave function. Some can see it collapsed and some in a superposition. It doesn’t really get rid of it.

            Still not sure how getting rid of the wave function, if that is what RQM does, would suddenly allow mind as a quantum system to emerge from brain.

          9. James,
            RQM makes a passing reference to wave function out of respect, but at its core, RQM irrevocably rejects wave function and essentially states that the interactions and dynamics that take place between systems in the classical world are exactly what happens in the quantum world, nothing more, nothing less. The only difference is that we can observe the classical world and cannot observe the quantum world. There’s no mysticism or magic associated with RQM.

            People do not pick up on this, but materialism with its wave function and infamous collapse is an analogue to idealism’s postulate. According to the wave function model, every possibility is in a superposition (call it formlessness) until the wave collapses, resulting in form. According to idealism, every possibility exists in formlessness (call it a superposition) until a mind brings it into existence (form). See the correlation?

            Pick your poison, wave function or idealism; magic lies at the premise of both postulates.

            Peace

  7. “Or they could assume that physics is not causally closed, a move few scientists would likely be enthusiastic about.”

    That’s the impression one would get from reading most blogs and magazines about science, but it’s dead wrong.

    Real scientists recognize that causal closure doesn’t exist in nature, and take many painstaking precautions — such as sterilizing equipment, repeatedly checking that devices are working properly, etc. — to produce it artificially.

    Also, real scientists recognize that nature gives them free will, and go to great lengths to artificially deprive themselves of the freedom to fudge their claims.

      1. I’m aware of the NTS fallacy, but at some point one has to bite the bullet. In an age when industries are widely considered de-facto definers of their products, there must be a way to challenge their claims.

        In this case, to say that scientists might be making unchained speculations rather than doing science.

        Or, to take another common example, cops might be conducting domestic terror operations rather than doing police work.

        Or even some cases that might at first seem bizarre, e.g., oil refiners denying climate change might be seeking to deliberately destroy the world rather than make a profit.
