The necessity of weak emergence

I’ve only recently discovered Ricardo Lopes and his interviews with all kinds of interesting people. Here is one from a couple of years ago with Keith Frankish, the most prominent contemporary champion of illusionism, the idea that phenomenal consciousness is an illusion, that it doesn’t exist. Much of the interview is him giving the standard explanation of his view.

But there’s a part of this interview I want to focus on relating to emergence. It begins at the 52-minute mark and lasts for a few minutes.

Frankish is largely dismissive of the idea of emergence. He acknowledges the distinction between weak and strong emergence. Strong emergence is the idea that in complex systems, something new emerges from the components of the system, something that, even in principle, cannot be reduced to those components.

The weaker form of emergence simply observes that the reduction from the higher-level description to the lower level isn’t obvious, and that at some point in the scaling it becomes productive for us to switch models. In that sense, biology is emergent from chemistry, which is emergent from physics, but nothing about those relationships implies that any new ontological thing is being introduced between the layers.

Much of Frankish’s criticism is directed at the stronger variety, which I’m fully on board with. But he’s also largely dismissive of weak emergence, believing that we don’t need the word “emergence” to express these relationships. This actually makes sense for an illusionist. It’s the difference between an eliminative reductionist, who dismisses the existence of phenomenal consciousness, and a non-eliminative reductionist, who accepts phenomenal consciousness, but as something whose objective nature is radically different from our subjective impressions of it.

In some ways, Frankish’s stance on emergence and reductionism matches that of Adam Frank, who, in a recent Big Think piece, argued against reductionism and in favor of emergence. Frank largely ignores the distinction between eliminative and non-eliminative reductionism, although he does acknowledge the difference between strong and weak emergence, promising to address it in a future post. I find it interesting that Frankish and Frank agree on definitions here, even though they’re coming at this with opposing views.

As a non-eliminative reductionist who accepts the concept of weak emergence, I find myself in the middle. The biggest issue I’ve always seen with eliminativism is that it seems like a tough stance to hold consistently. Everything in our day-to-day life is (weakly) emergent from lower-level phenomena. It’s the old question of whether we should consider a table to exist once we know it’s “just” a collection of atoms. It’s simply too hard to get by in day-to-day life without concepts like tables, chairs, animals, and trees.

And the common move of saying a concept is productive colloquially but not scientifically has never made much sense to me. If we find something productive in our everyday life, then it seems like we still want a scientific account of that concept. Simply saying something doesn’t exist scientifically seems like a dodge. Even if the mapping is complex and inconsistent, we still want to understand that mapping in all its inconsistencies.

Even from a scientific stance, it doesn’t seem productive. Again, staying consistent, we find ourselves obliged to dismiss the concept of molecules, atoms, and even elementary particles, to say nothing of rocks, planets, or stars. In this view, the only things that exist are quantum fields, although there are people who doubt even they fundamentally exist. Reality might be structure, relations, and processes all the way down.

If so, then eliminativism leaves us with nothing, which just doesn’t seem like a productive outlook to me.

In the case of phenomenal consciousness, my own view is that it exists, but what is an illusion is the idea that phenomenal consciousness is something separate and apart from access consciousness: information being accessed for verbal report, reasoning, and behavior. In other words, phenomenal consciousness is access consciousness from the inside. They are the same thing seen from two different perspectives. In that sense, we can say that phenomenal consciousness “emerges” from access consciousness. But that’s the view of a non-eliminative reductionist and weak emergentist.

Unless of course I’m missing something?

95 thoughts on “The necessity of weak emergence”

  1. I think these topics could be cooled substantially by changing the terminology a bit. As far as I can tell, for instance, there is no functional distinction between “weak emergentism” and theoretical reduction. Weak emergence, as a term, simply provides a superfluous reference to strong emergence and all its problems with downward causation.
    Likewise, I’m not sure that the term “access consciousness” clarifies anything about consciousness. It seems like sticking with the reflective/pre-reflective distinction provides a better representation.
    So, I think it would be better if we could say that we are able to theoretically reduce psychology to neurology, and that, though there is a distinction between “red” and “I see red”, both require intentional inexistence – in other words there is an inherent relational condition when we experience or refer to any phenomenon.

    Liked by 2 people

    1. The problem is that the word “emergence” is pervasive in physics, and in science overall, so I fear we have no choice but to deal with it. It’s unfortunate that the same word is used for both weak and strong emergence, but that’s true of a lot of labels. In some ways, it’s similar to the word “epiphenomenon”, which typically means something different in science vs philosophy. A scientist usually just means something has no functional role, while a philosopher often means it has no causal effects at all. So when a philosopher says something is epiphenomenal, they’re typically making a much stronger statement than a scientist.

      The distinction between phenomenal and access consciousness was made by Ned Block, although it’s now endorsed by people like David Chalmers. They mean it to distinguish two things they see as ontologically separate. I think access is usually considered to include reflection, but it’s also considered broader than it, such as content accessed for behavioral decisions we don’t reflect on. The question is, which is conscious, the pre-accessed, the accessed, or the reflected? And is there a fact of the matter answer on that?

      Liked by 1 person

      1. Well, getting rid of emergence counts as dealing with it.:)
        I agree with you that the question of whether there is such a thing as “direct experience” or only experience that we look back on in some way, is important. The advocates of access versus phenomenal consciousness just haven’t convinced me that discussion of access makes any progress on the matter. It seems to produce irrelevant distinctions which distract from the basic issue.
        I am not sure that consensus is possible, any more than everyone will someday agree on what Brentano meant by “intentional inexistence”.

        Liked by 2 people

  2. For me it’s about predictive power. An emergent level of description is useful to the extent that it enables more accurate prediction of outcomes, with less information about the initial state and boundary conditions. There may (or may not) be some ‘bottom of the stack’ description that is how it actually all works, but it doesn’t give us equations that are soluble in real world cases, so we are always dealing with higher level descriptions that are just about good enough much of the time.

    Liked by 2 people

  3. I think I disagree with Frankish. But perhaps I’m not clear on his distinction between weak and strong emergence. By “weak emergence” he seems to mean that there is a reductionist account, though it might be difficult to find. By “strong emergence” he seems to mean that something mystical is happening.

    I’m skeptical of mysticism. But I am also skeptical of reductionism.

    Consider something vibrating. We can look at the vibration as movement over time. Alternatively, we can describe the vibration in terms of the frequencies of vibration. The movement over time is a description in the time domain, while the combination of frequencies is a description in the frequency domain. A Fourier transform converts one to the other.

    Perhaps Frankish sees what happens in the frequency domain as weakly emergent from what happens in the time domain, and perhaps he sees the Fourier transform as a reduction. But we could just as easily say that what happens in the time domain is weakly emergent from what happens in the frequency domain. The Fourier transform itself is symmetric between the two. For myself, I don’t see the Fourier transform as a reduction.
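    The symmetry is easy to check numerically. Here’s a minimal sketch (Python with NumPy; the signal, a sum of two hypothetical sine components, is just for illustration) showing that the transform is invertible, so neither domain loses anything the other has:

```python
import numpy as np

# A vibration sampled in the time domain: the sum of two sine waves
# (the 50 Hz and 120 Hz components are arbitrary illustrative choices).
t = np.arange(0, 1, 1 / 1000)  # 1 second sampled at 1 kHz
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Forward transform: time-domain description -> frequency-domain description
spectrum = np.fft.fft(signal)

# Inverse transform: frequency-domain description -> time-domain description
recovered = np.fft.ifft(spectrum).real

# The round trip is lossless, so neither description is more fundamental.
print(np.allclose(signal, recovered))  # True
```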

    Liked by 2 people

    1. I’m with you on being skeptical of mysticism. In my view, when we start invoking mysticism, it means we’ve given up on a causal explanation. It’s always possible there are things we’ll never be able to get such an explanation for, but it doesn’t seem productive to accept that in any particular case.

      A Fourier transform is often described as a decomposition of a function of time (or of a structure in space) into a function of frequency, so it could be seen as a reduction of a sort. At some point, it seems like we start tripping over the language.

      Liked by 2 people

  4. “In other words, phenomenal consciousness is access consciousness from the inside.”
    Notice that access consciousness is the only thing which has two sides: external and internal. For example, heart functioning is nothing from the inside. It is mysterious why consciousness is something from the inside.

    Liked by 5 people

    1. I think that’s a great point for Mike to potentially work through Konrad. Of course geometrically we can say that there are both insides and outsides of containers, but that will have no bearing upon a given container itself. The container should just be what it is. Or regarding club membership we can say that some people are inside while others are outside, but the club itself should just be what it is regardless of our additional “in” and “out” distinctions. Like containers and clubs, perhaps “consciousness” can exist beyond insides and outsides of it?

      Unfortunately, until we have an experimentally backed and accepted theory of how the brain creates the subjective dynamic by which you and I independently experience our existences, it will be difficult for science to resolve this highly confused matter. My current hope is that Johnjoe McFadden’s cemi will be experimentally validated some day, to thus rid cognitive science of a great many funky ideas.

      Liked by 1 person

    2. Consider a smartphone. It has data on its local environment (local WiFi network, IP addresses, MAC addresses, etc.), and it has internal models of its operation. It might even, in some cases, have copies of engineering documents about its own design. But if it does, that design information is different from the model it has of its own operations and environment.

      I’m not saying such a device has anything approaching our own internal experience. What I am saying is that it processes information from a certain unique perspective, a perspective which would be different from another smartphone’s, even one of the exact same model, because of its location and current data.

      In other words, we have an internal vs external difference. We don’t see anything metaphysical about this because it’s just technology. But we do with ourselves. Certainly there is (currently) a vast difference in sophistication, but I can’t see anything to justify the different standard.

      Liked by 1 person

      1. Certainly there is (currently) a vast difference in sophistication, but I can’t see anything to justify the different standard.

        Okay Mike, you can’t see anything to justify a different standard between a smartphone and a human brain. One simply has sufficient complexity for information processing to thus create a subjective dynamic, while the other does not. But do you think that your belief here could ever change? If for example McFadden’s theory were scientifically validated numerous times, whether through my testing proposal and/or others, might you ever dismiss your long held belief here? Might you eventually decide that in order for a brain or smartphone to create something that has phenomenal experience (or Chinese room, or the USA, or paper with the right information on it that’s properly converted to other paper with information on it, and so on), that the right sort of electromagnetic field would need to be produced which ultimately exists as the experiencer itself? Do you think that evidence could ever take you this way?

        If you like I suppose that you could ask me the same in reverse.

        Liked by 1 person

        1. Eric,
          I think I’ve noted innumerable times that my views are always subject to change on new evidence. I’d also note, in cases where I can’t fully accept a proposition, that my credence in its plausibility can increase if there’s compelling logic for it. But in both cases, the proposition in question needs to be the explanation with the fewest assumptions that fits the data. (Note that I avoided the phrase “simplest explanation” here, because that’s too often taken to be whatever equates to one’s biases.)

          I certainly hope you’d be swayed by evidence, or lack thereof, but given your zeal for an exotic proposition for which there is currently scant evidence, I’m nervous about it.

          Liked by 1 person

          1. I mainly just wanted you to formally admit this possibility Mike. And indeed, we’ve gone over this so extensively that I don’t think you’ll ever get to the place that I am without thorough evidence backing me, and even with very good reason to believe that your current position happens to be unnatural.

            In my case, I think evidence could also convince me that I’ve been wrong. But note that I equate the proposition which I spurn here with non worldly dynamics, or strong emergence. So to be countered my strong naturalism demands strong evidence to the contrary. Similarly I think I could be convinced in the existence of witchcraft if there were enough witches out there casting spells and such that science seemed wholly unable to account for.

            Here’s the thing. It seems most productive to define “information” in terms of the sort of machine which it’s set up to operate. For example there are VHS and Betamax video tape machines. Running a Betamax tape through a VHS machine should not play the designated video and sound, and this is because the tape shouldn’t be informative to such a machine in that sense. It should causally produce static rather than strong emergently produce the designated video and sound. Surely you agree that “information” only exists in relation to the sort of machine that it’s set up to animate in nearly all situations, though there does seem to be one situation where you consider information to work on any machine with the capacity to accept it. Thus your presumption that if the right information on paper were properly converted into another set of information on paper, that something here would experience what you know of as “a whacked thumb”. To me this exception violates the causal properties of “information”, and so would be strongly emergent.

            Liked by 1 person

          2. Eric,
            I think you know I don’t agree with that definition of information. It’s too narrow. It defines “information” as only usable information for a particular system. If you give me a copy of the Iliad in ancient Greek, it’s pretty useless to me, since I don’t know Greek (ancient or modern). But does that mean it isn’t information? Likewise, tree rings are informative for anyone who knows how trees grow, but merely an interesting pattern for those who don’t. Does it shift between being information and non-information depending on who’s observing it?

            I think a stable definition of information requires us to use the physics version. All other senses of “information” appear to be subsets of this version. (If you know of any that aren’t, I’d be very interested to hear about them.) Which brings us back to the view that information is causation.

            You might find this video on physical information interesting.

            Liked by 1 person

          3. Mike,
            If you like then you can define information as causation for your purposes. In that case it becomes my obligation to accept this definition in my attempts to grasp and assess your models. But then I wonder why you’d ever use the information term at all here when you could directly use the causation term? In any case if you have a model to propose, then I’d love to assess it from such a definition for information.

            As you know I consider all of reality to be a product of causation, so I won’t be providing you with an exception to this. The irony here is I’m suggesting that the powers that be believe in such an exception. As I’ve said, for the most faithful it may be that only contrary experimental evidence will kill off this strongly emergent position.

            In any case, just as I must accept your definitions when I assess your models, you must accept my definitions when you assess mine. Here I’m defining “information” as a subset of causality associated with functional machines. So a Betamax tape will not be functionally informative to a VHS machine, even though, for example, it should have enough informational attributes to cause static. Or similarly, ancient Greek will not be functionally informative to you, since you don’t know the language and therefore can’t get coherent statements from it, though it may be a bit informative in the sense that you might grasp certain things, such as that words exist between spaces (if that’s the case). So yes, it’s “information” to an ancient Greek speaker, though not in the intended sense to you. I could provide you with an arbitrary number of examples of information, such as genetic material, which only exist as functionally so to a causally appropriate sort of machine. Note that from this perspective, inherent information will never exist in a causal world. Here information will only exist in relation to the causal attributes of a machine which is affected by it.

            The problem for your side is to justify why we should accept just one exception to this rule without also assessing that position as “strongly emergent”? Why should phenomenal experience be created by means of generic information processing, when information for all other machine dynamics seem only to exist in relation to what it’s causally set up to animate?

            Liked by 1 person

          4. Eric, even using your relative definition of “information”, I can’t see the “one exception” you’re discussing. I know you’ve tried to describe it numerous times, but I’ve never been able to parse those descriptions. It seems like one of us is confused. I’m open to the possibility that it might be me, but if so, as you note, I’d be far from the only one.

            Liked by 1 person

          5. That’s fine Mike. The one exception would be the “information” that your side associates with phenomenal experience. My thumb pain thought experiment displays this pretty explicitly, I think: not only should brains be able to produce something that experiences this, but so should encoded sheets of paper that are properly converted into other encoded sheets of paper. From the position that I’ve just presented, machine information should only exist in respect to something that’s set up to accept it. So a Betamax machine needs a Betamax tape for information, not a VHS tape. The ultimate point is that in a causal world there should need to be causal mechanisms which create phenomenal experience, and all else actually. Electromagnetic fields seem like a strong candidate since neuron function does create them. But saying instead that all we need is certain information that’s processed into other information would inherently depend upon strong emergence, since this would bypass mechanistic sources of existence. This would become the one unique case of generic machine information.

            Does that make sense?

            Liked by 1 person

          6. Sorry Eric, but it doesn’t. What about information or its processing doesn’t count as causal mechanisms? To me, even if EM fields turned out to be part of the picture, it would just be another information processing mechanism. (It’s worth noting here that’s actually McFadden’s position.) Whether the information processing is happening via neural spikes or some kind of brain wifi, it’s still information processing.

            Liked by 1 person

          7. Mike,
            As you know both McFadden and I theorize that the brain non-consciously processes information by means of neural spikes and more, such as the EM field effects of ephaptic coupling. So you failed to mention anything that he and I would tell you is suspect. Let’s get into that.

            McFadden and I suspect that under certain conditions a given bit of skin will signal the brain such that synchronous neuron firing will alter an endogenous EM field in a way that an experiencer will now feel an itchiness in that skin, in addition to sight, smell, and countless other subjective experiences that one might have at a given moment. Theoretically that’s all bound up in a single amazingly complex and changing EM field which the brain often produces.

            A converse position which I consider non worldly for example would be: If the information which the skin had sent the brain were effectively encoded on paper, and if that set of information were converted into another set of information on paper associated with the brain’s response, then something here would thus experience what we essentially do when we have itchy skin.

            I could tell you why it is that I consider the first scenario potentially natural and the other inherently unnatural, but could you try to tell me the effective difference that I see between them?

            Then as far as me attempting to grasp your position better, I wonder if the issue concerns “information to mechanism” order of precedence? To me mechanics explain everything though it can sometimes be helpful to speak of certain mechanical function in terms of “information”. For example we call the stuff that my computer sends my screen “information” even though it mechanically alters my screen’s function. Perhaps you consider things the other way around and so given the right information processing, that this in itself can create subjective experience? Thus itchiness could exist as brain function or informationally encoded paper function since there’s no need for any specific brain mechanisms to be animated?

            Liked by 1 person

          8. “I could tell you why it is that I consider the first scenario potentially natural and the other inherently unnatural, but could you try to tell me the effective difference that I see between them?”

            Eric, the reason I asked you about this above is because I don’t know the answer.

            On information and mechanism, information is mechanism 100% of the time, specifically the differentiation involved in that mechanism. Some mechanisms might involve a low ratio of differentiation to energy magnitude, such as what the heart does. Its function is more tied up into the mechanical force (energy) it exerts. This is in contrast to the nervous system, whose function is tied up into the differentiation of its mechanisms, with the energy held constant. (It delegates energy magnitude work to muscles or other organs.)

            You think there must be something else going on beside the differentiation of mechanics, that is, besides the information. My question is, why? What about the itch is not informative? What about subjective experience in general is not informative? What does it have that requires we look to energy magnitude dynamics, at least other than the dynamics supplied by the physiological reactions of the rest of the body?

            Liked by 1 person

          9. Mike,
            The answer to my question is: I consider McFadden’s proposal natural because it involves the informational animation of a specific variety of brain mechanism as phenomenal experience itself. Thus I consider the encoded paper to encoded paper proposal to be non-natural — in that scenario no such mechanism exists. All that’s proposed to carry over is “information”, and in truth I don’t consider it productive to say that machine information exists beyond its effect upon a given mechanism. For example, information that animates a heart, a Betamax machine, a computer monitor, and so on, all depends upon something which is able to use it. Otherwise I’d just call this “stuff”.

            I realize that what I’m saying here doesn’t make sense to you, and it may never. The same could probably be said for many distinguished intellectuals, such as Steven Pinker and Daniel Dennett (though it’s an open question how many of them would have the integrity to admit that my thought experiment does effectively demonstrate an implication of their beliefs).

            In any case this gets back to what I’ve lately been thinking, or that for progress to be made we’ll need good experimental evidence which progressively supports or refutes various falsifiable theories, such as McFadden’s. I like considering the mayhem that should occur if McFadden’s cemi were conclusively validated. In that case countless modern consciousness proposals should be virtually eliminated from consideration, perhaps with a few on life support given the possibility that they might productively adapt.

            Furthermore the Chinese room proposal would be gone, along with the China brain, USA consciousness, and my own thumb pain thought experiment. What we’d instead have is a world full of neuroscientists actively exploring the parameters of consciousness as an electromagnetic field, and psychologists trying to make sense of this in the supervenient sense of how we effectively function by means of this quite basic element of our nature. I doubt you’d consider McFadden’s validation here all that significant. But could you make a sensible argument against the massive paradigm shift that I speak of?

            You think there must be something else going on beside the differentiation of mechanics, that is, besides the information. My question is, why? What about the itch is not informative?

            Actually I do consider the itch informative, though not generically so. It’s informative to the experiencer of it and nothing else directly that I’m aware of. But we were considering the means by which an itch might exist. Many of us think of this as “a hard problem”. If validated, as I see it McFadden’s answer should get “consciousness” to the level of Newton’s “gravity”.

            Liked by 1 person

          10. Eric,
            “the informational animation of a specific variety of brain mechanism as phenomenal experience itself”

            Could you elaborate on what you see this meaning?

            If the answer is “EM fields”, then my question is, what is it about EM fields specifically, particularly the ones in and around the brain, that makes them conscious? I think we’ve established in other conversations that you don’t see just any EM field as conscious. So the Earth’s magnetic field doesn’t make it conscious, nor is a WiFi network conscious. But then, what is it about the EM field generated by the brain that makes it different from other EM fields? We know it isn’t the strength of the field, since the field is very weak by most standards, so this isn’t a magnitude-of-energy thing. So what is it?

            If you tell me that the answer is unknown, then I’m going to say you don’t yet have a theory of consciousness.

            Liked by 1 person

          11. Mike,
            Not only do I not know why a subjective dynamic would be created by certain parameters of an EM field, but McFadden’s theory hasn’t yet been experimentally explored, such as by seeing whether certain patterns of fields created in the head affect a given person’s phenomenal experience. So right, I don’t yet have a “theory” of consciousness, which is to say an experimentally backed model. What this scout does seem to have, however, is some very interesting speculation.

            Conversely, your task is to argue, as well as you’re able, that what I’m talking about, if validated, would not be similar to Newton’s work regarding an older “hard problem”, that of gravity. Newton of course didn’t know why mass would attract mass, but worked out lots of details about this in the early days of science. That’s what I’m talking about potentially happening here in our pathetically soft mental and behavioral sciences. So how are you going to argue that McFadden wouldn’t be similar to Newton if his theory were experimentally backed and became widely adopted?

            Liked by 1 person

          12. Eric,
            You don’t need “experimental exploration” to have a valid theory. (You do obviously need empirical methods to test the theory.) You only need a coherent causal model. The problem is that, like all identity theories, simply saying that the EM field is conscious isn’t a coherent causal account. McFadden, to his credit, admits that his theory is still an information theory: CEMI (conscious electromagnetic information). But unlike him, you’ve eschewed the role of information. Without it, you don’t yet have a complete theory to test.

            I don’t see that I have any obligation to argue over highly speculative vague notions.

            The interesting thing about Newton is, there were lots of people who made lucky guesses about the solution before he published his theory, but we remember Newton because he worked out the actual mathematics, a causal model that could actually be tested.

            Liked by 1 person

          13. Mike,
            I do not simply say that the EM field is conscious, nor do I merely offer an identity theory. Instead I’m saying that this would depend upon evidence. Furthermore I’ve proposed an experiment which should either validate his theory quite conclusively, or at least suggest that it’s not right at some point. This would be to directly experiment with EM fields in the head to see if a person notices any alteration to their standard subjective experience (since waves of a certain variety amplify and negate other waves of that variety, given their phase). So I’m not sure why you’re accusing me of merely using an identity theory.

            Furthermore I’ve also been quite supportive of the informational component of McFadden’s theory. Perhaps the issue here is that you’re having trouble grasping the relational element of my conception of machine information? As in the VHS tape will not be informational to the Betamax machine in the intended sense because its mechanisms will not be set up that way? Or even your example that ancient Greek is not informational to you in the intended sense, though it would be for someone familiar with the language? Similarly McFadden and I theorize brain information to animate an EM field which serves as the mechanism which makes certain brain information actually be “informative” — theoretically phenomenal experience actually exists as some portion of the brain’s general EM field. Like all other cases that I know of, machine information will only exist by means of associated instantiation mechanisms. If you disagree then perhaps you could provide a known counter example?

            One reason that I like the analogy between the work of Newton and what McFadden is trying to do, is that neither means (nor meant) to actually solve their respective “hard problems”. The goal was and is to set up a framework so that science might progress even given such ignorance. Newton couldn’t tell us why mass would attract mass, but was still able to help science move beyond this question to effectively predict the function of mass-based dynamics. Similarly McFadden can’t tell us why certain electromagnetic fields would constitute a subjective dynamic, and merely theorizes that they’re associated with a variety that stems from synchronous neuron firing. But if experimentally verified, his work should create a similarly magnificent paradigm shift.


          14. Eric,
            It doesn’t seem like you understand what an identity theory is. If you did, you’d know that whether or not there is evidence for it is irrelevant. Having evidence would strengthen credence in the association, the identification of consciousness with EM fields, but wouldn’t do any work toward telling us why the EM field, and that EM field in particular, is conscious. It’s like equating nociception with pain. Even if nociception turned out to always correlate with pain (it doesn’t, but that’s not relevant for the example), it wouldn’t tell us why pain would just be nociception.

            I should note that some information theories of consciousness are also identity theories. Whenever you see someone say that consciousness just is what information feels like, they’re expressing an identity relationship. The thing about an identity theory is, it flubs on the causal account, and therefore is incomplete. It might be a theory about consciousness, but it’s not a theory that fundamentally explains it.

            I do understand your machine relative definition of information. It makes no difference for my point. But once you allow that information is the answer, then the question becomes, why information via one type of substrate instead of another? The EM field provides fast transmission, but there’s no capability we have that requires that transmission speed, and there is evidence that we’re constrained by the transmission speeds of neural spikes. Even if evidence is eventually found for information transmission via EM field, we would still need the actual causal functional account.


            Now that’s the sort of insightful scouting response that I’ve come to expect from you Mike! Sean Carroll says that he’d rather have an army of soldiers than an army of scouts (and this would probably serve him better given the “many worlds” pitch he’s trying to sell). Between you and me, however, let’s try to figure out what does and doesn’t merit soldiering.

            You’re right that I didn’t grasp what was meant by “identity theory”. That does seem like a helpful term not only for the work of McFadden, but Newton. It could be that an Einstein or better will surface someday regarding consciousness, though not without founding work from which to build. If McFadden’s ideas become experimentally validated, they should serve as a pillar from which our soft mental and behavioral sciences will finally be able to build structures which endure. It seems to me that decades from now his name would be spoken with a reverence similar to that reserved for figures such as Newton, Darwin, and Einstein.

            I do understand your machine relative definition of information. It makes no difference for my point. But once you allow that information is the answer, then the question becomes, why information via one type of substrate instead of another?

            For that you may need a slightly deeper appreciation for the relative nature of machine information. This is to say that such information will only ever exist in the right sense once instantiated by means of appropriate mechanisms. For example we like to say that our computers send information to our computer screens so that the pixels appropriately light up. From the position that I mean however, we shouldn’t technically call this “information” until actually instantiated. I suppose we could instead call it “proto information”. Similarly from my thumb pain thought experiment, observe that we end up with sheets of paper with certain markings on them. The “relative” point is that this shouldn’t yet exist as what you know of as thumb pain, and so shouldn’t yet be referred to as machine information. But if this paper were provided to a machine that was set up to use it properly, whether through an EM field or whatever physics our brains use to create phenomenal experience, then these sheets of paper could effectively be called “information” in this specific regard. Theoretically in a causal world there will never be machine information (unlike the proto kind) without the animation of associated mechanisms.

            That’s where I began in 2014 when I started blogging, and from this position had already developed a dual-computers, psychology-based model of brain function. What McFadden added to my game was a plausible instantiation mechanism for the phenomenal side.

            So if all machine information is relative to associated instantiation mechanisms, why should one suspect that McFadden gets it right by choosing the EM field? First note that this is something which clearly exists in the head and has the potential to carry all of the, um, “proto information” that’s created by means of neuron firing. To me that’s huge. If neuron firing does matter here, as it seems to, what other brain mechanism has this naturally baked in?

            Then there’s the bound nature of phenomenal experience. Not only might I have a visual experience, but all sorts of senses, along with hunger, pain, hope, and so on, all combined into a single temporal experience. Theoretically a given amazingly complex EM field could harbor every bit of that at a given moment.

            I’m also encouraged by how synchronous neuron firing has been found to be most correlated with consciousness. That’s exactly how the tiny energies associated with individual neuron firing might become distinguished from the largely canceled out EM effects of general brain function. In an aesthetic sense there seems to be a wonderfulness here which aligns well with our greatest examples of hard science. I’m ready for a new chapter to begin which cleans out all sorts of crap in an area of science which has gotten quite accustomed to exactly that. Clearly experimental verification will be needed however.


  5. I’m with you all the way Mike, and perhaps further. I think refusal to accept weak emergence usually rests on the fallacy of composition: the inference that, if a property applies to the parts, it applies to the whole. (Classic example: if Alice is a terrific basketball player, and Bob is also, and Chris, and Dinesh, and Ellie, then together they must make a terrific basketball team. Except no, they don’t, because while each excels in one-on-one play, they have terrible teamwork.) But in ontological cases, I call it the “cherry pion fallacy” – the idea that if there are no cherry fundamental particles (pions), there can be no cherry pie. It’s still the fallacy of composition (extrapolating non-cherry-ness from parts to whole), but the name “cherry pion fallacy” is funnier, which is why I stole it.


    1. Paul, I’m tempted to talk about pancherryism, but I think it would intensely annoy the panpsychists, so I’ll pass on the opportunity. But basically I agree with everything you wrote here. I’ll just echo what you implied, that organization matters. We can’t make a cherry pie by just dumping all the ingredients together. We have to process them in certain ways.


  6. I only have a passing knowledge of Keith Frankish. I have not read a word he’s written. However, I’m confused at his apparent apoplexy at the very word “emergent.” He dismisses it as “nothing mysterious.” I quite agree. As far as I understand, it’s simply a broad term to describe a physical interpretation of reality. I was unaware that any serious argument about emergence was an attempt to claim it was mysterious. Frankish describes it as merely, “a functional property.” Ok, that’s fine. But changing the label does not alter the theory. In fact, I think it obfuscates it. He loses me when he argues that there’s “nothing extra going on” to get, what he calls, “that larger feature.” That is, he argues it “can’t be deduced from the activity of the components.” But that is the very point of emergence, isn’t it? That is, as I understand emergence, it is an explanation of a new property of a complex system that arises from that level of complexity. And, moreover, it exists only as a result of that complex system. It cannot be explained by its parts nor can that new property be reduced to its parts. It is distinct from the parts.

    A description of reality in which there are emergent phenomena that are dependent on lower level physical properties and yet freestanding of those underlying physical structures has a lot going for it. For one, it nicely bypasses interminable debates over mind-matter duality or the necessity of choosing one over the other. But I apologize if my confusion seems hopelessly naive to you all. I’m sure someone with more knowledge of consciousness theories will explain Frankish’s apoplexy and why “emergence” is so controversial.


    1. I suspect his reaction to the emergence question comes from the fact that it’s probably the most common response to the illusionism proposition. As I noted in the post, that makes sense. If you’re going to be an eliminativist, then emergence is something you’re not going to give much credence to, but I think it leads to the issues I laid out.

      The stronger variety Frankish was criticizing is that there is something new that creeps in between the components and the whole. For a weak emergentist, the only thing this might be would be the organization, structure, etc. For a stronger emergentist, there is something else that gets introduced, a sort of mysterious something. I agree with Frankish on objecting to this.

      On the one hand, I dislike anytime someone answers a question with, “It’s emergent,” because that tells us nothing. It’s only useful if we can then discuss how it emerges, such as our understanding of how temperature emerges from particle physics. Weak emergence considers this a viable proposition, whereas strong emergence seems to imply it’s not possible, even in principle.


      1. Mike, at the risk of appearing vexatious, I feel the need to add a comment. You say “The stronger variety Frankish was criticizing is that there is something new that creeps in between the components and the whole.” Yes, I get that he is peevish about that. But that is what confuses me. That is because emergence does not claim something new creeps in “between the components and the whole.” It’s that the whole, because of its complex organization, exhibits a phenomenon which cannot be explained by the properties of the components. I think that’s the point. An emergent phenomenon is physically caused by the components because of their complex structure, but not reducible to the components. You further state that “[Emergence] is only useful if we can then discuss how it emerges…” I totally agree. Eventually we will have to provide a causal explanation for such things as consciousness. The fact that we cannot do so at this time is not a reason to reject emergence as a useful concept, however. Those who reject emergence can’t explain consciousness either.


        1. Matti,
          No worries on appearing vexatious. Disagreement is what keeps these discussions interesting. As long as we keep it friendly, it’s all good!

          I think what you’re describing is weak emergence. It’s basically what most physicists mean when they use the word. But there’s a stronger sense, often used by philosophers, where the causal explanation you note is thought to be impossible, even in principle.

          That’s the version Frankish spends most of his time attacking. Although as I noted in the post, he’s also dismissive of the weak variety, but his main criticism there is that it uses the same word as the stronger mystical version.


          1. Thanks Mike. I will definitely peruse the article you cite. As I think about it more, I think a fundamental difference between me and thinkers like Frankish may be our differing ontological starting points. Frankish clearly favors the ontology of Willard Quine (and Dan Dennett). That is, a minimalist ontology, for those whose “aesthetic sense [is] for desert landscapes,” to quote Quine. As I stated in a previous submission (Free Will and Social Responsibility), I tend to favor a point of view articulated by William Wimsatt, a philosopher of science at Chicago. To quote Wimsatt, in response to Quine, I favor “a philosophy for messy systems, for the real world, for the “in-between” and for the variegated ecologies of reality… [this] is ontology for the tropical rain forest.”

            Emergence is an important part of such an ontological point of view, which could also be described as a non-reductive naturalism. And, as I said, it happily bypasses those interminable debates over mind and matter. For Wimsatt and others, including myself, reality is not mere matter (physics) or even mere mind. It’s plural. Mind and matter are only two of many properties of nature or reality which “emerge” through the evolutionary development of complex systems. John Searle taps his way to a similar position in “The Construction of Social Reality” and other works.

            But, for sure, we are all trying to understand reality in our different fallible ways. I think, as I said, there is much to be said for working with the concept of emergence. I think it has promise.


  7. Okay, “Unless of course I’m missing something?”
    Maybe. Then maybe I am?
    So Physics births Chemistry which births Biology. Then what? … Death?
    Then what?
    So we have ‘religion’ versus ‘science’ explanations. In other words – out of nothing something? And so on and so forth.
    I offer “Terror (i.e. oblivion) Management Theory”, or denial of death. Versus religion and science! … And UFOs, lizard people, etc., and so on.
    Love your posts!


    1. After biology? Then neuroscience, psychology, anthropology, sociology, and economics. Then Death 🙂

      Is the “out of nothing something” phrase in reference to the big bang? If so, I think it’s more “out of unknown, something”.

      Religion as terror management? That’s a very common theory. Although it’s worth noting that not all historical religions had comforting ideas about death. A lot of ancient religion saw death as a dismal ephemeral existence (see The Odyssey, specifically Odysseus’ descent to the underworld), if they posited an afterlife at all.

      Thanks! Comments always welcome!


  8. Hi Mike,

    Definitions are certainly challenging here, but if we leave some of the terminology aside, for me there are a couple of fundamental observations and/or questions at play here.

    One is, knowing the properties of atoms, is the existence of a rose bush necessarily so? I think the answer is no. But once we have a rose bush, we can work backwards and begin to understand how the pieces contribute to its various dynamics. That said, the sum total of relationships that describe a rose bush are not fundamental. They simply do not exist at the level of individual atoms. Those relationships “come into being” through what we’d call a self-organizing process. It’s also certainly the case that the properties of atoms allow for those relationships to manifest. So the rub is this: is the set of relationships that describe a rose bush something novel? Or simply latent and fully explained by the properties of atoms?

    When we say “fully explained” I think we mean nothing is required to get from atoms to rose bushes besides the properties of the atoms, and I think that can only be true if one considers all the atoms in the universe in the conversation. Because there are countless factors and events and unlikely conditions that are all required to produce a rose bush from atoms. One cannot reduce a rose bush back to its atoms, place them together in a container anywhere in the universe, and expect to find a rose bush at any point in the future. One doesn’t have to resort to mysticism, but it’s also unfair to say it’s all explained by atoms and their properties, because it’s absolutely not! The properties of atoms are only one part of it.

    The sort of “weak” emergentism that makes sense to me is one that admits of this–that it’s not merely the properties of components that matter, but their histories. And if one wishes to say that all the elements of the universe that contribute to the history of a rose bush coming into being are explained by the properties of atoms, that’s wonderful. But it’s not predictive. One can only have a rose bush if one has the atoms and their properties, AND the history or process that is the production of novel relationships between them. You can’t leave the second part out. And you really can’t predict it either, unless you have perfect knowledge of the starting conditions of a purely deterministic universe. And even if one had all such information, there are still questions about the practicality of actually making those predictions. It may not be possible, for instance, to simulate the entire universe in anything less than a universe? Probably someone has tried to figure that out but I don’t have the answer.

    So emergentism makes sense to me because, in essence, the novelty that emergence is describing is the “payoff” of sustained relationship-building. Novelty is the accrued “capital” of performing a trace through history that involves repeated and consistent interactions between constituent atoms as well as all sorts of chance encounters. The production of novelty thus depends on more than component properties in my opinion. Those properties are necessary, but not sufficient. And I think this is undeniable. Unless, like I said, one wants to include ALL of the atoms in the universe, in which case we’re back to the absence of predictability in any practical fashion.

    Now, the idea that at some point “something happens” and a new thing arises that we could never trace back to being even possible given the properties of the various components, probably doesn’t describe anything real, in my mind. I agree with Frankish on this. Frankish mentions consciousness here, and that makes sense, because that’s the only thing that might fit into this bucket, which is the strong sort of emergentism. But I think ultimately there is only one explanation for consciousness that avoids this resort to something new coming into being that is not in any way related to the properties of its components: consciousness has been present all along in some fashion. The question is simply in what fashion. It’s nothing more than an exquisite set of relationships between the fundamental elements of nature that was possible from the very beginning, like a rose bush; or, if you’re a panpsychist, it’s simply a property of everything that expresses in unique ways due to complexity; or dualism isn’t wrong and it’s a novel property of the universe that interacts with matter in ways we don’t fully understand just yet. None of these are really strong emergence. They all say consciousness has been there all along in some manner.

    Strong emergence as it relates to consciousness would be the idea that a sufficiently complex arrangement of fundamental elements somehow “creates” a new thing that still cannot be explained in terms of the properties of those elements, EVEN if the time history of those elements is understood. Does anyone go for this notion?



    1. “Does anyone go for this notion?”

      Absolutely Michael!!!! Another novel point well argued for the archives….

      “… the novelty that emergence is describing is the “payoff” of sustained relationship-building. Novelty is the accrued “capital” of performing a trace through history that involves repeated and consistent interactions between constituent atoms as well as all sorts of chance encounters.”

      This notion is profound and fits very well with the modality of Imperative Pansentientism. Keep the thoughts coming my internet friend.


    2. Hi Michael,
      I agree with most of what you say here, at least until you discuss consciousness. My way of describing what’s missing from the microphysics account is to note the higher level organization and structure. As you discuss, that organization comes about through the history behind the system in question. At the extreme, perfect prediction does require knowledge of every interaction within the system’s 13.8 billion year light cone, something that, due to the speed of light limitation and quantum limitations, is impossible, even in principle.

      Although what is possible is mapping higher level dynamics to lower level dynamics. We already do it with some phenomena we regard as emergent, such as thermodynamics. But using those lower level dynamics to predict the higher level dynamics is effectively impossible. If we attempt to use quantum field theory to model a rose bush, we’d need a computer vaster than the reality that QFT operates within, as you note, one more complex than the universe.

      In the case of consciousness, I don’t think it’s any different than any other complex system. Except that while all systems are causal nexuses, a conscious one, by its very nature and function, aggressively concentrates causality across time and space far more than any other system. This aspect of it makes it seem like something completely new, above and beyond all the other dynamics, when in reality it concentrates and channels those dynamics far more than any other system.

      Who goes for strong emergence? More than you might expect. A lot of philosophers, particularly those who subscribe to property dualism or even the more physicalist dualisms. Even some scientists, like Sara Imari Walker, as revealed in this interview by Sean Carroll.


      1. Hi Mike,

        I didn’t intend to treat consciousness any differently than any other phenomenon in this reply above, so I’m curious why you agree up until that point? My writing may have been less than clear so let me try and illuminate.

        One of my thoughts while writing is that the idea of higher-order systems displaying properties that cannot be mapped to the actions of the lower-order systems and/or its constituent parts, and cannot be shown as plausible given the properties of those constituent parts, doesn’t make any intuitive sense to me. If there is something happening that is inexplicable on the basis of the actual properties of the lower-order domain(s) and/or constituent elements, then there’s probably something else involved we don’t yet understand. And I’m saying this is true of consciousness as well as any relatively complex system.

        I read the transcript of Carroll’s podcast with Sara Walker, and although she said she is a strong believer in strong emergence, I still didn’t understand that she thought the above was the reality of it. She seemed to be suggesting there are more fundamental physics at work in the universe than we’ve identified to date, if I understood her. So, that to me wouldn’t be an “emergence,” it just means there’s more involved than we thought. And once we understand it, it seems like we’d be back at a straightforward explanation or weak emergence.

        My point with consciousness is that I don’t think it makes sense to think of it in terms of strong emergence as I’ve defined it above—(i.e. not comprehensible in terms of lower-order domains, with those domains being the sole contributing elements to conscious systems)—and so that leaves only one conclusion: consciousness isn’t “created” somewhere along the way any more than a rose bush is. Either the potential for consciousness has been integral to the lower-order domains for all of time, just as one assumes the potential for a rose bush existed for all of time and required nothing “magical” to occur; or, consciousness is something that has been in existence all along in a different form and one that we’ve not fully understood.

        I think you would disagree with the second half of the statement, but I was just trying to say that there are two ways of avoiding the conundrum of strong emergence: either it emerged in a way we’ll come to understand, or it’s been there all along in some way. If consciousness is simply the action of fundamental particles and forces in complex arrangements, then one day we ought to understand how this may be so. But it seems unlikely to me that consciousness comes about through the action of lower-order domains alone, and will never be mappable onto those processes. The challenge with all this is that it gets us into the hard problem. Which there’s no need to re-explore.

        If any explanation that involves the suggestion of new fundamental properties to the universe that are not in the present catalog is strong emergence, then I guess I’m a strong emergentist, but I would resist the label because it makes no sense to me to call that emergence. Walker’s notion seems to be that something exists in the universe called information that interacts with matter and energy to build complex systems, but I didn’t get that she insisted this was the product of lower domains. I thought she was suggesting it was some sort of new physics we’ve not understood previously. If she’s correct, then as we come to understand there is more that is “fundamental” than we thought, then certain things like the emergence of life may be readily mapped to or be explicable in terms of this additional basic element of reality.



        1. Hi Michael,
          “either it emerged in a way we’ll come to understand, or it’s been there all along in some way.”

          Sorry, I missed above where you made the first part of this clear. I’m very much in the camp that sees the first option as likely, but as you noted, this is probably where we do disagree. I will say that if I thought the first option was unlikely, it would make the second one seem inevitable.

          I have to admit it’s been a while since I listened to that interview, but I think you got Walker’s position right. She’s not positing anything non-physical, just physics we don’t currently have. But I’m not sure how much distinction there really is there. They both amount to saying something extra comes into the picture during the emergence, that it’s something other than reverse reduction. Whether it’s new physics or non-physics depends on how we define “physical”.

          That said, I also have to admit I haven’t studied her position in detail. I just remembered her as an example of a strong emergentist (or at least someone who self-describes that way).


  9. “Reality might be structure, relations, and processes all the way down.”

    What a lovely concept – nothing exists! In some senses this is extremely hard to grapple with as my non-existent self and body sips my morning coffee and contemplates the point.

    In other senses the more mystical or imaginative side of my non-existent consciousness is deeply attracted by the concept of everything as illusion and wonders at the sheer enormity of the possibilities it presents.

    As in a lucid dream it seems to present the possibility that if all is illusion we can conjure whatever we desire out of thin air. And indeed science seems to be pointing that way. A perfect example being the re-arrangement of atoms to form hitherto unheard-of and non-existent materials.

    Just the sort of wonderful concepts dreamt up from the depths of meditation or a medicinal blast of psychedelics.


    1. You actually zero in on a reason I think we have to consider all those structures as real, at least in some sense. They’re actually not arbitrary, but follow rigid rules that science can discover. And they are part of the causal chain, being caused by prior phenomena and causing subsequent phenomena.

      I think that’s why I don’t feel it’s productive to use the “illusion” label here. If they were truly illusions, I think reality would be similar to what you describe.


  10. “In other words, phenomenal consciousness is access consciousness from the inside.”

    I’ve seen you assert this often enough for it to become a slogan. I know you think there is something it is like to be a sufficiently complex information system — that such a system would have a phenomenal “inside”. Would you agree that it’s a belief not based on any present physics facts?

    You make the analogy of one smartphone instance being different in data processing from another, but this doesn’t address the subjective-objective divide that the notions of phenomenal consciousness and access consciousness do. Neither cellphones nor any other device we know of has anything akin to phenomenal consciousness, right?

    Brains, for some reason, have a property nothing else we know of does, yes?

    As an aside, FWIW, I think reduction and emergence are just opposite directions of levels of organization. Michael above makes a very good point about reduction often being easier. Taking a rose apart to get atoms isn’t hard, but starting with atoms and coming up with a rose is another matter.

    And I find the argument about strong (or ontological) versus weak (or epistemological) emergence somewhat unsatisfying simply because it’s so hard to define what’s ontologically real. All chemistry is emergent. Is chemistry “real”? Depends on what you mean by real. Unicorns can be “real” if ideas are real.

    So I just take a simplistic view that emergence (reverse reduction) is just new behavior due to complexity.


    1. If I worded my “slogan” the way you did, it would be a brute identity relationship and would be a logically indefensible leap. However, that’s not the way I present it. We know consciousness requires a brain. (Or at least any consciousness we can currently infer seems to require it.) And we have abundant evidence for access consciousness, Chalmers’ “easy problems” (and more since he omitted affect processing). Access consciousness also requires a brain (at least currently). Every aspect of subjective experience anyone’s been able to identify seems to map to at least a plausible functional aspect of access consciousness.

      So, they are in the same place and have the same causal roles. I take that as sufficient reason to conclude they are the same. I’m open to being convinced otherwise. If someone could identify an aspect of phenomenal consciousness that can’t be accounted for with at least a plausible access explanation, I’d reconsider. (Warning: I’d be very interested to have a response that successfully does this, but I will analyze any answer that purports to.)

      Until then, it seems to me that the assertion they are separate is the belief based on no evidence. I noted above what would change my mind. My question to people who insist they’re different is what would change their mind?

      No, phones currently don’t have phenomenal consciousness (at least by most people’s definitions), but then no phone currently has access consciousness either. When a phone has access consciousness, I think it’ll have phenomenal consciousness. (Unless of course we declare it not so by definitional fiat.)

      Emergence as reverse reduction? I like that phrase and will have to remember it! I agree completely. In that sense, I think strong emergence would be emergence someone would assert can’t be reduced, that it’s something other than reverse reduction.


      1. I’m confused. The “slogan” I quoted was your words. Words I’ve seen you use more than once (which is why I said “slogan”).

        Regardless, I think many would dispute the plausibility that even a full account of access necessarily accounts for phenomenal experience. The bottom line is that you believe access consciousness and phenomenal consciousness are not separate. Some people believe otherwise. No one knows.

        It may depend on exactly what you mean by access consciousness. In my view it’s essentially Searle’s Giant File Room — which is why I don’t think it accounts for phenomenal consciousness. If you grant subjectivity to access consciousness (which the word “consciousness” suggests) then I can see why you conflate them. (But that just moves the goal posts. What accounts for the subjectivity of access?)


        1. I was talking about the way you characterized my position in the paragraph following the quote. But I can see how my wording (which was poor) might lead you to think I was addressing the quote in particular. Sorry, my bad.

          Certainly a lot of people dispute what I’m saying. But I haven’t seen anyone adequately address the points noted. And definitely some people believe differently. Of course, some people believe evolution is false, we didn’t land on the moon, lizard people control the government, and a lot of other notions. I’m more interested in what evidence or logic, if any, can be mustered.

          On access consciousness, I’m following the distinction Ned Block made, of information being accessible for verbal report, reasoning, and behavioral control. All of this is built on a system with exteroceptive, interoceptive, and affective processing. This system can account for every aspect of subjectivity I’ve seen cogently identified. But maybe I just haven’t heard the right one(s) yet.


          1. What part of my characterization misstates your position?

            “And definitely some people believe differently. Of course, some people believe evolution is false, we didn’t land on the moon, lizard people control the government, and a lot of other notions.”

            Dude! 😮 😦 It’s kind of backhanded ad hominem to associate serious educated opposing views with nutcases. And it has nothing to do with any of the points being raised by anyone here. Or do you really have that much disdain for those who think subjective experience is both unique (so far to brains) and “currently mysterious” (not an emergent behavior explained by current physics)?

            “This system can account for every aspect of subjectivity I’ve seen cogently identified.”

            Except the experience of subjectivity itself. I know you think that’s the illusion, but that’s a fundamental point of contention among even the experts. As I said above, it just moves the goal posts. Why is there something it is like to be inside access consciousness? The physics we know doesn’t mandate it, and it doesn’t appear to exist even in complex machines like plants (as far as we know).

            So, fine, why is my stream of access consciousness characterized by an undeniable, incorrigible, and fundamental sense of there being something it is like to be me? That’s one hell of an illusion.


          2. (This is a repeat of what I said above, but with the problematic language cleaned up.)

            If my “slogan” referred to what you describe, it would be a brute identity relationship and would be a logically indefensible leap. However, that’s not what I’m advocating. We know consciousness requires a brain. (Or at least any consciousness we can currently infer seems to require it.) And we have abundant evidence for access consciousness: Chalmers’ scientifically tractable “easy problems” (and more since he omitted affect processing). Access consciousness also requires a brain (at least currently). Every aspect of subjective experience anyone’s been able to cogently identify seems to map to at least a plausible functional aspect of access consciousness.

            So, they are in the same place and have the same causal roles. I take that as sufficient reason to conclude they are the same.
            (end repeat)

            Dude, if you’re going to rely on “That’s just your belief and others believe differently” arguments, then I think it’s fair game to point out who else relies on those types of arguments. It’s also worth noting that being offended, and calling into question the virtue of those who disagree, are part of the toolbox they’re quick to deploy.

            I don’t know how many times I’ve said I’m not an illusionist. It is fair to say the difference between me and them is largely semantic, but I think that difference matters.

            The “something it is like” phrase is vague. If we interpret it as having the perspective of a system which collects information in certain ways, from a particular vantage point, and with a particular background, then I don’t see how we could have a system with access consciousness without having that be present. It’s like asking how 1+2 can equal 3. The result is inherent in the operation. Unless it’s more like 1 + 2 = 4, but then I need to have it explained, in detail, what’s not being accounted for.

            You might respond that I’m not interpreting the “something it is like” phrase correctly. In that case, feel free to elaborate on what the phrase means to you.


          3. In fact I don’t think you’re interpreting the “something it is like” phrase correctly. I have elaborated on it many times in the past, so I’ll refer you to all those past comments and blog posts, but briefly:

            “If we interpret it as having the perspective of a system which collects information in certain ways, from a particular vantage point, and with a particular background,…”

            But what do you mean by “perspective” and “particular vantage point”? Those phrases assume something very different in a conscious being than they do in a plant or machine.

            We can talk about the “perspective” of a tree or smartphone, but the term means something far richer when we talk about the “perspective” of a human being. It isn’t just the rich set of linked concepts — the “particular background” you mentioned — it’s also feelings and the experience of having those feelings. In particular it’s the nature and quality of the real-time experience — the autobiographical now — that lies at the heart of the something it is like to be human.

            The phrase may feel vague, but that’s because consciousness has (currently) ineffable qualities. That’s why it’s so hard to define. It’s a tricky concept. But “something it is like” actually does capture that striking difference between conscious beings and plants or smartphones.

            As I said before, I think you’re just moving the goal posts. We don’t have an explanation for exactly why my perspective is so vastly richer than even the smartest animals and infinitely beyond anything we can assign to a plant or supercomputer.

            “The result is inherent in the operation.”

            Perhaps. On the one hand we have brains, which have that rich real-time experience of a perspective, and on the other hand we have all other systems, which don’t. At what point does the operation of the system give rise to subjective experience? (At what point is there something it is like to be that system?)

            More importantly: How? If it’s just access consciousness, then how does that get so rich as to give rise to the reporting of real-time feelings and experience? What’s the difference between brains and every other system we know?

            “Dude, if you’re going to rely on ‘That’s just your belief and others believe differently’ arguments,…”

            You misunderstand. It’s not an argument one way or the other, since the point is that both sides are extrapolating from known facts to what they believe is true about consciousness. I’m reacting to the perception that you assume only one outcome from the known facts. I think we have a long way to go yet to determine the outcome. We need more facts to draw conclusions.

            (Did you ever watch The Good Wife? If so, do you remember that Federal judge who always made the lawyers add, “In my opinion,” after every statement? I think it’s important to remind ourselves that much of this is just, “In our opinion.”)


          4. “… I’m not an illusionist. It is fair to say the difference between me and them is largely semantic, but I think that difference matters.”

            Seriously Mike🤨….. Semantics is the difference that makes the difference??????

            “I’m not interpreting the “something it is like” phrase correctly. In that case, feel free to elaborate on what the phrase means to you.”

            A good definition of “something it is like” would be: “A localized field of experience that is multi-faceted.” The cool part is that I’m not defaulting to semantics with this definition but hopefully, others might find the definition useful. I seriously doubt it though…………..😞

            Have fun kids


          5. Yes, seriously Lee. I think the illusionists are right ontologically, but I find the argument that if experience is an illusion, then the illusion is the experience, pretty convincing. Which makes calling it an “illusion” unproductive. On the other hand, it leaves us with a view of experience that is scientifically tractable.

            “A localized field of experience that is multi-faceted” sounds interesting. But you used the word “experience” in it. How would you define that word?


          6. I think you’re an illusionist in all but name! I get that the “illusion” is that consciousness “isn’t what we think it is” but the thing about that is that we don’t know what consciousness is, so we actually don’t think it’s something specific that could be an illusion in the first place.

            I think it’s pretty clear we experience something. There definitely is something it is like to be human (as vague or ineffable as that phrase might be, so too is consciousness itself). There’s no illusion to cogito ergo sum. I think the most accurate stance is simply admitting we don’t really know exactly what’s going on there. Calling it an illusion (as I know we agree) was a poor choice of words.


          7. I don’t mean anything rarefied or tricky with “perspective” or “vantage point”. It fits with what you describe about real time experience. I’d say it’s a system taking in temporospatial information about its environment and making predictions about that environment, as well as itself, in real time.

            On the question of when is it like something to be a system, I don’t think there’s any fact of the matter on it. Just systems that are more or less like us. There’s no magical line, no point when the “lights come on”. Just increasing capabilities moving up the hierarchy I often describe.

            I’ve never seen The Good Wife. I am usually careful to delineate when I’m talking about established science and when I’m talking about my opinion. There is a lot of cognitive neuroscience which is objectively established, and I don’t think I’m getting on the wrong side of that boundary when I say everything described has at least plausible neuroscientific explanations. They might be wrong, but the point is that there’s no insurmountable metaphysical boundary here, except in people’s imagination. You referred me to your past writing. Here I’ll refer you to the hundreds of posts I’ve done on these topics over the years.

            If it makes you feel better to think of me as an illusionist, fair enough. Along the lines of our emergence conversation, I prefer to think of it as something emergent rather than an illusion. But the ontology is the same.


          8. I know you meant “perspective” and “vantage point” in the normal human sense, and that’s the point. When you define something it is like as “the perspective of a system which collects information in certain ways, from a particular vantage point, and with a particular background,” you are invoking a human perspective and vantage point. Only conscious beings have a meaningful perspective or vantage point.

            The phrase something it is like very nicely picks out the difference from other systems that also collect information and might be said to have a perspective or vantage point. There is nothing it is like to be those systems. You might believe it’s a spectrum, but there is one hell of a gap between conscious brains and all other systems we know of, and we can’t really know what’s in that gap.

            The gap between systems such as plants or laptops and systems where phenomenal experience is questionable (such as insects or fish) — let alone clearly conscious systems — is even larger. And as I mentioned to Eric below, the nature of the system is more of a question. I don’t disagree about the spectrum so much as see some big “quantum jumps” in it — places where something critical comes together to produce a new kind of emergent behavior.

            FWIW: You’ve used the term “plausible” a number of times here. The point I’m trying to make is that “plausible” is not “factual” — it’s an opinion. It may be a very well grounded one, but it’s still not factual.


          9. “But you used the word “experience” in it. How would you define that word?”

            Now that all depends upon what the definition of is, is…….


  11. Wyrd,
    I’d say that I’ve had a productive discussion with Mike above involving the concern that you’ve expressed about his current belief that a phenomenal dynamic inherently exists with an access dynamic. I wonder if you would comment on the position that I’ve now developed?

    The position is that in a causal world, no machine information will exist as such without associated instantiation mechanisms for realization. For example the Betamax tape is not informative to the VHS machine in the intended sense. So even if effectively encoded with the “proto machine information” of Cowboy Bebop, it will only provide the intended information to a mechanism which is set up to deliver this audio/video in some manner. I know of no machine, whether designed or evolved, which is able to accept universal information. In all cases information seems to only exist as such to a given machine by means of physics based instantiation mechanisms. Mike hasn’t yet provided a counterexample.

    Instead of attempting to answer the unfalsifiable question which he’s presented (or to give him an element of the phenomenal that he’s unable to map back to access), might we help debunk their position by observing that they conceive of a subjective dynamic which exists in a way that nothing of this world is known to, or through non-mechanism-based function? Have I effectively reduced why it is that Searle’s Chinese room, Block’s China brain, Schwitzgebel’s USA consciousness, and my own thumb pain thought experiment, present various ridiculous implications of a status quo belief?


    1. I’m not really clear on what question you’re asking at the end there, but I have many times made the point that brains of conscious beings have an operational property (subjective experience) we don’t observe in any other system, natural or man-made (as far as we know). Mike’s basic point (and I actually agree with him this far) is that the right kind of information processing system — necessarily very complex — would have that operational property (or something that seems like it).

      Where he and I part ways involves what the right kind of information processing system is that can produce it. Clearly brains are the right kind. The question is what other systems are.

      As for information, it’s certainly true that information always has a physical instance, and any system that consumes information has to “understand” the format of that physical instance. This seems almost trivially true to me; what makes it seem significant to you?


      1. Right Wyrd, it’s trivially true to us that any system which consumes information must “understand” that information format. This is to say that machine information can never be universal in a natural world — it will only exist in respect to the sort of mechanism that consumes it, whether genetic code, a CD, printed English words, and so on.

        Our opposition however presumes just one exception to this rule: the information associated with phenomenal experience. Thus a Chinese room, or China brain, or the USA, or paper with the right information on it that’s converted into a proper second set of paper with information on it, will create something that has phenomenal experience. Doesn’t that seem like an extra convenient way around what many of us consider “a hard problem”? So doesn’t their single presumption of universal machine information seem like a vulnerable spot in their formidable academic defenses?


        1. Where do you get the idea anyone is positing universal machine information?

          Searle’s Giant File Room, for instance, is entirely dependent on the structure of the submitted questions as well as the filing system. So is any other information processing system including humans. For example, I am utterly unable to process information from radio or TV stations or cell phone towers. I have machines that do it with ease, but that information is inaccessible directly.

          As far as I can tell, format compatibility always matters. There is no such thing as “universal information.”

          (Are you perhaps thinking of abstractions, such as math or fiction, that have an implementation-independent character? Even so, such are always physically reified so format compatibility matters. I cannot, for instance, directly make use of an ebook stored in memory. I need a machine to convert it to a format compatible with my human I/O devices.)


          1. Wyrd,
            What I’m thinking of here is a position held by our opposition, not me. They believe in universal machine information regarding phenomenal experience (though I suppose that this is only implicit for the majority of them). Mike and I have discussed this enough to suspect he’d say that he explicitly believes in universal phenomenal experience information (right Mike?). Notice that without this element of their platform there shouldn’t be anything phenomenal regarding Searle’s Chinese room, Block’s China brain, Schwitzgebel’s USA consciousness, or my own such thought experiment.

            Consider mine. It’s presumed that when my thumb gets whacked, information is sent to my brain and processed in various ways that result in my associated phenomenal experience. But does this occur through the information processing itself as our opposition believes, or rather does this processing animate various unknown phenomenal experience mechanisms as we believe? If it’s as they believe then information beyond mechanism animation would be needed. Thus if the right information on paper (correlated with what my whacked thumb sends my brain) were converted into the right second set of information on paper (correlated with my brain’s informational response), then something in this paper to paper conversion would thus feel what I do when my thumb gets whacked!

            Conversely the position of our side is that my brain’s processed information must animate various phenomenal experience creating mechanisms for me to feel “thumb pain”. So we don’t mandate universal machine information since we reserve some unknown phenomenal experience mechanism to render what’s sent to my brain as “information”, whereas they deny such mechanical instantiation to instead presume universal information in this regard.

            The substrate of phenomenal experience remains speculative of course, so this appears quite like gravity before Newton or even Einstein. Given how entrenched our opposition happens to be, I doubt that science will be able to advance here without experimental evidence for the substrate of phenomenal experience. I personally am intrigued by the potential for certain electromagnetic fields produced by synchronous neuron firing to serve this role. Regardless my naturalism prevents me from going with universal information.


          2. I honestly have no idea what “universal machine information regarding phenomenal experience” is. I’d need to have that defined to comment.

            It sounds like you’re talking about computationalism, and if so it’s certainly something Mike and I debated for years. We gave it up quite some time ago as a fruitless discussion. One key difference there and here is that I’m more skeptical of notions I don’t see demonstrated as fact. (Experiment is better than theory! Facts trump opinions, beliefs, and feelings.)

            Back when Mike and I were debating this, my version of your thumb pain thing was laser light. Only certain physical materials under certain conditions (in part involving energy) emit laser light — physical coherent photons. On the other hand, we can simulate laser operation to any desired degree of precision. We can even simulate the effects a laser might have on materials, but no photons are involved, and the only energy spent is in the numerical simulation itself. We can simulate cutting something, but nothing is cut.

            So is the subjective experience of consciousness like laser light? Is it something that arises in virtue of the operation of a certain special kind of physical machine? I think a physically isomorphic artificial brain stands a very good chance of being conscious. I’m skeptical a numerical computation is sufficiently similar structurally to generate consciousness. (FWIW, recently I wrote about a big part of why.)

            The very interesting question is what happens when we simulate consciousness, either by simulating a living brain, or by directly simulating the mind? Such a thing processes information numerically; it essentially transforms one set of numbers into another set of numbers. Computationalists believe that the output numbers would reflect a conscious mind. If a numeric model of your thumb was struck with the numeric model of a hammer, the flow of numbers would reflect your pain. Presumably some of those numbers would be your vocalizations, so we would hear you numerically screaming from the numeric model of your mouth.

            Keep in mind this scenario may be correct, but it’s so far down the line that we can only guess. One issue I have is that the scenario — and zombie scenarios in general — tend to beg the question. They include as a given the object of contention. Imagining a software simulation of sufficient complexity to model your mind assumes it’s possible, but whether it’s possible is the question.

            It’s easy to say that given a zombie or a software that acts indistinguishable from a phenomenal being, if there is no apparent difference who cares if the zombie really has subjective experience? (If it acts like a duck, get the orange sauce.) But the question is whether such a complex simulation is possible in the first place.

            More relevant here perhaps is the notion that if such a complex system did exist and did act conscious — passed the Rich Turing Test — then how can we judge whether it’s actually conscious or what that even means? As I understand Mike’s position, he feels the question is meaningless and, in any event, the illusion is that there’s anything special going on. It’s just that there’s something it is like to be the right kind of complex system — including thumb pain.


          3. “Mike and I have discussed this enough to suspect he’d say that he explicitly believes in universal phenomenal experience information (right Mike?).”

            Eric, I don’t know what that means. Just based on the phrase, it doesn’t sound like something I’d find plausible.


          4. Wyrd,
            Mike is quite important to each of us so before directly responding to you I’d like to hear what he has to say about the following.

            Well I’d hope that you wouldn’t consider universal phenomenal experience information plausible. This would be one of many strange implications of believing that phenomenal experience exists by means of information processing alone, or that there aren’t specialized mechanisms in the head which create phenomenal experience. Machine information in general is known to exist mechanistically. This is to say that proto information changes into actual information by means of mechanical instantiation. The Betamax tape requires the right sort of machine to convert its proto information into the actual information of Cowboy Bebop for example — there’s no universal information here or in any known machine, whether evolved or designed. For phenomenal experience however many theorists skip the mechanism part for an exception which needn’t bother with this step. Thus there’d be universal phenomenal experience information that, if inscribed on paper, for example, and properly converted to certain other inscribed paper, would create something with phenomenal experience.

            So the question is, do you explicitly believe in this, implicitly believe in this, or instead presume that there must be certain mechanisms in the head associated with phenomenal experience that become animated through non universal information? Note that this final option dismisses the possibility for Chinese rooms, China brains, USA consciousness, and my own thumb pain thought experiment.


          5. Eric,
            I’m not able to parse what you’re saying here.

            I think the main bone of contention is I see phenomenal experience as information, or as being composed of information. I’m open to changing my mind if someone can identify components or aspects of it that are not information. Until then, all you need for information is information processing.

            You can substitute “causation” for information if it makes you feel better. The ontology is the same.


          6. Part of what makes it a bone of contention (from where I sit) is that everything is information, but not everything is phenomenal experience, so there’s certainly no identity there. And it doesn’t answer the question of why some information has a subjective character — an inside where there is something it is like to be processing that information.


          7. To say that phenomenal experience is information is not to say that all information is phenomenal experience. There are people who make that move, such as Chalmers with his double aspect theory, but I think it’s a mistake. Tetris is information, but not all information is Tetris. (Unless one is a pan-Tetris-ist. 😶) In both cases, a particular architecture is required. A large part of cognitive neuroscience is about discovering that architecture.


          8. And that architecture might turn out to be operationally significant!

            My point was that calling subjective experience “information” doesn’t explain anything. Of course it’s information; at some level, everything is. As you say, it’s the nature of that information, and how it’s processed, that we’re seeking to understand.


          9. The architecture is very operationally significant. But the key is that the currency of the nervous system is information, far more so than any other system. It’s the adaptive value it brings to the organism. And the architecture is a tractable scientific problem, actually a range of problems, and progress is steadily being made.


          10. I completely agree that information processing is exceptional in brains! (In both senses of the word “exceptional.”) The confounding question is about the subjective aspect of it. Calling it access consciousness just gives the mystery a different label.


          11. Mike,
            Apparently you’re right that you haven’t yet parsed my argument regarding the relative nature of machine information. But observe how you’re still able to counter my argument by asserting that you’ll continue to believe that phenomenal experience exists as information in itself until someone can demonstrate an element of phenomenal experience that isn’t “informational”. How might one even conceivably demonstrate that? So your argument has an air of unfalsifiability to it. It reminds me of various a priori proofs for the existence of God. They seem to boil down to “God is causation”. Perhaps so, though this conflicts with my own brand of metaphysics.

            It seems to me that science instead progresses through a posteriori arguments. For example, consider how I’d like scientists to test the idea that phenomenal experience exists in the form of an electromagnetic field created by certain synchronous neuron firing. To do so they’d see if they could use non-brain synchronous charges to create field effects in the head which tamper with a person’s phenomenal experience. And as I recall you recently told me that even if that were the case, your theory wouldn’t be violated. To me that seems quite suspicious. From here we’d have an experimentally verified phenomenal experience medium, just as we have for the existence of sound and light, and yet you’d regress back to a universal information explanation?

            In any case the point that I’d like you to grasp is that the sort of information which I’m referring to, or “machine information”, will only exist in a proto sense until mechanically instantiated. So here existence would be relative to such mechanisms. Thus a book on your shelf would not harbor information in the intended sense, that is until its words are actually read. Or a CD wouldn’t harbor any music information on it, that is until mechanically realized through a sound system. In either case we might say that proto information would exist beforehand, or potential information, but by themselves they wouldn’t provide machine information in the intended sense until mechanically realized. I can’t think of a single example of machine information, whether designed or evolved, that doesn’t require associated instantiation mechanisms for realization. Your conception of mechanism-universal phenomenal experience information, if true, would be the only exception that I know of. So that, rather than finding a non-informative example of phenomenal experience, is why I think you should reconsider.


          12. Right Wyrd, the statement is quite a mouthful. Here I mean “universal” in the standard sense, or “for application in general”. The opposite would be “mechanism specific phenomenal experience information”, as I suspect to be the case. If there are specific kinds of brain mechanisms which create phenomenal experience, then they should be animated by means of the sort of information which applies to them. For example there’s McFadden’s cemi. Here certain neural firing patterns create the right sort of EM radiation to exist as phenomenal experience itself. So the right neural firing patterns could be considered such mechanism specific information. Similarly only a specific kind of information is able to animate my computer screen. We obviously can’t use the first kind of information (neural firing patterns) to animate the second kind of machine (a computer monitor), or vice versa. Machine information in a natural world seems mandated to be mechanism specific.


          13. Then I have to return to the first question I asked: Where do you get the idea anyone is positing universal machine information?

            I don’t perceive Mike to have ever said anything along those lines. His first entry in this thread, as well as what he and I commented about architecture here, both imply specific information formats, protocols, and structures.

          14. It’s as you surmised when you asked me before Wyrd; people who believe in “computationalism”. In truth I consider that title far too generous. It’s been established pretty well in science that the brain functions by means of logic-based dynamics such as AND, OR, and NOT. So I instead refer to their position as “informationism”, and in the sense that here phenomenal experience is thought to exist by means of mechanism general rather than specific information processing. Thus it may be found in a Chinese room, a China brain, the USA as a whole, and when paper with certain information on it is properly converted into other paper with information on it. It could be that each of these four systems is acknowledged to have different information formats, protocols, and structures, though I highly doubt that such systems have the potential to produce something with phenomenal experience as the brain seems to. Like all else that I know of, phenomenal experience should be mechanism specific rather than mechanism general. Surely we’re in agreement here.


          15. I’ve also heard “computationalism” referred to as “information patternism” (by philosopher Susan Schneider) — I rather like that term as I think it hits to the heart of the matter.

            I absolutely do agree any and all information is mechanism specific, but as I said before, I don’t perceive that anyone — Mike or computationalists — thinks otherwise. That’s why I don’t understand the term “universal” in this context. Do you mean “abstract”?

            The post isn’t about computationalism, so I suppose we’re off-topic, but assuming you mean that information can be abstract, and having seen some of the debates between you and Mike, I think the issue is that you feel thumb pain can only manifest as a physical property of a physical system? It cannot be numerically simulated?

            Put a poor human test subject in room A and an AGI system in room B. From room C push a button that causes both to be hit on the thumb with a hammer. In room A, actual thumb, actual hammer. In room B, a numerical simulation. The human yells through its mouth hole, and the AGI yells through its vocal output device. Both go on to say, “Ow! That really hurt!! Why did you do that?!” Both would give indistinguishable reports to questions about their thumb pain.

            The human and the AGI are indistinguishable from the outside, but what about from the inside? That, I think, is the sticking point. As I understand Mike’s assertion that “phenomenal consciousness is access consciousness from the inside” the AGI would have a similar “experience” of the world and thus of thumb pain. The human and AGI would be indistinguishable from the inside. (The “illusionist” stance is not that the AGI is more like us but that we’re more like the AGI.)

            My view, and I take it yours as well, is that while a physical system might have experience — whatever that is — it remains to be seen if a numerical calculation can. A key aspect of this view is denying knowledge of what the subjective experience is — it’s the “hard problem” and currently a mystery. Yes, absolutely the brain processes information like nothing else, but why, oh, why is there something it is like to be an information processing brain? We need new facts about how the brain works.

          16. Wyrd,
            Though we are generally in agreement here, we use different terms and have different ways of arguing our positions. Surely you must like your approach far better than mine, just as I’m the same in reverse. So it goes.

            On your perception that Mike considers all information to be mechanism specific, isn’t it strange that you’re taking a position on what he believes on his blog without him jumping in to support your assertion of my wrongness? All that he seems to say to me here is that he can’t quite parse my meaning.

            In any case I’m not talking about all information, since the term might very well be defined in a way that doesn’t apply. I’m talking about what I’ve referred to here as either designed or evolved “machine information”. I don’t know of a single case where such information is not mechanism specific. Nevertheless some people seem to think that phenomenal experience should be considered a machine trait which requires no mechanism specific instantiation. And notice that with no mechanisms to experimentally assess, there’s also no potential for refutation. Unfalsifiability is a great implicit bonus for a position, that is unless explicitly pointed out.

            I realize that you don’t think Mike’s side goes this far. We might quibble over semantics here forever however. Perhaps it would be best to make our cases through examples. Observe that I consider McFadden’s EM field theory to be a promising potential consciousness mechanism. Conversely they seem to staunchly deny any requirement beyond certain information that’s converted into other information. The mechanism specific element to my position is a certain type of EM field. Conversely their position seems mechanism general and so can be parodied with various thought experiments.

            On numerical simulations of thumb pain, technically we should say that they can occur to at least some level of predictive success. Of course just as simulations of rain shouldn’t get anything wet, simulations of someone in pain shouldn’t hurt. It’s strange that some have put so much stock into the “simulate” term when “not real” is baked right into the way the term is generally used.

            On your thumb pain versus simulated thumb pain scenario, actually I think I’d object earlier than you have. I don’t believe that physics would permit the “zombie” form to even act like the human all that well. If so then something could have evolved to effectively function that way.

            This leads me to propose an answer for why (oh why) it has to feel like something in order for something to function as we do. I consider the punishment / reward here to exist as motivation for agency. So I consider the non-conscious computational brain to set up a second mode of function by which existence will be perceived, also known as “consciousness”. Experimental evidence for such mechanistic function should be crucial, or what you’re calling “facts”.


          17. I think I should be clear that, while I agree with aspects of your position, I also agree with aspects of Mike’s. I think there is too much unknown for certainty. I can see things going either way.

            That said, I am indeed skeptical that a numerical calculation involving a model, either of the brain or of the mind, will produce results indicating consciousness, let alone phenomenal experience.

            At the same time, I have to acknowledge that it’s extremely hard to say why not. We are, in part, confounded by not only not knowing what consciousness is, but by not even knowing how to define it.

            It is true that a rain simulation doesn’t get anyone wet, but a sufficiently good one describes every aspect of it. A sufficiently good model of the brain likewise describes every aspect of brain function. If such a model describes the physics and consciousness is a natural physical process, then what is left out of the model? If consciousness isn’t in the model, where is it? (If EM fields are important, those are physical, so include them in the model.)

            It’s possible the only reason this won’t work is that such a model would be too formidable. I once calculated the bare minimum size requirement of just the brain’s connectome. I came up with 24 petabytes to describe just the connectome (based on number of neurons and average number of synapses). On top of that, all points of the model need to work in parallel, so running it, at least with conventional computing, may be effectively impossible.
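            For a rough sense of that scale, here is one way the arithmetic could go. The neuron count, synapses per neuron, and per-synapse byte cost below are my own illustrative assumptions (common ballpark estimates, not necessarily the figures Wyrd used), chosen to show how one lands in the vicinity of 24 petabytes:

```python
# Back-of-envelope estimate of raw connectome storage.
# All three constants are assumptions for illustration only.
NEURONS = 86e9               # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 7_000  # assumed average synapses per neuron
BYTES_PER_SYNAPSE = 40       # assumed: two neuron IDs, weight, delay, flags

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15

print(f"{petabytes:.1f} PB")  # roughly 24 PB
```

            Even at this size, storage is only part of the problem; as noted above, every element of the model would have to update in parallel.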

            But technology evolves, so who knows. What output would an accurate brain model produce? Nothing in physics rules out such a model. It’s possible we will someday make one. What will happen when we turn it on?

            FWIW, I foresee, firstly, that software is always buggy, and it may be humanly impossible to make such a vast project work. Remember the Star Wars defense program? Many big software projects have crashed on the rocks of “this is too hard to get right!”

            Secondly, assuming it works right, maybe the result is a living but comatose brain. Living tissue, but no one home. Or a consciousness that’s essentially white noise. Or a gibberish consciousness. Or an insane or incoherent one. Or an idiot or brain damaged one. Or any of the myriad other failure modes.

            Or it just might work. I think that has to be acknowledged.

            But I am skeptical. I think there is merit in the idea that a specific physical process is necessary for consciousness, and I do think consciousness emerges from the physical process (just like laser light emerges from the physical process of a lasing material).

            The thing about consciousness that makes it different from everything else is that it has an inside, a subjective aspect. If that inside, as one might think, actually requires being inside a specific physical process, then it seems likely a numerical calculation, being such a different process, won’t have that inside. (I just posted about how Nagel’s famous paper is an argument that we need to focus on understanding subjectivity. I quite agree.)

            In closing…

            “If so then something could have evolved to effectively function that way.”

            😀 That’s like saying birds should have evolved to fly like airplanes or that fish should have evolved to swim like torpedoes. I’ve never seen a naturally evolved microwave oven, either. 😉

          18. Eric,
            I haven’t weighed in because I don’t perceive it’s productive. Your description of my views shows you either don’t understand them or are just interested in getting a rise out of me. Attempts at correction seem to hit an impenetrable wall. You just seem interested in segueing to your standard talking points. I’ll run on that treadmill for a few rounds per thread, but in the absence of anything else, that’s about it.

          19. Wow Mike, that’s pretty strong. Either I don’t understand your beliefs here (in which case I suppose that my criticism doesn’t apply), or I do understand them and simply want to mess with you? Well I’m certainly not here to mess with you! Regardless, if it’s true that I don’t grasp the manner in which you consider information to be mechanism specific, then I’m just happy to hear that this is indeed your perspective! Hopefully some day I’ll also grasp the means by which you consider information to be mechanism specific.


            “A sufficiently good model of the brain likewise describes every aspect of brain function. If such a model describes the physics and consciousness is a natural physical process, then what is left out of the model? If consciousness isn’t in the model, where is it? (If EM fields are important, those are physical, so include them in the model.)”

            Consider my own way of concluding that describing every aspect of brain function shouldn’t create brain function. It’s because descriptions merely exist as elements of language rather than what’s being described. The only thing that a description might be the same as is another description. So “5” can effectively be the same as “4+1” because each of them are descriptions in the first place. You mentioned “abstractions” before, and yes this term does seem to apply here. Another way of saying this is that language based models of things (unlike smaller and less detailed versions of things that are also sometimes called “models”) can never be those things. Such models of brains, the weather, or any other non-description will merely exist as descriptions, though what’s meant by “brains” and so on aren’t language elements (and even though I must use language to express this idea).

            Earlier I was instead referring to the idea of mechanism general function. The problem I see with this is that machine information seems to always be mechanism specific. In any case, what do you think of my “descriptions never exist as what’s described (unless what’s described is a description)” answer?

          20. “You mentioned ‘abstractions’ before, and yes this term does seem to apply here.”

            Your example of 4+1 and 5 demonstrates that’s exactly what we’re talking about here. (This is why standard terminology is useful. The term “abstraction” would have put us on the same page several rounds ago.) And, yes, that a numerical model is an abstraction and description of a physical process is exactly the issue. I’ve written a lot of posts discussing the differences between a physical process and a numeric model of it. Mike and I discussed the point for years. I agree a model is a description… but that description has the potential to be entirely accurate.

            As I wrote last time, I think we must acknowledge that it’s very hard to say exactly why a sufficiently accurate model of a brain (and body and environment) wouldn’t provide a description of a conscious entity operating in that environment. If consciousness is a physical process, and we model that process accurately, why wouldn’t the model behave like a conscious being?

            You said you don’t think the physics would allow such a model, but exactly why not? What would the failure mode be? Where does the physics fail?

            I mentioned some practical limits before, the model being too formidable to run or too difficult to get right, but the theoretical question, I think, is very open. My statement has always been that if consciousness emerges from, and supervenes on, physical structure, it might not arise in a numerical model of that process.

            The critical point is whether consciousness only arises from being inside a specific physical process (one so far only implemented in brains). That is, there is something it is like to be a particular physical system. If so, it’s possible a numerical model won’t capture that.

            But saying exactly why kind of requires understanding why there is something it is like to be a particular physical system in the first place. And since that’s the infamous “hard” problem, I sure don’t have a clue. No one does.

            “In any case, what do you think of my ‘descriptions never exist as what’s described (unless what’s described is a description)’ answer?”

            Yes, a known truth. The only completely “lossless” model is of another model. The usual example is of a calculator. There is an abstraction (or description) — a model of a simple calculator. The calculator model is reified in countless hardware and software designs with no loss of any property of that model.

            Keep in mind, part of the computationalist stance is that the human mind is ultimately akin to a calculator model — an abstraction that can be modeled without loss of any property. Our brains are hardwired computational devices that run that abstraction. It is of this point I’m most skeptical.

          21. Wyrd,
            It sounds like we’re on the same page. If I ever implied that I don’t think a sufficiently complex model would effectively describe mental function, then I consider that to have been an error, or at least a miscommunication. I believe that the mind functions causally, as well as that causal dynamics may be modeled (unlike non causal dynamics). So just as a generic calculator in whatever machine may perform the same essential operations to produce the description of “5”, theoretically a generic model in whatever machine could produce an effective “thumb pain” description. This is to say, model what a brain does to produce this felt sensation. But while the brain produces something that feels it, the model merely produces something that describes what it takes to create such a feeling, or an abstraction. It’s conceptually the same as a succinct human term such as “5”. (Or perhaps pi would be a bit more appropriate since a given terminological description should only provide some degree of accuracy?)

            I’m not sure I’d say that an “inside” element to mind changes anything. This would simply be a description of how the physics works, whether through EM fields or whatever. We’ll need experimental evidence of what’s going on in the brain here before we have any hope of grasping such physics, and so are able to build effective models.

            “Keep in mind, part of the computationalist stance is that the human mind is ultimately akin to a calculator model — an abstraction that can be modeled without loss of any property. Our brains are hardwired computational devices that run that abstraction. It is of this point I’m most skeptical.”

            Definitely! And for me the “skeptical” characterization would be overly diplomatic. What I experience when my thumb gets whacked should not be called “an abstraction”! It is not akin to the humanly fabricated term “5”.

            What’s strange to me is how highly educated people could believe that phenomenal experience exists as an abstraction, or like a made-up term. Wouldn’t it have evolved and so have existed before language and thus abstractions? (And confusingly, “phenomenal experience” does happen to be a made-up term, though we use this for thinking and communicating.) Perhaps when reduced this way they’d claim otherwise, but for the sake of convenience go back to mere abstractions when actually theorizing consciousness, peppered with all sorts of neurological work?


          22. Yeah, I think our views are fairly similar. I’m more a skeptic and see both you and Mike as more gnostic, which makes me feel like I’m in the middle. I can’t deny the possibility that computationalism is right, but I’m going to need to see it working before I believe it.

            “But while the brain produces something that feels it, the model merely produces something that describes what it takes to create such a feeling,”

            Your view has the brain producing something. Computationalism asks: Why wouldn’t a sufficiently accurate model also produce (a good description of) it?

            I think there being an “inside” to consciousness is the whole point. It’s a game-changer to me. No other physical phenomenon has an inside. Geology, electronics, astronomy, what have you, the only system we know that has a subjective inside is a brain. The existence of that inside is Chalmers’s “hard” problem.

            The “inside” — the something it is like — may supervene on literally being inside a certain kind of physical mechanism. It may not translate to numerical calculation because the physical structures are so different.

            “What’s strange to me is how highly educated people could believe that phenomenal experience exists as an abstraction, or like a made up term?”

            Let’s be clear; they believe it could (or maybe would) exist in a numerical model — that the output data would reflect a conscious mind experiencing subjectivity in a virtual environment (that unfortunately includes virtual hammers smashing virtual thumbs). It would be a description, but so is a description of a friend. You know the friend is actually conscious, so the description reflects a conscious mind. Computationalism believes the same would be true of the description given by a model. It would reflect a conscious mind.

            But, I have a hard time seeing where the inside comes from there. I’ll need to see it to believe it.

          23. Well thanks for the conversation Wyrd. I realize that I can sometimes seem like the jerk who’s too dismissive of others’ beliefs. I guess it’s because I’m quite passionate about this stuff and it’s so clear to me that science has gone the wrong way. Often however my strong rhetoric doesn’t seem to help my case against the powers that be. Let me know if you ever want to talk about McFadden’s potential solution.

            So you’ve got a few days with Bentley? Sounds good.


          24. “…it’s so clear to me that science has gone the wrong way.”

            Well, I can very well relate to that feeling (and I agree), but always keep in mind that it’s exactly when things seem so clear that one needs to be the most cautious. In today’s blog post, I referenced two recent posts on the Triton Station blog written by astronomer Stacy McGaugh. There’s a great quote in the second post:

            “What gets us in trouble is not what we don’t know. It’s what we know for sure that just ain’t so.” ~Josh Billings

            Certainty is a killer.

            I can also relate to people reacting to strong rhetoric. I’ve lived there all my life! But people don’t generally handle well having their beliefs challenged. Very few people are actually as open-minded as they sometimes think they are.

            “So you’ve got a few days with Bentley?”

            Yep. Dog-sitting my favorite terrier! She’s 87.3% pure brown sugar.

  12. I would like to add one closing comment to this discussion: The “subjective substrate” upon which the system of mind relies in order to navigate its own experience is the illusion, not the other way around. Make no mistake kids: our localized field of experience is not a subjective one, it is an objective experience, all the way down the ladder of complexity. And to be ever more explicit, that “subjective substrate” upon which the system of mind relies is the exclusive contributing factor which ultimately makes the Cartesian Me a fundamentally delusional system.

    When one individual believes a delusion that individual is considered to be insane, but when more than one person believes the same delusion it is called a religion. The only way to avoid this delusional vicious cycle of subjectivity is to become intellectually self-reliant and to think for oneself.

    Good luck kids….


  13. Eric,

    “…I’m certainly not here to mess with you!”

    I can see that you yourself are not motivated by this purpose Eric, but there are others, including Mike, who choose that course of action as a backdoor ad hominem when someone like myself presents a viable argument that is original and cannot be effectively refuted. Whereas you choose not to engage, a tactic that I do not necessarily agree with but can respect.

    Consider my broad definition of consciousness that I posted earlier: Consciousness is “A (unified), localized field of experience that is multi-faceted.” Nobody else chose to comment but Mike challenged it by making a backdoor ad hominem statement:

    ““A localized field of experience that is multi-faceted” sounds interesting. But you used the word “experience” in it. How would you define that word?”

    Of course my rebuttal was: “Now that all depends upon what the definition of is, is…….”

    These types of comments are childish behavior. Personally, I do not believe that human beings in general are really interested in learning new things; they are only interested in playing the role of amateur philosophologists who endlessly debate ideas that someone other than themselves invented, the likes of academics and other adult day care workers.

    C’est la vie my internet friends and I wish you all the best……

    1. Well I didn’t respond because “A (unified), localized field of experience that is multi-faceted” seems true (depending on how you define “field”) but too general to add anything to the picture. Most things are multi-faceted, and I do think there is something to the point that, because “experience” and “consciousness” are roughly synonymous, using “experience” here makes the description circular.

      Consciousness, as has been said so many times, is really hard to define without being either trivial or laughably wrong. I spent a whole post trying, but I doubt I was any more successful than anyone ever is.

      The study of consciousness, in many regards, is in even worse shape than quantum mechanics when it comes to being in the dark and needing new facts. It can’t even come up with a definition for its object of study, and we’ve been thinking about it much longer. (Ironic that such fundamental questions as the nature of reality and the nature of ourselves are such tough nuts to crack.)


    2. I don’t think Mike is trying to mess with you Lee. I think he’s trying to engage just as he does for people at his site in general. People seem to like this. I think that’s why they come back. You always seem to in the end.

      There are certain people who see things fundamentally differently than I do, such as theists, and so even if I do like them, I don’t consider it possible to have productive discussions with them in this regard. And indeed, I like you. But in a nutshell I’m a posteriori while you’re a priori.


      1. “But in a nutshell I’m a posteriori while you’re a priori.”

        Well stated Eric. You are very perceptive; and for the most part, because I am a priori and everyone else is grounded in a posteriori, it is not productive for me to engage with those who do not share a common consensus point. But in spite of that fact, I’ve enjoyed my tenure here and have learned a bunch.

        Thanks for your tolerance folks…..


  14. Hey Mike,

    It’s funny you should bring this up. Do you remember the cycle of articles you, Tina, and I wrote about AI? My arguments against algorithmic (strictly defined) AI came, partially, from this idea of emergence along with Gödel’s incompleteness, an argument taken from Aquinas about the limitations of modelling, and an econ principle called “Goodhart’s Law.”

    Anyway, I mention all this both because I think it is connected to your current article – I’d consider myself an emergentist – and because that discussion we had 3 years ago has morphed into an article I’m submitting to the World Education Research Association. The idea being that top-down reforms to complex systems – a school district for example – are highly prone to backfiring regardless of how meticulously you model them and regardless of how powerful your supercomputers get.

    This is all a fancy way of saying thank you for inspiring me back then and keeping me thinking in the meantime.
