Strong vs weak emergence

The Neuroskeptic has an interesting post on a paper challenging theories of mind based on strong emergence.

A new paper offers a broad challenge to a certain kind of ‘grand theory’ about the brain. According to the authors, Federico E. Turkheimer and colleagues, it is problematic to build models of brain function that rely on ‘strong emergence’.

Two popular theories, the Free Energy Principle aka Bayesian Brain and the Integrated Information Theory model, are singled out as examples of strong emergence-based work.

I’m familiar with IIT (Integrated Information Theory), and as many of you know, I’m not a fan.  To be sure, integration is crucial, but in and of itself, it isn’t sufficient.  It matters what the integration is for.  IIT strikes me as a theory attempting to explain how the ghost in the machine arises.  Since I think the ghost is a mistaken concept, the theory seems fundamentally misguided.

I’m not really familiar with the Free Energy Principle, although it comes up in conversation from time to time.  The link discusses a Bayesian understanding of the brain, which seems plausible enough, although I’m not sure how strong emergence necessarily fits in.

But the reason for this post comes from a quote from the paper:

A system is said to exhibit strong emergence when its behaviour, or the consequence of its behaviour, exceeds the limits of its constituent parts. Thus the resulting behavioural properties of the system are caused by the interaction of the different layers of that system, but they cannot be derived simply by analysing the rules and individual parts that make up the system.

Weak emergence on the other hand, differs in the sense that whilst the emergent behaviour of the system is the product of interactions between its various layers, that behaviour is entirely encapsulated by the confines of the system itself, and as such, can be fully explained simply through an analysis of interactions between its elemental units.

I occasionally note that I consider emergent phenomena to be just as real as the underlying phenomena they emerge from.  Temperature is just as real as particle kinetics, that is, it remains a productive concept in our models.  There’s often a sentiment to regard such emergent phenomena as illusions, but that doesn’t strike me as productive, since too much of what we deal with is emergent.  Such an attitude can leave you questioning whether anything other than quantum fields and spacetime exists.

Emergence, for me, is strictly an epistemic concept.  It’s more about what our minds can cope with and understand than anything ontological.  It’s simply a point in the hierarchy of phenomena where it becomes productive for us to switch to a new model, a new theory to describe what’s going on.  This understanding matches up with weak emergence.

On the other hand, strong emergence is an ontological assertion.  It’s a statement that something wholly new comes into existence from the lower level phenomena, something that can’t be reduced to its constituents and interactions, even in principle.  This type of emergence strikes me as far more problematic.

While I do think emergence is an important concept, I usually resist it as an explanation by itself for anything, particularly something like consciousness.  Certainly what we call consciousness is emergent from neural activity, but simply saying that doesn’t seem like an interesting or useful explanation.  It matters a great deal how it emerges.  When we understand that emergence, similar to the way we understand how temperature emerges, then we’ll have something useful.
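
The temperature analogy can be made concrete with a short sketch (assuming an ideal monatomic gas; the argon atomic mass and the 300 K target are illustrative choices, not anything from the post): sample particle velocities from the Maxwell–Boltzmann distribution, then recover the macroscopic temperature from nothing but the microscopic speeds via equipartition, (3/2)·k_B·T = (1/2)·m·⟨v²⟩.

```python
import math
import random

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_ATOM = 6.6335e-26   # mass of an argon atom, kg (illustrative choice)

def sample_speeds_squared(temp, n, rng):
    """Draw v^2 for n particles; each velocity component is Gaussian
    with variance k_B * T / m (Maxwell-Boltzmann)."""
    sigma = math.sqrt(K_B * temp / M_ATOM)
    return [sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)) for _ in range(n)]

def temperature_from_kinetics(v_squared):
    """Recover T from mean kinetic energy: (3/2) k_B T = (1/2) m <v^2>."""
    mean_v2 = sum(v_squared) / len(v_squared)
    return M_ATOM * mean_v2 / (3.0 * K_B)

rng = random.Random(0)                            # fixed seed for repeatability
v2 = sample_speeds_squared(300.0, 100_000, rng)   # microscopic data only
t_est = temperature_from_kinetics(v2)             # macroscopic temperature recovered
```

No single particle has a temperature; the quantity only becomes well defined, and recoverable, as a statistical property of the whole ensemble, which is the epistemic sense of emergence described above.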

What do you think?  Am I too dismissive of strong emergence?  Or of IIT?  Anyone familiar enough with the Free Energy Principle to succinctly describe it?

This entry was posted in Zeitgeist. Bookmark the permalink.

42 Responses to Strong vs weak emergence

  1. paultorek says:

    I agree with you that weak emergence is common and strong emergence is an extravagant hypothesis. And with the authors, I guess. But calling IIT a version of “strong emergence” is crazy talk. IIT is a hypothesized definition of consciousness. (I.e. the hypothesis is that the correspondence will work out so convincingly that we’ll make it a definition.) I don’t think it’s a good candidate for a definition, but that has nothing to do with strong emergence, which simply isn’t in the picture.

    A brief glance at the Wiki link on the free energy principle makes me doubt that Free Energy Principle has any strong emergence to it either.

    • I actually saw the link with IIT as plausible. IIT, as I understand it, imagines that information integration is consciousness, including all the various aspects of it. That inherently strikes me as an assertion of strong emergence.

      On the other hand, it’s also a panpsychist theory, so maybe not.

      • Strictly speaking, I’m pretty sure IIT is not panpsychic as most people understand that term. But it is pluripsychic, if that makes any difference.

        • Tononi himself accepts the panpsychist implications. (This fits with what I remember reading from him years ago.)
          https://www.scottaaronson.com/blog/?p=1823

          • From the abstract of Consciousness: Here, There but Not Everywhere by Tononi and Koch:

            The theory vindicates some panpsychist intuitions – consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious, for example group of individuals or feed forward networks. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.

          • James, I stand corrected on Tononi’s views. I thought I recalled him explicitly endorsing panpsychism in that interaction with Aaronson, but after a quick scan, I’m totally mistaken. Thanks!

  2. [Donning contrarian hat]

    My understanding of IIT is that it is ultimately trying to explain “experience”, “what it is like”, i.e., qualia. It does this by postulating that certain kinds of processes (ones which integrate information) are involved. Ultimately, it postulates that certain systems integrate information as concepts, and when a system is in the process of representing these concepts in a certain way, that is qualia. So qualia emerges from processing concepts (information) in a certain way, just like temperature emerges from molecules acting in a certain way.

    • “So qualia emerges from processing concepts (information) in a certain way, ”

      Well, sure. But that description can be used for any physical theory of consciousness. It’s only interesting if it gets into the messy (and risky) details. When I read about IIT (admittedly a long time ago, before version 3), those details weren’t evident. It gets back to using emergence as an explanation by itself.

      • Actually, I’m pretty sure version 3.0 starts getting into those messy details. Vectors in state spaces, which would be beyond me except I’ve seen them before. Remember Chris Eliasmith and Semantic Pointers? Vectors in state spaces. Wanna know something even cooler? What would you call a neural mechanism that can support a near infinite number of configurations of vectors in state spaces, and output signals recognizable as representing those individual vectors in state spaces? How about a global workspace?

    • paultorek says:

      What James said. IIT posits that experience is a kind of information, and pretty much everybody accepts that there’s such a thing as information. That’s as weak as Weak Emergence can get.

  3. [adjusting contrarian hat]

    I think there is a decent argument to say some things are strongly emergent. Strong emergence is when there simply is no path that can predict the emergent pattern from the constituents. Stephen Wolfram has shown that there are simple systems of cellular automata with simple rules such that, given a starting configuration, there is no practical way to predict the state of the system after a large number of turns. Any pattern which emerges in such a system would be strongly emergent.
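
    Wolfram’s cellular automata point can be sketched directly (a toy illustration, assuming the standard elementary-CA conventions with cells beyond the grid held at 0): Rule 30 started from a single live cell follows trivially simple local rules, yet no known shortcut predicts row N short of simulating all the rows before it.

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton, Wolfram rule numbering.
    Cells outside the grid are treated as 0."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right  # 3-bit neighborhood
        out.append((rule >> idx) & 1)                # look up the rule bit
    return out

width = 31
row = [0] * width
row[width // 2] = 1        # single live cell in the middle
history = [row]
for _ in range(15):        # to get row 15, we must compute rows 1..14 first
    row = step(row)
    history.append(row)
```

    The rule fits in one byte, yet the resulting triangle is irregular enough that Wolfram proposed its center column as a random number source; whatever pattern it contains emerges from, but is not obviously readable off, the rule itself.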

    Given that, I have reason to believe Consciousness is weakly emergent.

    • [edit above: Strong emergence is when there simply is no practical path, as opposed to a theoretical path]

    • If I parse correctly, you’re saying that strong emergence exists but that consciousness is more an example of the weak variety. If so, I can agree on the latter.

      I do think that if by “strong emergence” you mean there’s no practical way to predict it, but that it’s possible in principle, I would think that would be weak emergence. Although it wouldn’t surprise me if opinions on this vary.

  4. James Cross says:

    Is there any quantitative difference between strong and weak emergence? Or, is the differentiation purely qualitative?

    Would the properties of water emerging from combining hydrogen and oxygen be strong or weak emergence?

    • As far as I know, they’re epistemically equivalent. If they weren’t, the difference would be testable. But in cases where we can’t follow the hierarchy all the way up or down, strictly speaking, we can’t rule out some ontological aspect. I’m a skeptic, so I assume that in the absence of evidence, those additional ontics don’t exist, but admittedly it’s more a matter of philosophy than science.

      • James Cross says:

        Not sure I totally follow your point. Regarding water…

        “Another example from physics of strong emergence is water, being apparently unpredictable even given a meticulous analysis to the properties of its constituent atoms. It would appear that no computational description of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts.”

        http://complexitylabs.io/strong-weak-emergence/

        • Wyrd Smythe says:

          Is the difference between “unpredictable” and “unexplainable” significant? I’m not sure why I should care about “unpredictable” because lots of fully deterministic systems are. Being unable to explain it seems like a whole other ballgame.

          • It seems like an explanation is a theory, one that we would expect to make predictions, even if imprecise ones. Of course, those predictions may not be testable.

          • Wyrd Smythe says:

            Both “theory” and “explanation” are words with a lot of definitions, so this may be semantics, but I’m not sure I agree. They are sometimes both the same thing, but it seems like a Venn diagram wouldn’t be a total overlap.

            What I’m trying to say is that I’m not bothered by an inability to a priori predict something emergent based on our understanding of the parts. I see that just as our limitation.

            I would be hugely bothered by an emergent phenomenon that could not be understood by understanding its parts (even if only in retrospect). The explanation was there; we just didn’t see it.

          • I’m not definite on this, and I fully realize we’re talking about word usage here, which is a cultural thing that often isn’t logical. But can you think of an explanation that is also not a theory? One that doesn’t make predictions of some kind or another? Even if only imprecise or probabilistic ones?

            It seems to me that even an explanation of a historical event amounts to a causal framework for how that event came to pass, and predicts what future evidence might reveal about it.

            “I would be hugely bothered by an emergent phenomenon that could not be understood by understanding its parts”

            I agree, although my default assumption would be that there’s something about those underlying constituents or their interactions we just don’t understand yet.

          • Wyrd Smythe says:

            Yeah, I would tend to assume the same. Within the context of physicalism, I just don’t buy “strong” emergence and feel it’s a made-up idea. There is emergence. Full stop.

            “My tire went flat because I ran over that nail I can see sticking out of it,” is an explanation I wouldn’t call a theory. (Or, “I’m out because I got three strikes.”)

            Does the QFT behind the standard model really explain anything? We don’t even know how to interpret the theory.

            But both those cases are arguable depending on what one calls a theory. High-energy physicists, for example, tend to think of a “theory” only as a mathematical framework (without the math, it doesn’t rise to the level of “theory”).

            As you’ve learned, I tend to be restrictive in my definitions of words, which means I also tend to see their differences more than others might. We’ve run into that a couple of times now. Just the way we see it.

          • Mike you said:

            “But can you think of an explanation that is also not a theory? One that doesn’t make predictions of some kind or another? Even if only imprecise or probabilistic ones?“

            Here’s one. “God did it!”

          • Yeah, I still see that as a theory, just one that doesn’t make any testable predictions, and is therefore useless, except possibly for the emotional comfort of believers.

        • In my mind, there are two interpretations. The first is that we have perfect understandings and measurements of the motions of the constituent atoms. If so, then only strong emergence could account for our inability to predict water. The second is that either we don’t have a perfect understanding of intermolecular forces, or our measurements have uncertainty in them, and the variances in those uncertainties add up to the unpredictable behavior.

          Since there’s no such thing as a measurement with infinite precision, I can’t see how we can favor the first interpretation, but many do.

          • James Cross says:

            It is interesting to have a discussion about emergence using one of the simpler examples. If an example of strong emergence exists for water, it would be unlikely it doesn’t exist for something as complex as consciousness.

            If you demand perfect understandings and measurements of everything, then you can always rule out strong emergence, but that would require you to believe that perfect understandings might be possible, wouldn’t it? If perfect understandings aren’t possible, then there is something that causes unpredictable properties to manifest as systems get larger. Whether we call it “strong emergence” or just ignorance of system components, it might amount to the same thing.

    • Hello James,
      I’ve been monitoring your discussion with interest. Just moments after my first post here, where I mentioned how I consider it problematic to mark strong emergence by a failure of computer simulation, you provided an article championing that perspective. Thus they called water “strongly emergent” from its components. You went on to observe that from this perspective it might not be anything more tangible than ignorance which causes this sort of thing. And what’s useful about ontological claims given the ignorance of an observer? Nothing of course. If water strongly emerges from hydrogen and oxygen then obviously consciousness strongly emerges from the brain. So what?

      Instead consider getting rid of that computer simulation stipulation, and so water “weakly emerges” from constituent chemicals given presumed causal dynamics. Here strong emergence would concern something beyond the causal properties of nature, such as godly influences. The only documented case of a modern scientific community backing the existence of such strong emergence that I know of (and from the physics community no less!), seems to concern popular interpretations of Heisenberg’s Uncertainty Principle. Given my own strong naturalism I don’t like that one either!

  5. Steve Ruis says:

    Re “While I do think emergence is an important concept, I usually resist it as an explanation by itself for anything, particularly something like consciousness. Certainly what we call consciousness is emergent from neural activity, but simply saying that doesn’t seem like an interesting or useful explanation.”

    I think we may just be “in process” on much of this (which is why I advocate that it is too early to draw conclusions). Certainly when someone postulates strong emergence for consciousness (I am somewhat of a fan), it then behooves them to explain “how” this happens. The postulation is easy enough; finding a mechanism is much, much harder and we are not there yet.

    Love your posts! Very thought provoking!


  6. milesmutka says:

    I’m sorry to say that I don’t get the distinction between strong and weak emergence. The two-paragraph quote here is not making it any clearer, talking very loosely about ‘systems’. For me any system worth studying is emergent in the original sense of the concept; an ’emergent property’ is just another word for ‘systemic property’, distinct from atomic properties.

    Wired magazine recently had a piece on Karl Friston, the man behind the Free Energy Principle. The article was very approachable, and sort of a meta-description of the theory and the buzz surrounding it.

    • On the distinction, I think weak emergence is about our understanding of a system, and how we might have to switch theories as we scale up or down to best understand what is happening, while strong emergence asserts that there’s something objective about the overall phenomenon that exceeds what is provided by its constituents and their interactions.

      Thanks for the Wired magazine reference! I’ll look it up.

      • paultorek says:

        What you mean is right. What you say is wrong. Let’s fix it.

        Weak emergence can be identified by our switching theories as we scale up or down to best understand what is happening. Without denying that our small-scale theory still applies, we switch to the large-scale theory because it answers our questions more felicitously. There’s also an objective truth here, which is that the large-scale theory’s referents are a subset of the small-scale one’s.

        Strong emergence can be identified by the need to make exceptions to the general small-scale theory when the large scale structure applies. For example, if we had to say “two electrons repel each other with a force given by [formula] *except* when they are embedded in conscious brains, in which case [alternate rules / no rules at all]” – then we’d have strong emergence.

  7. Wyrd Smythe says:

    I haven’t given emergence a great deal of thought, but to the extent I have, I tend to be a reductionist, so I believe emergent behavior can always be explained through an understanding of its parts. That bit called “strong emergence” sounds like magic.

  8. One thing that I like about this discussion is that no one has defined weak emergence as something that’s amenable to computer simulation, with strong emergence as something that’s not. Wikipedia uses this definition, and it bugs the shit out of me. It’s as if chemical dynamics produce what they produce (such as the function of life) by virtue of their potential for computer simulation. This not only doesn’t explain what’s effectively meant by the strong/weak distinction, but it’s quite anthropocentric. It’s like the wonders of nature must be referenced against their ability to be reflected by our comparatively pathetic machines! So let’s abandon that association, not to mention help fix Wikipedia there.

    What we mean by “strong emergence”, as Wyrd pointed out just above, is magic. This is to say a void in causality, or where something exists that nature didn’t produce. Dualism applies, as do all proposed supernatural dynamics. In this domain faith is required rather than reason.

    (Here the naturalist must remain humble, I think, both for epistemic responsibility as well as for rhetorical effectiveness. So try not to say things like “Strong emergence is ill conceived.” Magic may indeed be real, so by implying otherwise the cause of naturalism is harmed. Just say that figuring out how non-causal reality functions would then be pointless.)

    There is but one example of strong emergence that’s accepted in science today, as I understand it. Modern physicists generally believe that the uncertainty associated with Heisenberg’s magnificent principle does not reflect a weakly emergent property of nature given human ignorance. Instead they interpret this as a void in causality itself, or strong emergence.

    As Mike and I have discussed, in truth Einstein’s naturalistic ontology probably goaded Bohr and Heisenberg away from merely epistemological statements. Regardless they went on to kick his ass with it. I wouldn’t mind this so much if modern physicists of the “Sorry Einstein, but God does play dice” persuasion would acknowledge that their metaphysics supports supernaturalism in this regard.

    • I totally agree that whether we can currently run a simulation of it isn’t the standard for strong vs weak emergence. I suspect that’s not what the Wikipedia author literally meant. He was probably only using it as an example.

      I’m agnostic on the ultimate ontology of quantum physics, and I don’t have strong feelings about the precise definition of naturalism, but I find yours a bit too restrictive. If we observe phenomena that can’t be explained with causality, but do appear to obey regularities (as quantum mechanics appears to do), then I personally don’t think naturalism has been abandoned.

      For me to say someone has done that, they’d have to resort to non-material agents (gods, spirits, etc) as causal factors. Certainly some interpretations of QM do that by positing that our conscious observation is what causes the wave function collapse, but most physicists stay well away from those interpretations.

      I remain open to the possibility that there’s a deterministic explanation for QM, but anyone who does that has to be willing to live with the potential costs: non-locality (which some equate with supernaturalism), many worlds, reverse time causality, etc. QM will not allow us to come away unscathed with traditional metaphysical notions about how reality works.

      • Mike,
        I think it can usefully be said that there is such a thing as “clean epistemology”, as well as “dirty epistemology”. By “clean” I mean that a given statement does not leave room for error, which is to say that solid grounding exists for what’s said. “I think therefore I am” would be an example. Consider its profundity — my existence is the only aspect of reality that I can know for certain is real. All else may be false. It’s a solid position from which to build.

        Conversely in “dirty” epistemology various unresolved contingencies are permitted to fester. It’s like step two of that hilarious Sam Harris cartoon from the Neuroskeptic article that you linked to for this post — “…Then a miracle occurs…”. One reason that Einstein was able to go so far beyond everyone else, I think, is because he kept his epistemology clean.

        There is a clean and so epistemically responsible way to consider the uncertainty that we perceive Heisenberg’s principle to describe. It’s that we’re ignorant of what’s going on, though perfectly determined events must occur in the end under our non-supernatural presumption of causality. Einstein has been persecuted for holding such a clean perspective.

        I’m quite sure that with the experimental verification of quantum entanglement non-locality, Einstein would have ultimately been happy to adjust his associated understandings. No need to invoke anything supernatural here! Are there more dimensions to existence which facilitate such entanglement? Maybe. But certainly the associated complexity suggests that we ought to remain epistemically responsible rather than make ontological claims about the existence of “natural uncertainty”, and so resort to dirty epistemology.

        Then regarding the interpretation of QM known as “many worlds”, Einstein would have surely labeled this pure science fiction. I consider it to be the absolute worst display of what dirty epistemology can lead an otherwise sensible modern physicist into. Sure, the “conscious observation” explanation for wave function collapse does smell bad, but at least it’s simple. “Many worlds” is just plain ridiculous!

        Still, I don’t consider this sort of thing to be the fault of modern physicists. It’s not technically their job to found the institution of science — that’s the job of philosophers. Science remains in need of accepted principles of metaphysics, epistemology, and axiology, and therefore these sorts of problems should be expected today. Given this void it makes perfect sense to me that the outer limits of our hard sciences would display the same softness which is so standard in our mental and behavioral sciences.

        One person it would be nice to hear from about this is Stephen Wysong. He surely must have some thoughts about quantum strangeness given his affinity for Einstein.

        Calling Stephen Wysong….

        • Eric,
          On “clean epistemology”, I think we have to be careful about prejudging what we will find. In the end, all we have are conscious observations and our theories about why we have those observations, theories that hopefully predict new ones. It’s nice if those theories are aesthetically pleasing and meet our prior sense of propriety, but that sense is cultural and has changed over time. For example, it used to insist that humans were the center of creation and separate from the animal kingdom, but Copernicus and Darwin demolished those precepts.

          On Einstein, I wouldn’t call him persecuted, at least not by the scientific community. (The fact that he was forced to flee actual ethnic persecution in Germany is a different matter.) Given that he saw entanglement as a reductio ad absurdum, I’m not sure what he would have thought of Wheeler and the subsequent experimental results.

          I actually suspect he might have been attracted to the many worlds interpretation for its deterministic nature. Remember, this is the man who accepted that time and space were relative. He wasn’t afraid of staggeringly counter-intuitive concepts. But he couldn’t accept quantum indeterminism. An interpretation that restored determinism in full accordance with relativity might have been more acceptable than the alternative. Although he might have preferred the more relational versions to the mainstream MWI.

      • “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”

        Albert Einstein from “On the Method of Theoretical Physics,” the Herbert Spencer Lecture, Oxford, June 10, 1933. This sentence may be the origin of the much quoted sentence that “everything should be as simple as possible, but not simpler,” and its variants. I can’t imagine anything more antithetical to this philosophy than a MWI.

        • The thing to understand about the MWI is that it has one central assumption: the wave function never collapses; the Schrödinger equation continues uninterrupted. It only appears to us that it collapses because we become part of the spreading superposition. It’s the implications of the interpretation that are profligate, not the central assumption.

          Ultimately we’ll never know what Einstein might have thought. (Pity Everett couldn’t have published a few years earlier.) Any suppositions about it are untestable. Of course, the quantum interpretations themselves are currently untestable, which makes this multilayered speculation 🙂
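
          That single assumption can be illustrated with a toy two-state system (a discrete stand-in for Schrödinger evolution, with arbitrary illustrative numbers, not a model of any real experiment): apply the same small unitary step over and over and the total probability stays exactly 1; the state spreads into superposition, but nothing ever collapses it.

```python
import cmath
import math

def apply_u(u, state):
    """Multiply a 2x2 complex matrix into a 2-component state vector."""
    return [u[0][0] * state[0] + u[0][1] * state[1],
            u[1][0] * state[0] + u[1][1] * state[1]]

# One small unitary time step: rotation angle theta with a relative phase phi.
theta, phi = 0.1, 0.3
u = [[math.cos(theta), -math.sin(theta) * cmath.exp(1j * phi)],
     [math.sin(theta) * cmath.exp(-1j * phi), math.cos(theta)]]

state = [1 + 0j, 0 + 0j]   # start in a definite basis state
for _ in range(1000):      # evolve repeatedly; no collapse rule is ever applied
    state = apply_u(u, state)

total = sum(abs(a) ** 2 for a in state)   # total probability, conserved by unitarity
```

          A “collapse” would be an extra, non-unitary rule bolted onto this evolution from outside, which is exactly the step the MWI declines to take.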

      • Mike,
        I have a very high regard for Einstein and low regard for many worlds interpretations of quantum mechanics. Therefore there really isn’t much of a question here for me. I’m not entirely sure if you’re being contrarian with me now as encouragement for me to think through these issues further, or believe that I should think less of Einstein and/or more of MWI. And in a psychological sense me bringing this up naturally sets up a response of “Oh no, I’m not being contrarian. You do seem to think too much of Einstein and too little of MWI.” So it goes.

        • Eric,
          I think you’re overthinking the conversation.

          I do sometimes ask questions to prompt people to think things through more (or reveal that they already have), but here I was just responding to each of your opinions with my own.
