Information, computation, and reality

In his book Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers has a discussion on information and reality. He identifies different types of information: semantic, structural, and symbolic.

Semantic information is what we colloquially think of as information: the patterns that tell someone or something about reality. A map of a city is semantic information, at least for an observer who can see it and understand the language of the labels. On the other hand, a sheet of paper with random patterns isn’t semantic information.

But to a physicist, the random patterns are still information. I’ve often called these kinds of patterns physical information to distinguish them from the semantic variety. I like this name because it emphasizes that all information is physical. In this view, semantic information is a subset of physical information.

Chalmers introduces the concept of structural information, which I initially thought was equivalent to physical information. But physical information, as I conceive it, is agnostic on the ultimate ontology of its patterns. Chalmers’ conception of structural information is specifically patterns of bits.

Symbolic information is structural information that encodes semantic information. In a way, it could be seen as the relation between structural and semantic information.

Chalmers’ concept of structural information is related to an ontological view he calls it-from-bit, the idea that reality, at its base level, is digital: if we go down far enough, perhaps to the Planck scale or lower, reality is binary in nature. So elementary particles, as well as spacetime, are ultimately composed of bits.

(Chalmers notes that this is somewhat distinct from John Wheeler’s original idea of it-from-bit, which involves a participatory view of reality being constructed based on what yes-no questions we ask it. Digital physics keeps the yes/no part, but ditches the semi-idealist aspects.)

Chalmers admits that every bit we currently know of is actually a bit-from-it arrangement. The bits in commercial computers are transistors with their own components. Even in quantum computers, the qubits are typically the spin of particles, which have a lot of other characteristics. He also admits that the idea reality is digital isn’t particularly favored in physics. It isn’t indicated in quantum field theory, or even in more speculative theories like loop quantum gravity or string theory. The idea has been explored by some physicists, though, in some cases in more of an it-from-qubit form.

But Chalmers argues that for his purposes in considering the simulation hypothesis, it’s only relevant that it hasn’t been ruled out by fundamental theories. (This feels a bit weak to me, since lots of things haven’t been ruled out that we have little reason to think are reality.)

In the later parts of the book, Chalmers ties this in with computation in physical systems. He notes that computation isn’t just mathematical, it’s physical, requiring the right kind of causal structure, including counterfactual constraints. It isn’t enough that the system has a sequence of states; that sequence needs to be just one of many that could have happened had the initial conditions been different. Put another way, there had to have been a necessity to that sequence.

Chalmers sees computers as causation machines. And that causal structure, he notes, isn’t cheap. Which isn’t to say it can’t be implemented in different substrates, just that the substrate can’t be something arbitrary, like a rock. This is an argument against the idea of unlimited pancomputationalism, of a type that makes any talk of computation trivial, but not of the more limited variety that leaves the idea of a simulation viable.
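
The counterfactual requirement can be made concrete with a toy check (my own sketch, not from the book): a mapping counts as implementing a computation only if, for every input that could have occurred, it would have produced the output the computation requires. A “rock” whose state never varies with its inputs fails the test.

```python
from itertools import product

def implements(transition, spec, inputs):
    """A mapping implements `spec` only if, for every possible input --
    including the ones that never actually occurred -- it would have
    produced the output `spec` requires."""
    return all(transition(i) == spec(i) for i in inputs)

# The computation to implement: a two-bit AND.
spec = lambda bits: bits[0] and bits[1]

# A logic gate honors the counterfactuals: vary the input, the output varies.
gate = lambda bits: bits[0] and bits[1]

# A "rock": its state is the same no matter what we label as its input.
rock = lambda bits: 0

inputs = list(product([0, 1], repeat=2))
print(implements(gate, spec, inputs))  # True
print(implements(rock, spec, inputs))  # False
```

The rock happens to match the AND specification on three of the four possible inputs, but the counterfactual input (1, 1) exposes it, which is roughly why unlimited pancomputationalism doesn’t get off the ground.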

Chalmers argues that this counterfactual requirement also applies to scientific theories.

Structural realism (which I’ve discussed before) is the view that what is real in reliable scientific theories about fundamental reality are the relational structures described by the mathematics of those theories. This sits on the epistemically cautious side of scientific realism, which argues that reliable theories represent reality, as opposed to anti-realism or instrumentalism, which argue that theories are only prediction mechanisms.

Anti-realists often point out that reliable scientific theories are often replaced with newer, more reliable theories, sometimes with radically different views of reality. Therefore, the pessimistic induction is that we can’t count on theories representing reality. Realists point out that each successive theory gets us to a closer approximation of reality, and that it would be a miracle if a theory made reliable predictions without approximating reality in some fashion.

Structural realism seems like a compromise between these positions, albeit one that slightly favors realism. (It also sits well with my own suspicion that the distinction between reliably predicting observations and representing reality may have little if any meaning.)

A criticism of structural realism is that it seems to be saying that the mathematics itself is reality. But Chalmers argues that these are causal structures. Similar to the fact that computation must happen in a causal system, these structures must themselves have a causal aspect, one involving counterfactuals. In his view, this makes the pure mathematics view untenable. (I’m not sure this stands up to scrutiny since causality is itself arguably a relation, not that I’m in the math-is-fundamental camp myself.)

Structural realism has two sub-groups: OSR (ontic structural realism) and ESR (epistemic structural realism). OSR makes the stronger statement that reality just is these relational structures. ESR makes the weaker statement that the relational structures are all we can know about reality.

Chalmers muses that OSR could be viewed as a pure and uncompromising it-from-structure view. ESR, on the other hand, could be seen as an it-from-structure-from-it view, or more speculatively an it-from-structure-from-bit scenario. In other words, he sees ESR as more compatible with the possibility of a digital reality, and the simulation hypothesis. Although it could be that OSR is true for base reality, while ESR holds for simulated ones. Which might leave us with a more it-from-bit-from-it view.

My own understanding of OSR is that it only excludes intrinsic non-relational properties. In that sense, it seems compatible with everything Chalmers describes ESR as being compatible with. But this is complex stuff and I may well be missing something.

But Chalmers’ overall point is that if these structures can be instantiated in a simulation, then for all intents and purposes, they’re as real as the structures outside of that simulation.

Okay, I think that’s enough for this post. What do you think of Chalmers’ reasoning? Are there reasons he misses to dismiss the idea of digital physics? Or anything that makes it more likely? Is merely instantiating the structures of structural realism sufficient for reality? Or is something else required?


54 thoughts on “Information, computation, and reality”

  1. Maybe this can be related to the ideas in: Indeterminism in Physics, Classical Chaos and Bohmian Mechanics: Are Real Numbers Really Real? by Nicolas Gisin. (Can’t remember where I came across this paper, maybe some of your writing already referred to it!)


    1. I don’t think I have, although it sounds familiar. Maybe you’ve cited it before. Is this the one?
      https://arxiv.org/abs/1803.06824

      Based on the abstract, he seems to be arguing that there is a fundamental discreteness to reality, which would render real numbers not real. But his formulation of it is indeterministic. That seems different from what Chalmers is exploring. On the other hand, Chalmers is basically silent on how the bits in the it-from-bit framework affect each other. Maybe there’s room in his view for those effects to have some level of indeterminacy in them.

      Have you seen this Quanta piece? Someone shared it with me a while back. It gives a broader overview of Gisin’s work.
      https://www.quantamagazine.org/does-time-really-flow-new-clues-come-from-a-century-old-approach-to-math-20200407/

      I’m not sure I buy the logic for indeterminacy, at least as expressed in the Quanta overview. The argument seems to forget that the causal forces for anything happening today have a light cone 92 billion light years wide, which seems like plenty of information to determine those events, even if there is a limit to precision in reality. But as always, I might be missing something.


  2. This comment may be somewhat rambling, because you have covered a lot of territory.

    I mostly disagree with Chalmers. I won’t say that he is wrong, because we do not have an agreed standard that we can use to judge right or wrong (true or false). I have changed my own view of information over the years. I’m a pragmatist, so I go with what works for me.

    From my point of view, information is abstract. Basically, it is a useful fiction, so it isn’t real. We do use physical representations of that abstract information, and I take “physical information” to refer to those physical representations. I have no problems with the way physicists discuss physical information.

    For me, the idea that reality is composed of information — that’s off the table. If information isn’t real, then we cannot use it as the basis for reality. I am increasingly inclined to go with the Kantian view, that the world in itself is unknowable. We create the world that we describe. We create it out of our experience. But our experience is not given to us. We create our experience by means of our behavior, our interactions with the world. That makes reality real enough for me. This is not Berkeley’s idealism.

    No, I don’t think the world is digital. But if the world in itself is unknowable, then I suppose that whether it is digital is also unknowable.

    What is clear, is that we attempt to digitize the world. In traditional terms we categorize the world (divide it up into parts). But I think people are confused by the multiple meanings of “categorize”. So, these days, I prefer to say that we thingify the world. That is to say, we divide it up into things. The idea that the world is a collection of discrete parts comes from this thingifying of the world.

    From this way of looking at it, the brain is involved in thingifying the world. The brain is not doing information processing. Rather, it is doing information construction. It is dividing the world into things and keeping us informed about those things.

    I am not a structural realist. Yes, we experience a structured reality. But the structure comes from us. The Ptolemaic astronomers saw cycles and epicycles, while Kepler saw confocal ellipses. These are different ways of structuring the same aspects of reality (the solar system), so we see that the structure is not fixed by reality but depends on the observer.

    I do not see computation as causal. Yes, our physical computers are causal machines. But they are really electromagnetic appliances. However, I see computation itself as abstract and thus non-causal.


    1. Appreciate your thoughts Neil.

      One question I’d have on the information-is-abstract point: is there ever information that isn’t physically instantiated? If so, then are we talking about platonism? (I think you told me you’re not a platonist, but just checking.) If it is always instantiated somewhere, such as in our brain, then in what sense would you keep it in the abstract category, or call it a useful fiction?

      In principle, I can see the Kantian view. We never know reality in and of itself, only the way it manifests to us in phenomena. All we can know of noumena is the theories, the models of it we form based on phenomena. But it seems like we can develop models that are able to make increasingly accurate predictions of that phenomena. My take on that is it’s like we can know reality. But how like something does something else have to be that we just decide it is that thing? In other words, how accurate do the predictions of our models need to be before we decide that, for pragmatic purposes, we at least have an approximate knowledge of the noumena?

      I do agree that we are constantly categorizing phenomena. And often that trips us up, because there are always edge cases that expose the fragility of our little categories. But I think evolution made us pattern recognizing machines because it’s adaptive for us to be so, so it seems like we’re doing what we’re supposed to be doing?

      If computation isn’t causal, how would you say a computational system moves from one state to the other? Or would you say something like the electromagnetic appliance does that and computation goes along for the ride?


      1. is there ever information that isn’t physically instantiated?

        I think that misses the point. When we talk about information, we are not talking about the physical instantiation. Whether I read your post on an LCD screen, or printed it out and read the printed material, I was reading the same information but different instantiations.

        how accurate do the predictions of our models need to be before we decide that, for pragmatic purposes, we at least have an approximate knowledge of the noumena?

        I’m not thinking of noumena as mysterious. We don’t have to work out what the noumena are like. We add to them. The noumena are just undifferentiated stuff, so not very interesting. We add to that by making distinctions. But the distinctions that we make depend on our biology and partly on our culture. So they are something that we add. While we do categorize phenomena, we are mostly categorizing reality and thereby creating phenomena.

        On computation, we need to distinguish between computational states and physical states. Those are not the same. If a fleck of dust lands on my computer, that changes the physical state but normally does not affect the computational state. What we count as a computational state arises from our theoretical ideas about computation, which is what makes computational states abstract.
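
        A small sketch of that many-to-one mapping, with a made-up voltage threshold, shows how distinct physical states collapse into one computational state:

```python
def logical_bit(voltage):
    """Map a continuous physical state to a discrete computational state.
    The 1.5 V threshold is invented for illustration."""
    return 1 if voltage > 1.5 else 0

clean = 3.30   # nominal "high" physical state
dusty = 3.28   # the same node, slightly perturbed by a fleck of dust

# Different physical states, same computational state.
print(logical_bit(clean) == logical_bit(dusty))  # True
```

        The dust changes the physical state (the exact voltage) without touching the computational state, which exists only relative to the thresholding convention.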


        1. Neil, I ask some question here, but feel free to view them as rhetorical if you wish. I’m more asking to spur thinking, although I’d be interested in any answer you might feel like providing.

          In terms of the information of this post, suppose every physical instantiation of this post was erased, including the copy with all its edited versions in my blog database and backups. Would the abstract information still exist?

          I do agree that a lot of reality is what we create, including the aspects we create due to our species or culture. (The culture part is far more powerful than many people realize.) This is actually a subject Chalmers touches on in the book, that I might eventually post about.

          On the distinction between computational and physical states, it seems like the same distinction could be made between any functional states and physical ones. And yet, similar to the information question above, if we remove the physical states, is there anything left of the computational or functional ones?


          1. In terms of the information of this post, suppose every physical instantiation of this post was erased, including the copy with all its edited versions in my blog database and backups. Would the abstract information still exist?

            I’ll respond to this one, because I think that might clarify my point. I won’t comment on the issue about computation, because it is sufficiently similar.

            “Would the abstract information still exist?”

            I’m not a Platonist, so I don’t believe that abstract information exists at all, except in a very technical meaning of “exist”.

            Information is abstract because the way that we talk about it requires it to be abstract.

            Data in my computer is in the form of electrical charges. Data on my disk drive (rotating rust disk) is in the form of magnetic polarization. We talk about copying data from the computer memory to disk, but we most certainly are not copying the instantiation. What we are copying is something abstract that is represented in different ways in the different instantiations.

            If we want “information” to refer to the instantiation, then we have to change how we talk about it. We could no longer talk of copying information from memory to disk. We could no longer talk of sending information down an optical fiber. In order to actually talk about instantiation, the language that we use would have to become far more cumbersome.


  3. As you may know, information plays a core role in my understanding of Consc., so I’ve been thinking about this A LOT. So first, your questions:

    I’m inclined to agree with everything Chalmers says regarding simulations. Reality is just the pattern of the way things interact, whether it’s the virtual with the virtual, or even the real with the virtual, although that does not make a virtual cat a real cat. It’s just a real virtual cat to someone (or something) outside the simulation.

    However, (without having read it yet) I don’t think his take on digital physics is useful. You can get all the same information processing with analog. Digital is only useful when you want error correction.

    So now information. I think there is a better way to understand it and how it relates to physics. The core of information is correlation, aka mutual information. Every physical process CAUSES correlation. The best way to diagram it is like this:
    Input (A, B, C, …) -> [system S] -> Output (P, Q, R, …),
    saying S causes the output when presented with the input, with some probability above chance.
    (This is largely from Deutsch’s Constructor Theory)

    Let’s consider a simple example:
    A -> [S] -> O

    If this is all we have, O is correlated with A and S. [Note: informationally, this is an AND operation.] If we measure an O, that means we would have measured an A and an S, back at time 0. (Note the counterfactual nature.) I’m guessing this is the structural information Chalmers refers to.

    If we expand our example, we may have these possibilities:
    A -> [S] -> O
    B -> [S] -> O
    C -> [S] -> O
    Alternatively:
    A or B or C -> [S] -> O

    O then has the correlation S AND (A OR B OR C) (and the process is the obvious information op.)
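
    One way to check that correlation claim is to enumerate the possible worlds (this toy world model is just my own rendering of the diagram above, not anything from constructor theory):

```python
from itertools import product

# Possible worlds: is system S present, and which input (if any) arrived at t0?
worlds = list(product([False, True], [None, 'A', 'B', 'C']))

def produces_O(s_present, inp):
    # O appears at t1 iff S is there and one of A/B/C arrived at t0.
    return s_present and inp is not None

# In every world where O is observed, S was present and an input occurred:
# observing O licenses the inference S AND (A OR B OR C).
o_worlds = [(s, inp) for s, inp in worlds if produces_O(s, inp)]
print(all(s and inp in ('A', 'B', 'C') for s, inp in o_worlds))  # True
```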

    This gives an explanation of “It from Bit”. The relevant yes/no questions are
    (
    “Do we measure A at t0?” Or
    “Do we measure B at t0?” Or
    “Do we measure C at t0?”
    )
    And
    “Do we measure O at t1?”

    If the answer is yes, then we have an It, and we call it S.
    This is the structural explanation of S.

    Now semantics. Semantics is about interpretation, which requires an extra ingredient: purpose (or goal). Let’s go back to
    A -> [S] -> O
    O has the correlation A AND S. But let’s say we only care about A, because we created S for the purpose of generating O when it sees A. Then we only care that O is correlated with A, and this process can be interpreted as a COPY, so O means “A”. Alternatively, a different system may care whether there is an S there at all, and assuming S is the only thing that can produce an O, this second system interprets O as meaning “S” (also a COPY).

    Note: what is copied in a COPY operation? Answer: correlation. So in the above process, O has the same correlation with A (plus an extra correlation with S, which can be ignored, or not). So if A is the output of a cat recognition program/network, A is correlated with a real pattern of cats, and now so is O. If O is “running away”, S effectively correlates running away with “cats”. A different system S2 may correlate the same input with “let’s go pet that”. If both of these are in the same super-system, there will presumably be a process to decide which one wins.

    Does this make sense? Does it change anything?

    *

    [Hope you don’t mind my writing a post in the comments of your post, complete with questions at the end. 🙂]


    1. I’m replying to JamesofSeattle because he requested my thoughts.

      I agree with you that, with respect to information processing, the use of digital has to do with error correction. But I don’t see the brain as doing information processing, so this is not a major point for me.

      Yes, we can get information from correlations. But we do not find correlations in the real world. Rather, we find correlations in the data. And thus I see data (i.e. information) as prior to correlation. Our most important information is information about things. So we must thingify the world before we can have that information. This is why I see thingifying the world as the primary task of the neural system. Hebbian learning in neural systems can be seen as a kind of calibration of this operation of thingifying the world. The calibration is partly a matter of cross-calibrating with others in our community. But the bulk of the calibration is a kind of pragmatic tuning or tweaking to improve how well it works. We are not logical/truth machines. We are pragmatic biological organisms.


    2. Thanks James. You often leave comments that work my brain. I think I agree with everything you said. However, I have a question. What, fundamentally, is a correlation? How do we recognize it? Let’s say we’d never seen or heard of the relevant [s]. How would we recognize that the correlation was a correlation, at least in cases other than random coincidence?

      It seems like to recognize a correlation requires a comparison operation, and that operation needs to be familiar with [s] in order to do so, at least in non-trivial cases. But then understanding the transition from the inputs to the outputs of [s] requires an understanding of the causal dynamics. I think this is why the causal account seems more fundamental than the correlation one. Although I don’t think it invalidates the correlation one at all.

      [No worries on the long comment. The only danger with them is it’s very easy for me to sometimes miss important points, so don’t be afraid to point it out if I did.]


      1. “What, fundamentally, is a correlation?” That’s a tough one. I think I’d say that correlations are patterns which are the results of processes being repeatable, i.e., Nature having laws. This allows counterfactuals: if this happens, we can expect that happened, and this other will happen later. We recognize it by repeatedly seeing or generating processes and remembering results. “What happens if I do this? What happens if I do something just a little different?”

        To recognize a correlation doesn’t require we be “familiar” with S, but it may require we start with a theory of S. Think of the Higgs boson. It’s hard to say we were familiar with it when we had never detected it before. But we had reasons to think what the inputs and outputs would be, and eventually we were able to generate the inputs, and then we saw the expected outputs. If we hadn’t, we’d have to change our theory of the S. Similarly, sometimes we simply notice repeating patterns, which requires memory, and so we speculate/theorize an S. Sometimes we’re wrong. (Ach! Gremlins!). And we don’t necessarily have to make an abstract theory. Sometimes our hardware does that for us. (See Pavlov.)

        And yes, comparing different instances of the process, via memory, is how we learn about causality. But I think causality is inseparable from correlation, so not sure it could be more fundamental.

        Now we can talk about composite systems of s’s within S, in which case you can talk about causality which is more fundamental than the correlation caused by S. Then you’re talking about the correlations caused by those little s’s.

        Make sense?

        *


        1. Hmmm. I would tend to think that Peter Higgs was familiar with the Higgs mechanism back in the 1960s when he first deduced it. Of course, it was only a model back then, albeit one that appeared to be logically necessary to make the rest of the standard model work. But definitely no one had data on that model until the LHC detected the boson in 2012.

          So on the distinction between s and S, would that be one between the fact that most physical theories are reversible, and so cause and effect can be flipped, at least until we get complex enough systems where statistical entropy starts to kick in?

          I’m not sure what to call the reversible relations that manifest as cause and effect in complex systems, when they’re in simpler ones. I can see why people object to the word “cause” there, but it does seem like there’s a necessity to that relationship that’s missing if we just stick with “correlation”.


          1. The problem, I think, is that correlation is a broader concept than cause, even in time reversible theories. It’s why simple correlation does not establish causation. So using the word “correlation” for these relationships seems underdetermined. But in a time reversible theory, cause and effect can be flipped, so some people do object to the word “cause”. Personally, I’m still inclined to use the word “cause” until someone provides something better.


          2. Still not getting your point. I’m saying:
            1. Every process generates correlation. (I prefer to say causes, for reasons)
            2. Every process has an info theory description using the operations COPY, NOT, AND/OR, and these operations are operations on correlations.
            3. The info theory description can be modified by an interpretation for a purpose.

            Are any of these problematic?

            *


  4. “Chalmers’ overall point is that if these structures can be instantiated in a simulation, then for all intents in purposes, they’re as real as the structures outside of that simulation.”

    To structures inside the simulation, sure, but that’s a given. To structures outside, their reality is a very different matter. As the saying goes, ‘simulated rain doesn’t make anyone wet.’ One problem with the “reality” of simulations, as any game player has experienced, is that the software can have bugs that create unreal behavior, since all causality is due to the software rules, which need to be perfect.

    Chalmers seems off his game here; I usually find his views better grounded. That review, Reality Minus, seemed on-point and certainly killed any interest I had in the book.


    1. Definitely whether we’re in the simulation or outside of it makes a big difference to our perspective. Inside of it, the rain does make us wet, but for someone outside, it’s a simulated wetness. The idea is that “wetness”, for anyone inside the simulation, just means what we outside would mean by “simulated wetness”.

      I agree about the bugs. It’s another reason to doubt the plausibility of a perfect simulation. I do think Chalmers accepts that proposition far too easily. Although as I noted the other day, there are other scenarios, but based on what we currently know, not ones, I think, that are parsimonious.

      Just skimmed parts of that review. From what I glimpse of the author’s views, yeah, I can see how reading this type of book was probably an exercise in frustration for him, and would be for anyone with a similar outlook.


    2. So much I agree with in the review. Thanks for sharing.

      “As mind has never been discovered anywhere except in organisms, where it appears to be associated with brains and nervous systems and nerve tissues and organic cells, what precisely justifies the belief that mental activity resides entirely in purely electrical activity and in the relations of the circuitry that permit it? Would it not make more sense to assume that the brain’s capacity for mentality has some causal connection to the cells and tissues and enzymes, synapses and axons and myelin, composing it, as well as to its uniquely organic and ontogenic history of continuous growth, development, catalysis, regeneration, and neural routing, as well as to its complex relations with the complete neurology, biology, organs, and history of the body it belongs to?”

      That’s my view almost exactly. Mind/consciousness has a language but it is a language understood by biological tissue. It works in feedback with living material. Nothing could be truly isomorphic to it except other biological material because it is only the complex carbon+ structures that understand it.


      1. Thanks, but credit where credit is due, it was first linked by stolzyblog on Mike’s first post about the Chalmers book. I thought it was a pretty good analysis; definitely worth sharing.

        As we’ve discussed before, I generally agree with your view except I think it’s possible that a sufficiently isomorphic — but not necessarily organic — brain might also produce consciousness.


        1. Yeah, actually I was a little puzzled why you liked the review since it seemed to contradict your “isomorphic” position as I understood it.

          Any thoughts on how isomorphic it would need to be, or on what sort of dimension its structure would need to be organized? I’m not totally satisfied with that term “dimension” but I’m trying to get at how the isomorphism would manifest. For example, a larger rectangle could be isomorphic to a smaller one if the sides were proportionate. However, a square could be isomorphic to a rectangle if we only consider angles and corners. At some point, we have to identify how the artificial brain would be similar to an actual one and why that similarity would produce consciousness: numbers of interconnected nodes, number of layers, ability to switch connections dynamically, presence of upstream and downstream processing, etc.


          1. Well, I don’t have to agree with everything said to appreciate a persuasive analysis. An argument is a set of multiple vectors, and any judgement I make on that argument is the sum of those vectors. In this case I find strong agreement with most of what the author said (nearly all of it, really).

            Further, I don’t particularly disagree that neuron replacement (per the “fading qualia” argument) would probably result in fading consciousness. I agree with the author that “the cells and tissues and enzymes, synapses and axons and myelin” almost certainly matter. I think the necessary replacement level is far below neurons, probably below synapses down to some cellular or chemical level. Imagine instead replacing carbon atoms with silicon atoms, which have the same valence (same chemical group). Perhaps nitrogen could be replaced with phosphorus on the same basis. Sulfur has the same valence as oxygen, etc. Even if “Positronic” brains are possible, they may need to be “grown” and trained just like organic ones. I did find his argument about the brain’s plasticity compelling. “Positronic” brains would need that ability, too, I think.

            I very much agree with his opposition to Chalmers’s “Principle of Organizational Invariance” especially when applied to software simulations. (I’ve posted about that in the past.) But I speculate that, in a sufficiently detailed isomorph, it might apply. But the facts are the only form of consciousness we currently know is found solely in organic brains, so I can’t fault the author for sticking to the facts.


          2. Sounds more like you disagree with the review than agree with it. 🙂

            I think we had the silicon discussion before, and its larger nucleus and electron cloud don’t allow it to form bonds as tight as carbon’s. So I would be negative on the silicon replacement working.

            I think the key to an artificial brain, if one is even possible, would be in the electromagnetic field. The brain not only needs to “think” but to hear itself “think”, so to speak. Maybe something along the lines of the circuits developed by the COGS group at the University of Sussex years ago.


          3. The silicon issue, I think, illustrates the replacement problem. If you combine carbon and oxygen, you have a gas; if you combine silicon and oxygen, you have glass. Nothing other than carbon in this universe has the attributes of carbon.


            “Sounds more like you disagree with the review than agree with it.”

            Huh? Where did you get that idea when I explicitly said the exact opposite?

            Replacing carbon with silicon was just something I threw out there, but, yeah, the chemistry is different enough that 1:1 replacement at the atomic level is almost certainly a non-starter. (I have read SF stories where silicon-based life excretes silicon dioxide, but the lower-weight atoms do seem more useful. CHON is amazingly versatile.)

            All I’m saying is that I can’t commit to biological chauvinism on the premise that the brain is a machine from which consciousness seems to arise, so it’s not hard to imagine that some other sufficiently isomorphic machine might also have emergent consciousness. (I guess your usual “Broad Speculations” don’t extend that far?)

            As I think you know, I’ve compared the emergence of consciousness in brains with the emergence of laser light in materials with the right configuration. There are many variations on the latter, so it doesn’t seem out of the question that there may be other configurations of the former. But I readily admit it’s just a speculation.


          5. Actually I don’t completely rule out some sort of machine consciousness. It is inherent in EM field theory. I personally think it unlikely but don’t rule it out.

            The usual “replacement” argument is that piece by piece of the brain is replaced while preserving functionality. Eventually you end up with a non-biological, presumably conscious, brain. It is hard for me to imagine how that would work.

            Let’s say we replace a serotonin receptor with something non-biological. It detects serotonin and behaves like a receptor behaves. What would power it? Would it run on glucose and oxygen from blood? Let’s say it does. What would happen when you finally reach the end point of replacing the blood? Would it stop working, or would it need an alternative power mechanism built in? A bigger problem is what replaces the serotonin itself: how does the artificial serotonin receptor suddenly switch what it detects to, perhaps, some sort of electrical impulse instead of serotonin? Or do we need to go back and replace artificial serotonin receptor V1 with artificial serotonin receptor V2, which doesn’t detect serotonin at all, but just detects the artificial “serotonin”? In other words, there would be a constant churning of replacement parts to maintain compatibility with the other new parts that get replaced.

            The problem is that each piece you replace would need to be compatible with major aspects of a biological organism until you reach the final stage where all of the pieces would need to stand on their own without any biological input.

            So I agree with this:

            “But, of course, there is no reason for believing any of this; and it seems far more likely that the process of replacing one’s neurons with computer chips would be little more than a very slow process of suicide, producing not the same behaviors as would a living mind, but only progressive derangement and stupefaction, culminating in an inert mass of diffusely galvanized circuitry.”

            I don’t think the replacement argument works at all.


          6. The problem I have with the isomorphic argument is that it completely lacks any details that make it plausible. But sure, I guess there is a lot we don’t know, so anything is possible. I just can’t put too much energy into it without some details on how the artificial brain would be isomorphic to the real brain and how that would make the artificial brain conscious rather than a zombie. I think Solms says he is working on something based on Friston’s free energy principle. Until I see some results, I don’t see much to talk about.


          7. Yeah, as I said above, “I don’t particularly disagree that neuron replacement (per the ‘fading qualia’ argument) would probably result in fading consciousness.” I can’t entirely rule out some possible way it could be done, but I have a hard time imagining it would work.

            “The problem I have with the isomorphic argument is that it completely lacks any details that makes it plausible.”

            More, or less, plausible than: string theory, SUSY, the MWI, the VRH, the BUH, the MUH, various multiverses, etc? 😀

            To me, it’s a pretty simple argument: [A] The brain is a machine that produces consciousness in virtue of its mechanism (no magic, no duality). [B] There is almost always more than one way to make a machine that does the same thing.

            As you suggest, we don’t know enough about the mechanism to say. By analogy, imagine, shortly after the invention of the electric lightbulb, that we stumbled on a case of laser pointers that fell through a time wormhole. Technology at that time wouldn’t be able to make much sense of the mechanism, let alone have any idea how to replicate it (even flashlights being in their future). But would it be implausible to think there might be more than one way to implement such a thing? (As we know now, there are, indeed, many ways to make a laser.)


  5. It’s interesting to me that Chalmers particularly is making the apparently unfalsifiable claim that “its” come from “bits”, when he’s also academia’s most famous proponent for the idea that phenomenal experience does not arise by means of “its” (and thus now “bits”), but rather by means of forces beyond worldly causal dynamics. 🤣🤪🤣

    Mike, is there any way that you’d differentiate “physical information” from worldly causal dynamics itself? If so then how would you describe that difference? If not then it seems like when you use the term it may be helpful to mention that this is what you’re referring to.

    Regarding various varieties of information, one that I like is “machine information”. We don’t usually consider rocks, molecules, planets and so on to be “machines”. Therefore naturalists can say that they function by means of default causality rather than machine information. But anything teleologically created to do something, such as a mousetrap, would function by means of machine information. Furthermore, we generally consider anything teleonomic (such as a tree, as well as our bodies) to also function as a machine, and so associated information would be driving its function.

    Another variety that I like is “computer information”. Here input information becomes algorithmically processed to incite output function. For example pressing the “m” key on my computer may be taken as information that’s algorithmically processed to produce output function such as the letter appearing on my screen. The genetic components of cells seem to function computationally as well as central organism processors (which are also known as “brains”). Interestingly I consider brains entirely non-conscious computers, though certain varieties of them algorithmically animate phenomenal experience producing mechanisms (possibly in the form of certain varieties of neuron produced electromagnetic radiation). Furthermore I consider phenomenal experience itself to function computationally in the sense that the experiencer weighs different options to figure out what might make it feel best, or an algorithmic value assessment. So here I seem to have reduced things back to semantic information!


    1. Hey Eric,
      I think causal dynamics and physical information are tightly intertwined. As you know, for a while I had concluded that physical information was causation itself, mainly because I was struggling to find a distinction between them. But when considering entropy a few months ago, I realized that a very high entropy system could have a lot of information with little causal potential left in it.

      So now, I’d say that physical information is a snapshot of causal processes, a physical pattern of a state somewhere in that processing. Rather than information itself being causation, it’s more accurate to say that physical information processing is causal processing, or vice versa. So Chalmers’ points about computers being causation machines resonated pretty strongly with me.

      At least that’s my current conclusion. It might be different next month.

      It seems like computer information is a special case of machine information, which itself is a special case of semantic information. So I’d agree they reduce back to semantic information. (And, of course, all semantic information is a special case of physical information.) I also agree that phenomenal experience is computational, and that value assessment and weighing are an important part of it.


      1. Mike,
        I consider causality ultimately in terms of a unified entity. Here there will be no part of it that has more causality than another part of it, high entropy or not, and because the entire system exists in a fully continuous way. Ultimately the full universe (and beyond time and space, but the whole causal structure itself), should exist as a single thing that functions in unison. I presume determinism here because that’s what naturalism mandates. Any deviation would exist as a void in this system, or magic that’s thus impossible to grasp because no explanation would actually exist for such an event.

        From this perspective I don’t currently see much reason to add a distinction between “physical information” and “causality” itself. You’re always welcome to pick an element of causality to designate as “physical information” if you like, though always do so definitionally rather than ontologically. There are no “true” definitions but rather only more and less useful ones in a given context. A good thing to avoid here should be the “what is…” meme and try to go with “what’s useful…”.

        Try this: Semantic information is at the top of this particular list (though specific kinds like “English” could go above), which would supervene upon computational information, which would supervene upon machine information, which would supervene upon causality itself. More classifications could be added as well, though for the naturalist it should all reduce to a single kind of stuff, or system based causality. So it could be that last time I didn’t actually reduce things back to semantic information, but rather extrapolated.


        1. Eric,
          I’m good with the usefulness criterion, but I wonder what that says for your definition of naturalism. It seems like an ontological philosophy that requires magical voids in its view of reality isn’t as useful as one that doesn’t. And whatever you want to say about quantum mechanics, it is eminently useful. The device you’re using right now is composed of technologies enabled by it. It seems like a more pragmatic view of naturalism is open to all phenomena that work according to discoverable principles. That excludes the supernatural while retaining science.

          On physical information and causality, it sounds like you’re where I was last year. That view worked for me for several months, so I won’t argue against it. I’m not sure enough of my current conception anyway.


          1. Mike,
            To be sure I don’t require ontological voids in causality, and in fact they can only exist if my stance happens to be wrong. In that case however I see no point in sugar coating these voids — they’d reflect a magical reality. Some decide that we can avoid there being any magic here if we simply call it something else, such as the oxymoron of “natural uncertainty”. That’s simply not the way I roll.

            So for quantum funkiness either there is a determinism displayed that science clearly does not grasp, which seems quite plausible to me, or there’s an indeterminism displayed that thus has no possibility to be grasped given associated magic. I’m certainly not going to go the route of Sean Carroll and fabricate an unfalsifiable “many worlds” idea to account for human ignorance here. I consider that explanation to be inherently magical, and a far more profound example than simply not grasping what’s behind an apparent wave-particle duality and whatnot.

            In any case I’m not denying quantum funkiness any more than quantum biologist Johnjoe McFadden does, whose theories depend upon that funkiness. Actually like me he even claims to be an ontological determinist. There should be something real here that science does not grasp and may never. Ultimately this will either be because it’s magical and therefore has no potential to be grasped, or be because causality does exist here which thus would have some potential to be grasped, and even if we always remain too stupid to do so. Naturalism itself however mandates a causal explanation, and surely not some Occam violating “many worlds” nonsense. (Not that I’m implying that you’re a many worlder, though given his amazing popularity and charm I do like to bitch out Sean Carroll on occasion when presented the opportunity.)


          2. Eric,
            I actually share your conclusion that reality is, ultimately, deterministic. Although the key word there for me is “ultimately”. I’m open to the possibility that in some cases the determinism may be of a type we can’t cash out. That said, what’s different between us, I think, is that I see that as a theory about reality, one that, despite the fact I think there are good reasons to hold it, I’m still open to possibly being false (or not useful).

            The question of determinism is why I find deterministic interpretations of QM interesting, and the many-worlds one in particular since it also preserves local causality. But that, I think, exposes another difference. I’m more willing to explore the current possible answers for QM, even though all of them call into question one bedrock principle or another of how we think reality works. The history of science has been one of many paradigmatic shifts. That history, it seems to me, means we shouldn’t hold on too tightly to any of those bedrock principles, which may not be as bedrock as we think.


          3. It’s good to hear that you’re an ontological determinist as well Mike. And indeed, I also agree that we could be wrong about this. In that case reality would function magically, right?

            You’re certainly more interested in quantum mechanics than I am, commonly reading books and posting on the subject. But if anyone tells me that bajillions of full extra universes spring forth from ours each second to address quantum funkiness, and that bajillions of full universes must then spring forth from each of them per second as well for infinite regress, then I don’t know what to do except laugh.

            By the way I was busy during the last post you did for this book but suppose I can weigh in on it a bit too. From my perspective there is nothing inherently spooky about manipulating someone’s brain so that they feel like they are experiencing all sorts of things when they’re actually just in a room hooked up to a computer. Here we’d effectively bypass sense organs such as eyes and ears to provide a person’s brain with information which would be neurally processed to create a designed world of qualia. I suspect that this qualia would exist in the right form of neuron produced electromagnetic radiation per McFadden. Conversely Chalmers doesn’t believe that brain stuff can be sufficient to create qualia and so relies upon some sort of supernatural additional explanation.

            On whether or not someone would know that they’re in a simulation, that seems a bit arbitrary to me. Theoretically if done with the same detail of our world then no, though that should be ridiculously complex to do. Or the designer could simply make a person feel like the world isn’t a simulation even given countless imperfections since they’d control how the person feels. Or the designer could even directly tell the subject that they’re in a simulation and thus under mind control — no problem proving it.

            On the idea of full simulations which thus lack any types of brain, I agree that popular consciousness theories permit the possibility. This is because they consider qualia to exist when certain generic information is properly processed into other information. I consider each of these proposals to depend upon supernatural dynamics however because causality mandates that computer information cannot exist generically — it should only exist when it animates appropriate output mechanisms. Thus the task is to figure out what those qualia producing mechanisms happen to be. We have a great clue to work from however since the brain seems to sometimes implement those mechanisms. Thus we should be able to experimentally narrow this down and generally answer the question, that is once enough professionals in the field come to my way of thinking.


          4. Eric, I’m not inclined to use the word “magic” in that case. Maybe if it were a complete free-for-all, but QM operates according to well-tested principles. The outcome of a single particle interaction can’t be predicted, but the probabilities of the possible outcomes can, along with the resulting behavior of large populations of those particles. It seems like calling that magic just encourages quantum mysticism.


          5. Mike,
            Ontological determinists do not consider QM magic because they presume all events to be perfectly caused to occur exactly as they do in the end, even if science doesn’t grasp some of the presumed causality. So apparently that’s you and me. It’s the person who says that there’s an ontological randomness to QM who invokes the existence of magic. And sure, they could say QM seems mainly causal, though at some point their position would be that causality breaks down to result in ontological indeterminacy in at least some capacity. Therefore this wouldn’t reflect natural events ontologically, but rather supernatural ones. Apparently there are only two effective choices here.

            Or would you like to propose some kind of middle classification between function that is systematically caused and function that is not? In the end I don’t see how it would be useful to say that something functions sort of causally in some capacity and thus sort of naturally. But what do you think?


          6. Eric,
            It’s a question of what someone thinks is fundamental. You and I see deterministic interactions as fundamental, with any apparent indeterminacy reflecting our lack of knowledge. An indeterminist sees indeterminate interactions as fundamental, with determinate phenomena emergent.

            I disagree with the indeterminists, but I’m not inclined to characterize their position in pejorative terms. They might turn out to be right. Although it’s likely this is never conclusively resolved since it’s always conceivable that something else exists just beyond our knowledge or extrapolations.


          7. Ah, so it’s too pejorative to characterize their position as “magical”. But then we’ve got to reduce their position back to something. If it makes anyone feel better we could use terms such as supernatural, otherworldly, spooky, dualism, “God playing dice”, and so on, not that the meaning ever changes. And actually I’m afraid that I’m not all that sympathetic either. These are the people who continually use their position to degrade the career of Albert Einstein, a true naturalist.

            Furthermore, consider the massive popularity of David Chalmers. Apparently he’s convinced lots of people that because we can conceive of philosophical zombies, phenomenal experience can only arise by means of something outside the natural order of things (though correct me if I’m wrong about this, since it’s mainly just what I’ve heard). I suspect that close to zero of his followers, including himself, go by the classification of “supernaturalist”, and yet that’s what their position would effectively reduce back to. So apparently he’s talented enough to write an argument that influences people to believe things that they wouldn’t otherwise, essentially because there are too few of us who call a spade a spade. So it seems to me that political correctness itself can have such costs.

            I wonder if it’s even logically possible for something apparently determinate to generally emerge from something that is fundamentally indeterminate? To me that seems wrong a priori, like tricking ourselves that a true circle could ever be composed of straight segments. The two ideas seem incompatible by definition. This is not to say that fundamental randomness doesn’t exist. I realize we can’t know that. But I don’t see a coherent way of denying that such a reality would be “magical”.


    1. You made me go back and look up where Chalmers discusses analog information in the book. In terms of simulation, his point is that, in principle, any discrete system can simulate any continuous one to arbitrary levels of precision. The key phrase there is “in principle”, since simulating a physical system perfectly might require a level of precision below the Planck scale, a possibility Chalmers doesn’t really get into.

      He also notes that structural information can be generalized to quantum and analog differences, which actually makes it more like physical information than I thought. That’s good to know. I might start using that term rather than physical information, although I still like the “physical” prefix, so we’ll see.

      Anyway, I think Chalmers would argue that if we can simulate any analog system, like a wave, to enough precision to do it accurately, then it follows that those systems could ultimately be composed of discrete components. Although again, he doesn’t consider the possibility that discreteness might be so far down the ontological scale that it might be inaccessible. (To be fair, inaccessible doesn’t necessarily mean undiscoverable.)


      1. It seems like it goes beyond a question of whether a wave could be simulated digitally. If you are simulating the world, wouldn’t you need to simulate all waves, not simply any given wave? But how much computational power is required to do that?


          1. That is the question. I think Chalmers (and other philosophers like Nick Bostrom) accept far too hastily the proposition that we can perfectly simulate the universe within the universe. I don’t see how that could be done without some level of coarse graining, or the whole thing running far slower than reality, or some other compromises.

          Of course, maybe those compromises have happened, and some of the paradoxes of modern physics, like quantum mechanics, are the result. Or if we ourselves are part of the simulation, maybe we’re just incapable of noticing its flaws, by design.

          It’s also possible that if we’re in a simulation, that simulation is in a host universe that is very different and far more complex than ours, so the computing power isn’t an issue. Although if that’s the case, any insights we think we have from the way computers work in this universe seem dubious. That scenario seems more akin to us just being in a constructed universe.


          1. Ultimately doesn’t our classical world derive from a quantum reality? Is Chalmers or Bostrom suggesting that underlying the quantum reality is a different reality that looks more classical?


          2. The classical world emerging from the quantum one definitely seems like the universe we live in. Of course, exactly how that happens comes down to which interpretation you favor.

            I haven’t read Bostrom, so I can’t speak to what he’s saying, but Chalmers stipulates that when he says it-from-bit, he’s talking about something that would equally apply to it-from-qubit. Of course, most engineered qubits today are based on particle spin. It-from-qubit ones would be something much more fundamental, so that even the analog properties of a particle would ultimately reduce to them.


          3. “It-from-qubit ones would be something much more fundamental, so that even the analog properties of a particle would ultimately reduce to them”.

            Is there any evidence from physics that this is likely?

            To me, it seems like part of the complexity of everything comes from a certain amount of quantum randomness.


          4. Seems like that puts the whole thing in a wildly speculative realm. I’m not especially averse to that but I wouldn’t write a couple of hundred pages based on it either unless I used a pseudonym.


    1. That’s a good way to describe it. And it’s inert because it has no extrinsic relations with anything. It doesn’t affect anything. Which makes its existence or non-existence utterly academic.

