A basic question on the black hole information paradox

The black hole information paradox has been receiving some attention lately. The issue is that information, that is, any pattern of matter, that falls into a black hole is completely crushed as it approaches the singularity, losing whatever differentiation it might have had before. This has long been recognized as a problem, because in physics, information is supposed to be conserved.

Sabine Hossenfelder did a post on this issue a while back that I thought was pretty good. Interestingly, Hossenfelder explained the whole thing with nary a mention of “information” until the end. It’s all about deterministic processes and reversibility. It was one of the steps in my own realization that information is causation. But that left me interested in how this issue might eventually be resolved.

One solution that has historically been considered is that maybe all the information is preserved on the black hole’s event horizon. This idea comes from the fact that, to an external observer, due to time dilation, an object never completely falls into a black hole; it just appears to slow more and more as it approaches the event horizon.

As I understand it, a difficulty for this view is Hawking radiation, a very small amount of radiation that, due to quantum processes near the event horizon, manages to escape the black hole. Eventually that radiation leads the black hole to decay away. (“Eventually” for a respectable black hole might be something like 10^100 years.) The problem is that, afterward, all that is left is an undifferentiated cloud of Hawking radiation in the universe, with no possibility, even in principle, of reconstructing what had fallen into the black hole throughout its history, again violating the conservation of information.
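For a sense of where numbers like that come from, here’s a rough back-of-envelope sketch in Python, using the standard order-of-magnitude evaporation-time formula (and ignoring the incoming CMB radiation, which, as noted in the comments below, actually dominates for now):

    import math

    # Rough Hawking evaporation time: t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    HBAR = 1.055e-34   # reduced Planck constant, J s
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg
    YEAR = 3.156e7     # seconds per year

    def evaporation_time_years(mass_kg):
        seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
        return seconds / YEAR

    print(f"{evaporation_time_years(M_SUN):.1e}")         # ~2e67 years, solar-mass BH
    print(f"{evaporation_time_years(1e11 * M_SUN):.1e}")  # ~2e100 years, supermassive BH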

Recently, there was a Quanta article implying that maybe the information paradox has been solved. As I noted when I tweeted the article, it’s extremely interesting, but it seems like there are a lot of assumptions in the calculations, many of them non-trivial. And I’ve seen a number of physicists express skepticism that the paradox has really been solved, most recently Ethan Siegel at Starts with a Bang.

I have nothing intelligent to say on whether the paradox has been solved. That’ll have to be decided among theoretical physicists. But I do have a question, an extremely basic one, one that I’m hoping someone more familiar with the physics can answer. I’ve tried finding the answer in popular accounts of the paradox, but have come up empty so far.

As I noted above, any pattern of matter entering a black hole is crushed, right down to its sub-atomic components, as it approaches the singularity. (It does approach the singularity from the perspective of an observer falling in, even while, to an external observer watching what happens, it remains smeared on the event horizon. Relativity makes my head hurt.)

This crushing away of any differentiation leads to the black hole “no-hair theorem”, which basically says that a black hole, any black hole, can be fully described by its mass/energy, linear momentum, angular momentum, position, and electric charge. So black holes can’t be distinguished by what they consumed to acquire their size. A black hole that reached its size by consuming stars is indistinguishable from one that reached its size by consuming dark matter, except by the above attributes.
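To make that bookkeeping concrete, here’s a minimal sketch (just an illustration, not real physics code) of everything an external observer can in principle know about a black hole under the no-hair theorem:

    from dataclasses import dataclass
    from typing import Tuple

    # Hypothetical illustration: the complete externally observable state of a
    # black hole per the no-hair theorem. Two holes with equal fields here are
    # indistinguishable, no matter what fell in to build them.
    @dataclass(frozen=True)
    class BlackHoleState:
        mass: float                                   # kg (includes absorbed energy)
        position: Tuple[float, float, float]          # m
        linear_momentum: Tuple[float, float, float]   # kg m/s
        angular_momentum: Tuple[float, float, float]  # kg m^2/s
        charge: float                                 # coulombs

    # A hole built from stars and a hole built from dark matter that end up
    # with the same numbers produce the same BlackHoleState.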

This seems to fit with the information paradox issue. Except that no account I’ve read talks about what effect the incoming information has on the attributes that are distinguishable. Suppose we have two books, each the same size and weight with the same number of pages, but with completely different text. A casual reading of the no-hair theorem, as well as the information paradox, might be that each book, if thrown into the black hole, should have exactly the same effect as the other.

But this doesn’t seem true to me. While the only difference between the books is the text, that is a difference, a physical difference. The difference amounts to where the atoms of ink are attached to the various pages. It seems like these different configurations of atoms should make a difference in the black hole’s attributes, an infinitesimally minute difference to be sure, but a difference just the same.

If so, if these differences result in changes in angular and linear momentum, however minor, changes that put the black hole at a different position at a different time than it otherwise might have been, then those seem like changes that affect the outside universe. The momentum, position, and path of the black hole make a difference in its gravitational effects, effects which propagate to the surrounding universe, altering the way the black hole affects surrounding stars, gas, or anything else.

If that’s true, then my question is, why isn’t the lost information contained in all these perturbative effects as they propagate into the universe? Why do physicists even need to concern themselves with things smeared on the event horizon or the final nature of Hawking radiation?

I fully realize this is me not understanding something fundamental about what is happening. I’m just not clear on what it is. I’d be grateful for any ideas or references that would set me straight.

66 thoughts on “A basic question on the black hole information paradox”

  1. Interesting question! One minor nitpick: The no-hair theorem specifies only mass, electric charge, and angular momentum. The (linear) momentum and position can be zero from a given frame of reference, so they’re not usually considered as part of the no-hair theorem. That said, they would certainly apply to your question.

    Maybe we can reason like this: If the books had the same mass, electric charge, and angular momentum, is the only difference the configuration of components? That configuration would be the information that seems to be lost behind the BH horizon.

    (BTW, another minor nitpick. It’s not the crushing that leads to the no-hair theorem, it’s being “forever” lost behind the BH horizon. In spinning black holes, the singularity can be a ring, and it’s possible for an object to fall behind the horizon and forever orbit inside. In some BHs, not all geodesics lead to the singularity.)

    Clearly if we threw a more massive book in, or one with different angular momentum, or highly charged, that would make an observable difference. But if the only difference is the configuration information then I can see that not making a difference.

    But who knows. Good question!

    1. Thanks Wyrd.

      On the nits, there are probably nuances I’m missing, but for the no-hair stuff, I was going off this section of the Wikipedia article: https://en.wikipedia.org/wiki/No-hair_theorem#Changing_the_reference_frame

      On being lost forever behind the BH horizon, this paragraph from Leonard Susskind in The Black Hole War is what led me to believe that’s not the issue:

      It wasn’t the fact that bits of information might be lost behind the horizon that so deeply disturbed ’t Hooft and me. Information falling into a black hole is no worse than locking it away in a tightly sealed vault. But something much more sinister was at play here. The possibility of hiding information in a vault would hardly be a cause for alarm, but what if when the door was shut, the vault evaporated right in front of your eyes? That’s exactly what Hawking had predicted would happen to the black hole.

      Susskind, Leonard. The Black Hole War (p. 184). Little, Brown and Company. Kindle Edition.

      In the case of the book, I think the different configuration of the ink atoms would translate into differences in the book’s angular momentum as it’s falling in and being consumed, differences that seem like they would get imparted to the black hole’s linear and angular momentum. Again, these would be unimaginably minute changes on the scale of the black hole, but if we’re trying to account for information loss or conservation, it seems like it would matter.

      But as I noted, this is far too basic to be anything any of the physicists have missed. I just don’t know the answer.

      1. Regarding the Wiki article, note the last paragraph of the linked section (bolding mine):

        By changing the reference frame one can set the linear momentum and position to zero and orient the spin angular momentum along the positive z axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge.

        Note also the first line of the article, which states the no-hair theorem in terms of just those three. Your question, though, does seem to invoke all eleven numbers, since we’re talking about externally observing the BH.

        Susskind doesn’t seem to explicitly say matter is destroyed (and I think per information not ever being destroyed, it can’t be). He may indeed mean that, but he might also be simplifying for reader clarity. The metaphor of the vault evaporating is slightly misleading because that evaporation process includes the stored information. The contents of the BH vanish as “evaporation” proceeds. The conundrum is that Hawking radiation is essentially super low-level waste heat containing (as far as we know) none of the information that comprised the infalling matter that is now “evaporating.” It’s not like the outer walls of the “safe” vanish leaving the pristine contents. It’s that the whole thing, every bit of it, is radiating away.

        I’m not sure ink atoms would change the L (angular momentum) of the book. Each ink atom would have its own (essentially random) L, and those would tend to cancel out as far as contributing to the book’s L. The L of the book would be how much it was physically spinning. (L is essentially a classical property, although in QM its values are quantized. It’s the property quantum spin is said to be similar to but utterly different from. 🙂 )

        I didn’t mention this last time, but I’m among those (along with Penrose, IIRC) who think the idea that information is never lost needs to be re-examined. We’re not sure it’s true. So it’s possible there is no paradox here. Conservation of properties is based on symmetries, and it’s not clear what symmetry applies to information. The point is, there may be less of a paradox there than thought. Maybe information is lost in a BH.

        But if that isn’t the answer, then I’ll stick with my earlier guess that the printed text in the book represents only configuration information that doesn’t really affect the basic properties of the book. Different books would not have differences that mattered in this case. But I’m definitely just guessing here.

        As an aside, one thing that catches my eye about Hawking radiation is that the radiation from the CMB is (currently) greater, so even BHs with no infalling matter still absorb more mass from the CMB than they lose due to Hawking radiation. Their evaporation won’t even begin until the CMB becomes faint enough to allow it.
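        A quick back-of-envelope sketch of that point, using the standard Hawking-temperature formula and rough constants, for anyone who wants the numbers:

            import math

            # A black hole only loses net mass once its Hawking temperature
            # exceeds the CMB temperature (~2.725 K today).
            HBAR, C, G, K_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23
            M_SUN = 1.989e30   # kg
            T_CMB = 2.725      # kelvin

            def hawking_temperature(mass_kg):
                return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

            print(hawking_temperature(M_SUN))   # ~6e-8 K: far colder than the CMB
            print(hawking_temperature(4.5e22))  # ~2.7 K: roughly lunar-mass break-even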

        The other interesting thing is that the image of Hawking radiation being virtual pairs created right on the edge of the horizon and one of the pairs being sucked into the BH… is completely false. It’s a metaphor to illustrate what happens, but it doesn’t happen that way. Like, at all. I’d have to review the material, but IIRC, it happens in a region around the BH up to a distance of 2R (twice the radius of the horizon). And it has to do with how the event horizon creates a “wall” that affects the local vacuum.

        1. I would agree that if we change our reference frame to that of the black hole’s then it does eliminate many of the variables. However, to be consistent, we’d have to account for what’s happening in the rest of the universe in relation to the black hole, which really just brings back those attributes in a different form. And it’s the relationship of the BH to the rest of the universe I’m wondering about.

          Susskind had spent some time on Hawking radiation before that paragraph, so in its context, it’s clear what he means by evaporation, that it wouldn’t include the information.

          I never said that matter gets destroyed. It just gets reduced to the extent that it loses all its individual combination of properties. (At least according to the theorem.)

          It’s actually okay if the momenta of the ink particles cancel each other out. As both Hossenfelder and DM below mention, it’s the reversibility that’s at issue. If, in principle, you could account for all the outputs of the system and work backwards, would the contents of the book become available in that accounting? Canceling out shouldn’t matter since all the factors should still work out in reverse.

          Definitely if physical information is not conserved, then this isn’t an issue.

          On Hawking radiation, I know Hossenfelder made her point about the way Hawking described it. However, a lot of physicists on Twitter pushed back. If you read what she said, pointing out that the formed particles are smeared out over a vast area, it doesn’t seem to change the basic proposition. Hawking just didn’t go into the details about the new particle’s vast wave function. However, my understanding is that if it underwent a measurement event, relative to an observer, it would still be a particle.

          1. “And it’s the relationship of the BH to the rest of the universe I’m wondering about.”

            Yes (as I’ve agreed)! 🙂

            “…so in its context, it’s clear what [Susskind] means by evaporation, that it wouldn’t include the information.”

            Right, that’s the point. It does contain the mass/energy of what fell in, but in the extremely high-entropy, information-free form of Hawking radiation. Thus the substance evaporates, but what happened to the information?

            “If, in principle, you could account for all the outputs of the system and work backwards, would the contents of the book become available in that accounting?”

            My opinion for a long time on that has been no. It’s possible for information to combine in ways that lose previous information and are not reversible. (I don’t subscribe to the notion that reality is reversible. And I don’t subscribe to conservation of information.)

            As a trivial example, if an analog clock shows 3:00, how many times have the hands gone around? If presented with a 64-bit number, what calculations generated that result? In principle, the entropy increase of those operations is detectable, but I’d argue the information is not recoverable.
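            In code terms, a trivial sketch of the clock point: the displayed time is a many-to-one function of the history, so the pre-image can’t be recovered from the output alone.

                # Toy version of the clock example: many histories map to one display.
                def clock_face(total_hours_elapsed):
                    return total_hours_elapsed % 12

                # 3, 15, 27, and 1203 hours all display as 3:00; the "how many times
                # around" information is simply gone from the output.
                print({h: clock_face(h) for h in (3, 15, 27, 1203)})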

            “On Hawking radiation,…”

            Yes, what you end up with is a photon that carries away a tiny bit of the BH’s mass. I’m just commenting that the picture of two particles springing into life right at the BH horizon and one of them being trapped inside it while the other one escapes is a false picture. It encapsulates what basically happens without requiring a treatise on quantum field theory and Fourier transforms.

            For one thing, the escape velocity right at the horizon is formidable, so the energy required to climb out of it is beyond the very low waste-heat energy of Hawking radiation.

            There was a PBS SpaceTime video that got into this. I’ll try to find it.

          2. “I don’t subscribe to the notion that reality is reversible. And I don’t subscribe to conservation of information.”

            So for you, this whole question may be meaningless. I’m open to those not being true, but I expect that they are. But it’s one of those things we’ll never know for sure unless they’re falsified.

            On your examples, I think the trick is to widen our scope until we have the information we need to make those determinations. For a specific clock in a specific environment, or a specific number recorded in a specific place, if, in principle, we know enough about all the affected entities, we should be able to work backward and make the determinations. (Doing it in practice is, of course, a whole different matter.)

          3. That’s a theoretical presumption, and exactly the one I’m challenging. Often “working backwards” implies knowledge of earlier conditions. The question is whether what one can measure (of as large a scope of environment as you like) of the current condition of the clock or 64-bit number contains information that unambiguously unravels all the steps that led up to it.

            I’m not sure I believe that’s possible, even in principle.

        2. “Conservation of properties is based on symmetries”

          That’s a whole ‘noether subject. Nyuk nyuk.

          But I thought the theorem said that if symmetry, then conservation. Not necessarily that if conservation, then symmetry.

    1. James,
      I’m afraid that I won’t be able to help Mike out here either. The proposed physics of black holes is so far beyond me that I don’t feel that I even understand the question. Mike’s “information is causation” proposal, however, is something that he has used to counter my own “thumb pain” thought experiment. Furthermore, as I recall, it first occurred to him when Professor Eric Schwitzgebel mentioned something to that effect when I brought up my thought experiment over there. Though I have no further indication that Schwitzgebel puts any stock in the idea that information exists as causation itself, Mike seems to really like it.

      One counter is that this seems to beg the question. We naturalists demand causal dynamics in order for something to exist. But if one claims that something “is causality itself”, well that mandates inherent naturalism. For example “God is causality”. I’m afraid it’s not that easy to refer to something that way effectively.

      Furthermore it seems to me that nothing is known to exist by means of information in itself that isn’t also realized by means of associated instantiation mechanisms of some kind. Thus it would be incredible to assert that if information on paper correlated with what the brain receives from a whacked thumb were processed into other information on paper correlated with the brain’s processing of it, then “thumb pain” would exist to something by means of this paper shuffle. Why would qualia be the only thing known to exist in such a way?

      Mike and I have gone over this extensively, but while searching through our discussions to better respond to you I see that he once linked to a post on the matter to suggest that it was more than Eric Schwitzgebel that gave him the idea for information as causation.
      https://selfawarepatterns.com/2017/07/15/does-information-require-conscious-interpretation-to-be-information/

      Apparently you and I were there, though this was before I realized the full extent of Mike’s position (and thus before the creation of my countering thought experiment). I mention this because I now see that in this July 2017 post he discussed how information such as DNA would not exist as “information” without instantiation mechanisms to make use of it.

      Wow! That’s also my argument regarding qualia! In retrospect it could be that the strength of my thought experiment, combined with his longstanding belief that he might exist as a computer simulation of something else, have pushed him into a naturalistically precarious spot.

      1. Eric,
        When I referred you to that 2017 post, it was to show that my evolution toward the causation position predated Eric S’s remark. That remark just clued me in to a short way of expressing it.

        There are over 1000 posts on this site over the last seven years. Given that my views are constantly evolving, finding inconsistency in language throughout those years is trivial. But it has no bearing on my current stance.

        Nonetheless, you don’t seem to have understood what I said back then about DNA. Here’s the snippet I think you’re referring to:

        For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information. But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA’s configuration. These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA’s configuration. Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.

        Today, I would only be more careful to say that DNA chromosomes that spontaneously formed outside a cell wouldn’t be genetic information, but that they would still be physical information of some kind. So the difference between what I said then and what I’d say now is pretty minor, and makes no difference for my position on qualia.

        1. “…but that they would still be physical information of some kind.”

          I’m so glad you clarified that! 😀 😀

          As an aside, anyone heard from James Cross lately? I just noticed he hasn’t been around in a while.

        2. I don’t know about that Mike, your point seems pretty clear to me. Let’s look at your first line:

          For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information.

          Exactly! And I accept your recent clarification as well. Though it may not now function as the “DNAing” kind of stuff of biology, there should still be causal interactions with its environment which might thus be productively referred to as “informational”. Was there any other point that you wanted to bring to my attention as well here?

          Regardless, there is also the question of how well you consciously grasp the point that I’m making. Apparently in 2017 you had no problem defining a given form of information in terms of the mechanisms that it animates, such as DNA only existing that way in terms of biological dynamics. But that was before you incited me to develop what I refer to as my “thumb pain” thought experiment. Back then you merely had to admit that Searle, with a vast lookup table to convert inputted scribbles on paper into outputted scribbles on paper, would constitute a machine which itself could conceptually “understand” essentially what a person might. “Understanding” however is a famously vague term and so quite ripe for all sorts of interpretation. Thus with this submission I don’t think that you sacrificed very much.

          Today however you’re faced with my own unambiguous thought experiment. Thus in order to preserve certain past beliefs you seem to have decided that qualia requires no unique physics based mechanisms in order to exist as such, unlike DNA and all else that you know of existence. This is why I consider “informationism” to be founded upon a supernatural premise. Information on paper that’s somehow converted into other information on paper should not create something which experiences what you do when your thumb gets whacked, any more than DNA outside the domain of biology should do any “DNAing”. This is to say that in order for brain information processing to produce qualia, associated mechanisms will be mandated in a natural world.

          Just as Newton’s gravity proposal would have relied upon a supernatural premise had he not theorized an unknown mechanism to be at work as well, today global workspace theory and several other consciousness proposals depend upon supernatural dynamics when they are taken alone.

    2. James,
      The identification of information with causation was something I described and we discussed in this post: https://selfawarepatterns.com/2020/05/26/pain-is-information-but-what-is-information/

      Hossenfelder’s post was just one step on my journey to that conclusion. Her describing the information paradox purely in terms of issues of determinism and reversibility without mentioning information until the end, to essentially note that the word “information” actually caused confusion, was one of the things that made me go “hmmm” about what information fundamentally is.

      1. Looking over that discussion I see we didn’t really resolve the issue, namely, the identification of information as causality. I recognize/appreciate/agree with the association between causality and information, but don’t think you have sufficiently defined either. Don’t need to continue that discussion unless you have something to add, like said definitions.

        *

  2. Hi Mike,

    I’m no physicist, but I’m not sure I see the problem you see.

    You’re a programmer, so think of it like a hashing function. In all the rest of physics, there are no hashing functions, only one-to-one functions. Information is preserved.

    But dump stuff in a black hole and it is hashed. You can imagine that making a tiny change will make a difference in the output, and you’ll generally be right in practice, even if in this case the difference is infinitesimal (it wouldn’t be in a good hashing function). But there’s no way to work backwards from the output to the input, because in theory there’s any number of different inputs that could have led to that output.

    In the scenario you imagine, you seem to be distracted by the difficulty of constructing different books with absolutely identical masses, momenta, etc. Indeed, this is impossible for practical purposes, just as it is often impossible for practical purposes to find hash collisions in a good hash algorithm. So certainly your choice of book will make a difference to the black hole.

    But you can’t work backwards because the only information that is preserved is the effect on the net momentum of the black hole (forgetting charge). In principle, any number of objects could have had this net effect, just as in a hash algorithm any number of inputs could have produced this output.

    You’re not alone in thinking that this doesn’t feel quite right. That’s why it’s a paradox!
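    If it helps, here’s a toy sketch of the many-to-one point in Python (obviously not real physics, just the collision idea): a “hash” that only preserves a few bulk totals can’t distinguish different arrangements of the same ingredients.

        # A toy "black hole hash": only a few aggregate totals survive,
        # so differently arranged inputs collide.
        def bh_hash(book_text: str):
            # pretend each character is an "atom"; only bulk totals come out
            return (len(book_text), sum(map(ord, book_text)))

        book_a = "the cat sat on the mat"
        book_b = book_a[::-1]  # same "atoms", opposite arrangement
        print(book_b)          # "tam eht no tas tac eht"
        print(bh_hash(book_a) == bh_hash(book_b))  # True: the arrangement is unrecoverable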

    1. Hi DM,
      I do understand that reversibility is the main issue. Of course, everyone understands that this is reversibility in principle rather than practice, since capturing all the Hawking radiation emitted over the life of the black hole is impossible.

      But my question goes to why physicists are only looking at the Hawking radiation for the answer. If the minute differences that make up information do have causal effects on the black hole’s angular and linear momentum (and yes, possibly also charge), then it seems like those causal changes would reflect back out into the universe, in the precise gravitational effects the black hole has on everything else.

      Put another way, why isn’t a large amount of the information physicists are wringing their hands about reflected back into the universe much sooner than the Hawking radiation?

      For that matter, even the stuff that’s smeared on the event horizon would be sending out electromagnetic radiation back into the universe, which itself should be having causal effects.

      Again, this is all far too basic for theoretical physicists to have missed. I’m just wondering what the answer is.

      1. Hi Mike,

        I’m just not seeing the problem really.

        Consider the following scenarios.

        Scenario 1: Suppose a non-rotating black hole absorbs one hydrogen atom travelling north at 0.5c and one travelling south at 0.5c at the same time, both targeted directly at the center of the black hole.

        Scenario 2: Suppose a non-rotating black hole absorbs one hydrogen atom travelling east at 0.5c and one travelling west at 0.5c at the same time, both targeted directly at the center of the black hole.

        These scenarios start with different configurations, but the final result is the same. The black hole’s velocity and angular momentum are unchanged, but its mass has increased (I guess by the mass and energy of two hydrogen atoms travelling at half the speed of light). The result being the same, there’s no change in the black hole’s gravitational effects on the rest of the universe, not even a negligible one. I’m not sure why any of the missing information should be reflected back into the universe. Are you thinking that the atoms would have impacted on the rest of the universe before falling in, or are you thinking that they will have different effects on the distribution of mass of the black hole after they pass the event horizon, e.g. negligibly stretching the black hole along the north-south or east-west axes? I think such negligible stretching may occur for a brief time but then it’ll settle down into a perfect sphere again.

        You raise an interesting point about the infalling matter emitting electromagnetic radiation. But not everything falling into a black hole would emit radiation in this way. Photons falling into a black hole wouldn’t, for example.

        1. Hi DM,
          Interesting scenarios. But I think you provided the answer. The incoming particles have to have some kind of effect on the black hole, even if it is infinitesimally minute and for the briefest period of time. However, that effect will have (also minute) gravitational consequences, which should propagate throughout the universe.

          That said, my question does assume that all the unique aspects of whatever falls into the black hole are crushed away. If so, then the effects of that crushing should affect the black hole’s numbers, which should have effects in the universe, allowing information to be reflected away.

          But if that’s not what happens, if the matter that falls in is allowed to retain its information in some form or fashion, then it does seem like it becomes a serious problem if the Hawking radiation isn’t affected by it. With time dilation, it seems possible that all the incoming information may not be crushed away before the black hole dissolves through Hawking radiation.

          So that may be why the paradox is real.

  3. Science has the cart before the horse on this one. Black holes don’t “consume” anything because they are integral to the structure and form of the universe. This is a concept that even Neil deGrasse Tyson is willing to entertain.

    Peace

          1. To get serious, continuing the sub-thread above, if the only outputs are heat and CHON, is it really possible to reconstruct the inputs? There is indeed this idea that, strictly in principle, it should be possible, but I think it’s worth questioning.

            It’s part of the larger question about just how deterministic reality really is.

    1. I watched this PBS video. It’s like the moderator briefly mentioned: all of these theories are based upon General Relativity and Quantum Field Theory being correct. Neither GR nor QFT is correct, so everything else that follows is false.

      This kind of stuff is a joke: good for entertainment but that’s about it. I’m not sure which metaphysical position is more screwed up, materialism or idealism. I think we all need to relocate to Oregon, do some legalized psilocybin mushrooms, meet Jesus and get our heads straight.

      Peace

      1. I think it does GR and QFT poor service to dismiss them as not correct when both theories provide such outstanding results compared to observations. We know both are, at best, incomplete, and there are unanswered questions, but I think it’s premature to throw out the baby with the bathwater.

        1. GR and QFT are models Wyrd, you know that and I know that. As an instrumentalist like Mike will be quick to assert, models are “useful” if they can make accurate predictions. But just because a model is able to make accurate predictions does not mean the model is the REALITY. I am opposed to that type of dogma coming out of our scientific community. Furthermore, there is something more fundamental underlying REALITY that both materialism and idealism are missing (or I should say, ahem, unwilling to admit).

          We shouldn’t discard the models just because they do not reflect the true nature of reality, because from a technological aspect they are very, very, very useful. But metaphysically, they are use-less. That’s the main thrust of my point in case you were wondering??!!?? We need more understanding of the nature of our being if we are going to survive the madness of this world, not more technological advancements.

          Peace

          1. Mike the instrumentalist points out that any understanding of reality is a model. So comparing a scientific model to that understanding of reality is comparing models. We never know reality except through our models of it, even if it’s just the primal one built by our senses.

          2. “I am opposed to that type of dogma coming out of our scientific community.”

            I’ve never met anyone pushing such a dogma. Every scientist I’ve met knows theories are just models. Many of them have said so explicitly in their popular writing.

          3. Since when is the metaphysical position of materialism promoted by the scientific community not dogma?

            Peace

    2. Thanks! Watching that video made me realize an important assumption inherent in my question in the post. I’m assuming that the no-hair theorem variables are a complete description of the system, rather than just what we can know about it. If the former, then my question makes sense, since erasing the matter’s unique configuration should cause effects in the black hole’s numbers and hence be reflected back into the universe via the changes in gravitational effects.

      But if information does somehow survive in the black hole, or at least on its event horizon, then the question of what happens to it as it evaporates into Hawking radiation does seem more germane.

      I need to look up his video about information conservation in quantum physics.

        1. Yes, another good video! Some of the things he touches on connect with the view that information may not always be preserved.

          Firstly, it’s not based on Noether’s continuous symmetry principle, so it doesn’t have the same status as conservation of energy, momentum, charge, etc. Note, too, that he mentions time reversal also isn’t a conservation symmetry.

          Secondly, he notes that the Copenhagen Interpretation does mess with the notion of information conservation. The MWI preserves it overall, but measurement still erases information in a given branch.

          Lastly, he mentions that quantum information is preserved in the whole wave-function. It requires multiple experiments to extract the possible outcomes and, thus, the information. But, of course, that requires separate experiments. So while QI may indeed be preserved, it can never be fully measured.

          There is also that, even granting information conservation and quantum threads of information that weave from the Big Bang to forever, I think the quantum/macro divide is important. The macro examples of the clock hands or 64-bit number show information can certainly be lost at the macro level. It’s possible (in my view) that quantum information, even if preserved, is lost to any macro observation.

          The bottom line is that it’s possible conservation of information is more complicated or more conditional than other conservation laws. One answer to the BH Info Paradox is that information simply isn’t conserved the way we think it is.

          1. My overall takeaway from the video was far more pro-information conservation than yours.

            “The MWI preserves it overall, but measurement still erases information in a given branch.”

            I think it’s more accurate to say that information is inaccessible from a particular branch. As you note, it’s preserved overall.

            The macro divide only seems relevant in collapse interpretations. It doesn’t seem like an issue in non-collapse ones. In those, everything is quantum. In collapse interpretations, the issue is how to define “macroscopic”. At what threshold does it take hold? At what point will efforts to hold ever larger objects in superposition fail?

            One thing these videos are making increasingly clear to me, is why so many cosmologists are skeptical of Copenhagen, at least an ontological version of it. (The purely epistemic version is trivially true, but it remains true even if another interpretation is also true.)

          2. Well, yeah, that was kind of the point of the video. The idea that information isn’t always conserved is definitely a minority opinion.

            (FWIW, I think understanding the quantum-macro divide is ultimately what will falsify the MWI. It will make it clear that measuring devices going into superposition is an absurdity.)

          3. Indeed. I’m impressed by how challenging it is to preserve a quantum state; usually ultra-low temps and EMF isolation are required to avoid decoherence.

            See, I think Everett was right… to a point. I think the detector, at the point of interaction, probably does just evolve into a combined wave-function where superposition is possible. But decoherence times for anything massive in an environmental bath of radiation approach or exceed the Planck time. Even conservative estimates have times on the order of 10^-20 seconds.

            So the point of interaction may well evolve into a superposition, but that decoheres extremely fast. Speed of light is only one foot per 10^-9 sec, so the superposition doesn’t have any chance to spread (let alone affect the entire detector) — it evaporates into decoherence almost immediately.
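            For a sense of scale, a quick back-of-envelope check, taking that 10^-20 second figure at face value:

                C = 2.998e8            # speed of light, m/s
                T_DECOHERE = 1e-20     # assumed decoherence window, seconds
                print(C * T_DECOHERE)  # ~3e-12 m: a small fraction of an atomic diameter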

            There is no “collapse”, no discontinuity, but the wave-function evolution encounters a situation of extremely fast decoherence during a measurement.

          4. Remember, decoherence is not the collapse. Nothing about decoherence in and of itself makes the other branches go away. For that, you have to add in something else, something not in the quantum formalism. Without that, what you just described is rapid branching.

          5. The difficulty is we know the branching happens prior to decoherence, and nothing about decoherence prevents it from continuing. At least unless there actually is a collapse of some type.

          6. Do we know that? One of my issues with the MWI is lack of clarity on when branching occurs. Everett seems to suggest it happens at the measurement. Another view suggests the branches always exist but are identical up to the branching point. In some situations it’s not clear branching occurs at all (or if it does, how it matters).

            (And I need clarity on why a spin measurement doesn’t “collapse” the wave-function under MWI, as we discussed previously.)

          7. We do know it. It’s what entanglement and quantum computing are all about. I think you may be overthinking branching. When a particle is in a superposition of spin up and spin down, that superposition already has two branches. If that particle interacts with another particle and becomes entangled with it, they are now in a joint superposition of their combined states. As we add more particles, the number of concurrent states, that is branches, multiplies.

            A measurement event is simply dramatically increasing the number of particles involved, which typically leads to decoherence. But decoherence only means the branches can no longer interfere with each other. (Or at least that the interference is no longer detectable.) There’s nothing stopping the process from continuing into the environment, unless there is a collapse that annihilates all the branches save one from reality.
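            A minimal numpy sketch of that first step, a particle in superposition entangling with one other particle and ending up in a joint superposition of combined states:

                import numpy as np

                ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

                # One particle in an equal superposition of up and down (two branches):
                particle = (ket0 + ket1) / np.sqrt(2)

                # A second particle starts out definite; a CNOT-style interaction then
                # copies the first particle's branch into it:
                joint = np.kron(particle, ket0)
                cnot = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0],
                                 [0, 0, 0, 1],
                                 [0, 0, 1, 0]], dtype=float)
                print(cnot @ joint)  # [0.707 0 0 0.707]: superposition of |up,up> and |down,down>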

            On why a spin measurement doesn’t collapse the wave function, all I can do again is urge you to take off the lens of the collapse postulate. Under MWI, the other spin measurement outcome is still there. We just don’t have access to it in whatever branch we’re in, and the version of us in the other branch no longer has access to the outcome we see.

          8. Now I’m wishing I’d never mentioned the MWI…

            “It’s what entanglement and quantum computing are all about.”

            In the context of the MWI, “branching” has specific meaning. As we have discussed before, in the Sean Carroll beam-splitter, where does the branch occur? When (supposedly) both detectors fire? When the photon encounters the mirror? When it leaves the laser? When the experimenter decides to perform the experiment? When the universe began?

            “When a particle is in a superposition of spin up and spin down, that superposition already has two branches.”

            No, it’s just in a superposition. Remember that — until some measurement is made — spin is a superposition of all possible measurements that could be made on all possible axes and angles on those axes. If there’s “branching” it’s a continuum of an infinite number of possible measurement outcomes.

            All that seems certain is that a measurement with definite outcomes does result in a branch. What seemed a unified description of reality to that point now diverges.

            “A measurement event is simply dramatically increasing the number of particles involved, which typically leads to decoherence.”

            Agreed. What I’m suggesting is that decoherence also leads to a single outcome (and no MWI branching). The detector is invariably much larger than the quantum system being measured, so, essentially, its wave-function “wins” — and the detector is constantly decohered on extremely small time scales, so the superposition of the quantum system is instantly lost into that decohered system.

            Importantly, the time scale of decoherence is tens of orders of magnitude faster than the speed of light, so there’s no way for any detector to reach a superposed state. The point of interaction might briefly do so, but it’s quickly damped out by the massive decoherence of the detector.

            “But decoherence only means the branches can no longer interfere with each other.”

            That, BTW, is some magical hand-waving and has become my #1 objection to the MWI. How does decoherence explain how matter can coincide invisibly? Supposedly there are infinite versions of ourselves overlapping and that’s fine and we can’t see them because of decoherence?

            Until I see some math explaining that I can’t see it as anything but a magic spell.

            “On why a spin measurement doesn’t collapse the wave function, all I can do again is urge you to take off the lens of the collapse postulate. Under MWI, the other spin measurement outcome is still there.”

            We’ve had this discussion recently. Yes, absolutely, Alex measures the vertical axis and now there are two branches of Alex, one with a spin-up and one with a spin-down. Both of them have a wave-function that is now in a known eigenstate. Both of them have a wave-function that has “collapsed” to that known eigenstate. Both have a wave-function in superposition of states regarding the other possible axes of measurement, although measurements close to the vertical axis will return correlated results (if Alex measures close to the vertical axis the odds are high of getting the same result as measuring the vertical axis).

            But unless you’re positing some third branch, both branches of Alex have a wave-function that suddenly changed as the result of the measurement. There’s no way to avoid that.

          9. It does seem like we keep having the same conversation.

            I think branching happens in the evolution of the wave function. In discussions of MWI, it is true that often “branching” is used to refer to what happens with the measurement process and decoherence, but it’s important to understand that the branches that come out of decoherence are continuations of the “possibilities” in the wave function that existed before. It’s just a continuation of the wave function, albeit with the relevant “possibilities” now out of phase with each other. (I quoted “possibilities” because it seems like collapse terminology, but maybe it’ll help connect the dots here.)

            “The detector is invariably much larger than the quantum system being measured, so, essentially, its wave-function “wins” — and the detector is constantly decohered on extremely small time scales, so the superposition of the quantum system is instantly lost into that decohered system.”

            Is this your own theory or do you have a reference? Why doesn’t the collection of all the other qubits in a quantum computer “win” when the initial entanglements begin early in the quantum circuit?

            Decoherence is a physical process that obeys all the laws of physics, including not happening faster than light. That’s not to say that decoherence doesn’t happen very fast under normal conditions, but there’s nothing superluminal about it.

            You seem to increasingly employ the phrase “hand waving” with any concept you just don’t like. In any case, if you read any authoritative description of decoherence (including Jim Baggott’s in his new book), it will explain that decoherence doesn’t solve the measurement problem. Decoherence offers no explanation for why all the outcomes save one cease to exist, if in fact they do. At best, it explains the appearance of the collapse.

            “Until I see some math explaining that I can’t see it as anything but a magic spell.”

            The Wikipedia and SEP articles on quantum decoherence appear to have explanations in terms of Dirac notation. And both have citations to papers I’m sure get much more deeply into the math. It’s worth noting that decoherence, in and of itself, is interpretation agnostic.

          10. “I think branching happens in the evolution of the wave function.”

            Yes. The question I’m asking is where, when, and under what circumstances.

            “It’s just a continuation of the wave function, albeit with the relevant ‘possibilities’ now out of phase with each other.”

            Even granting that description, it’s still the case that there is a temporal branch point in a spin measurement, and thus a before and after. In any branch, the wave-function suddenly changes at that point. Even from a global perspective, there’s a sudden change that eliminates some possibilities.

            “Is this your own theory or do you have a reference?”

            Mine, based on a presumption of physicality, a denial of what I see as extreme solutions, and the shortest distance I can find, within what we already know, to answering some of the unanswered questions.

            “Why doesn’t the collection of all the other qubits in a quantum computer ‘win’ when the initial entanglements begin early in the quantum circuit?”

            This question seems to misunderstand what I’m saying. In fact, QC engineers go to great lengths to prevent their qubits from decohering into the environment, which is what I’m talking about. One of the salient points of my thesis is that quantum states are extremely fragile.

            “Decoherence is a physical process that obeys all the laws of physics, including not happening faster than light.”

            I can’t find a source to support the numbers, but I’ve read them somewhere recently. I’ll try to track it down.

            One consideration: decoherence happens at the particle level. Electrons can decohere into the other particles of the atom, and atoms can decohere into nearby atoms, extremely quickly even when limited to light speed.

            Meanwhile, whatever signal the detector processes has to cross billions of atoms. Even if the signal and decoherence moved at the same rate, the signal would be decohered before it got more than a dozen atoms or so (is what I’m suggesting).

            “You seem to increasingly employ the phrase ‘hand waving’ with any concept you just don’t like.”

            It has nothing to do with liking it; I make the same accusation against hand-waving on points I agree with. I’m about concrete specifics, real examples, and the math.

            “In any case, if you read any authoritative description of decoherence (including Jim Baggott’s in his new book), it will explain that decoherence doesn’t solve the measurement problem.”

            How many times have I acknowledged and agreed with that point? Every time.

            “Decoherence offers no explanation for why all the outcomes save one cease to exist, if in fact they do. At best, it explains the appearance of the collapse.”

            Agree in both cases. The MWI does eliminate the apparent randomness of quantum interactions. Why does the photon interact with that particular electron? That’s a huge question for the CI.

            “The Wikipedia and SEP articles on quantum decoherence appear to have explanations in terms of Dirac notation.”

            I’ve seen the math. The whole topic is one of on-going research. But I’m not saying it explains measurement. I am saying decoherence, as generally currently understood, I think explains why detectors, let alone cats or humans, don’t ever go into superposition.

            Everett was right, but only on the small quantum scale.

          11. I’ve often said Everettian physics could be right on the microscopic scale, but that something might block the larger implications. The question is what that might be. The difficulty I have with any strong statements here is that the only reason we currently seem to have for considering a blocker is distaste for those broader implications. That makes it feel a bit like the Tychonic system.

            Myself, for a scientific assessment, I think the untestable predictions are irrelevant. What matters are the assumptions of the theory, its logical coherence, and the predictions that can be tested. It seems like a lot of physicists take this strategy, and just stay agnostic on the broader implications.

          12. “The question is what that might be.”

            I think that answer is decoherence.

            Everett, in his paper, offers reasons why the MWI might not be necessary, and one of them is that there is some magic N that’s the quantum-macro threshold, and this seems not to be the case (no hint of this N has been spotted so far). Thus he rejects this alternative.

            But what if N is much more complicated and dependent on conditions? That would make it much harder to determine, and researchers are, as we’ve all mentioned many times, constantly pressing on that boundary.

            As I’ve said before, I’m impressed by the fragility of the quantum state, and I think that highlights that there is a boundary. I also think it’s a pointer to decoherence being the root cause of the divide. We just don’t quite yet know the details of how coherent phase information is distributed and merged.

  4. One way to think about this is somewhat aligned with DM’s analogy of hashing. A BH only has eleven degrees of freedom — eleven numbers measurements can provide — so it’s hard to see how something as complex as book text could be encoded in that.

    It’s those few degrees of freedom that make a BH’s entropy so huge. They could be made from anti-matter or hydrogen or chocolate ice cream, but they only reveal three fundamental properties (eleven observational ones).

    BHs use a hashing algorithm that’s almost entirely collisions. 🙂
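    For the curious, a rough sketch of just how huge, using the Bekenstein-Hawking entropy formula, S = k * A * c^3 / (4 * G * hbar), for a solar-mass hole:

        import math

        HBAR, C, G = 1.055e-34, 2.998e8, 6.674e-11
        M_SUN = 1.989e30  # kg

        def entropy_in_units_of_k(mass_kg):
            r_s = 2 * G * mass_kg / C**2          # Schwarzschild radius, m
            area = 4 * math.pi * r_s**2           # horizon area, m^2
            return area * C**3 / (4 * G * HBAR)   # S / k_B

        print(f"{entropy_in_units_of_k(M_SUN):.1e}")  # ~1e77: vastly more than the star it came from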

    1. Exactly. Another way of putting it is that if you measure these 11 degrees of freedom after throwing a book into the BH, you now have 11 equations and 1 million unknowns (assuming a 300-odd page book with 3000ish characters per page, and you want to know all the characters, in order). … But of course, this is without measuring the Hawking radiation.

  5. I don’t know Mike. I like empirical science, but this speculative crap like the MWI, the Block Universe, the Black Hole Information Paradox, Hawking Radiation Theory, etc. isn’t science, it’s conjecture. This type of misleading hype belongs in publications like Mad Magazine not leading scientific journals. It seems like we’ve reached the limits of what can be verified empirically, so it becomes the wild west of popular theories that people can argue, fight and squabble about.

    Hell, we might as well include the nonsense that modern idealists the likes of Bernardo Kastrup are conjuring up. These people are upset because they are excluded from the boys club of speculative science, and I hate to use the term science, but they have a valid bitch nevertheless because it’s all the same crap. I just can’t hang with this sort of nonsense. Have fun kids, because I’m not interested…….

    Peace

    1. You’re painting with an awfully wide brush there.

      The speculative crap of Kastrup is metaphysical philosopher speculative crap and probably a good argument that, at least some, philosophers don’t always add much value to the world. It’s essentially a form of navel-gazing, and he’s welcome to it, but I agree with not hanging with it. Thing is, until it can be ruled out, either by logic or experiment, there’s always a chance it’s right, and it shouldn’t be entirely dismissed.

      The speculative crap of the MWI and the BUH (or the MUH) is metaphysical scientific crap that isn’t a lot better than metaphysical philosopher speculative crap, but is at least well-grounded in physical theory. This kind of speculative crap has a greater chance of being truth than the first kind. (I personally think these are “fairy tale physics” but others find them acceptable possibilities.)

      The speculative crap of Black Holes is well-grounded scientific speculative crap that tries to follow physics as understood at this point. (All science is contingent. Science evolves despite scientists.) Importantly, it provides testable predictions — this speculative crap will someday be verified or falsified. The previous metaphysical views may never be. And of the three, this group stands the best chance of being truth because of its physical grounding.

      So don’t throw the baby out with the bathwater!

      1. Well spoken Wyrd. I’m not a dualist regarding our own universe. I’m convinced that every system within that universe is of a physical and/or material nature (whatever that means), including the system we know as “mind”. Now having said that, I do not buy into materialism as a metaphysical position nor do I buy into idealism as a metaphysical position; both have intrinsic paradoxes and are logically incoherent.

        As far as black holes are concerned, I think that our physical universe “emerges” from the singularity of black holes. My speculation is that black holes are inner space, the very “cause” of what we refer to as gravity. Think about it: the force of attraction we refer to as gravity is directly correlated to mass, right? But what is the one thing that exists at the center of mass? Inner space, right? And that inner space is exponential in relationship to any given mass. The larger the mass, the greater the inner space, the stronger the force of gravity…. aka black holes. This postulate eliminates the contradiction of our current model of gravity being a self-caused cause, meaning that mass is the cause of gravity and that mass itself is subject to its own cause.

        Now, as far as the structure of our physical universe is concerned, inner space would be responsible for attraction and outer space would be responsible for the opposing force of repulsion, two opposing forces absolutely essential for structure and form, identical in nature to the opposing forces of electromagnetism, also necessary for structure and form. The discriminating line of demarcation dividing the attractive force of inner space and the repulsive force of outer space would be the event horizon of black holes.

        There you have it, my brief and spectacular moment as a theoretical physicist.

        Peace

          1. I stand corrected: The previous post was my brief and spectacular moment as a skilled metaphysician and philosopher who just happens to believe that, when it comes to the true nature of reality, explanatory power trumps predictive power every time.

            Is there a mathematical equation to express agreeing to disagree? If not, then I guess agreeing to disagree is a bunch of hand waving also. You kids have fun without me.

            Peace

    2. Lee,
      I did a post on this earlier this year: https://selfawarepatterns.com/2020/05/23/the-spectrum-of-science-to-fantasy/
      The TL;DR is that it’s not productive to hold a binary view here, as something being either settled science or fantasy. It’s more of a spectrum.
      1. Reliable models
      2. Rigorous exploration
      3. Loose speculation
      4. Fantasy

      General relativity and raw quantum mechanics fall into 1. Paranormal phenomena, faith healing, etc, fall into 4. A lot of scientific theories that are currently in 1 began in 2. The question is what belongs in 2, which has a chance of making it into 1, and what belongs in 3, which ranges from a pretty low probability to being untestable in principle.

      The difference, I think, is in how many assumptions and the quality of the assumptions required for a proposition, how well it reconciles with theories in 1, and if it makes predictions that are testable, at least in principle. (The “in principle” point is important. Copernicus had no way in practice to test his heliocentric theory when he published it in 1543, but it was still testable in principle, and became so in practice 76 years later with the invention of the telescope.) Note that each assumption is an opportunity to be wrong, and major assumptions are major opportunities to be wrong. It’s what makes 3 unlikely to ever make it into 1.

      By those standards, much of what is being discussed here is in 2. But a lot of the other stuff you mention is in 3. Granted, this requires judgment, and so people will disagree.

    1. Thanks! Interesting article. The view seems to be that the information comes out in the Hawking radiation. I think the Quanta article is basically physicists trying to demonstrate that. But there seems to be a lot of disagreement about how close they really are.
