Many-worlds, probabilities, and world stacks

In this video, Matt O’Dowd tackles the issue of probabilities in the many-worlds interpretation of quantum mechanics.

A quick reminder: the central mystery of quantum mechanics is that quantum particles move like waves of possible outcomes that interfere with each other until a measurement happens, at which point they appear to collapse to one localized outcome: the famous wave-particle duality.

This is the measurement problem, which interpretations of quantum mechanics try to solve. One of the oldest and most popular, Copenhagen, asserts that this duality is fundamental and that further investigation is misguided. Pilot-wave theory posits that both a particle and a wave exist the entire time.

Many-worlds takes the structure of quantum theory as complete: quantum physics applies to us and the environment as much as to particles, resulting in a universe that is itself a wave of all possible outcomes. We only see one outcome of a measurement because we’re the version that sees that outcome, with a version of us seeing each possible outcome.

A longstanding objection to many-worlds is how to talk about probabilities. Probabilities seem reasonable in an interpretation where there’s only one outcome. But if every outcome happens, in what sense is it meaningful to talk about the probability of any one outcome? Aren’t they all 100% probable?

This objection has never bothered me, mostly because I see probabilities as relative to an observer and their limited knowledge. That’s easier to see when looking at something like the weather forecast, where probabilities more obviously reflect our limited knowledge.

As O’Dowd explains, we can see the probabilities in many-worlds as self-locating uncertainty, a view Sean Carroll champions. In the process of explaining this, O’Dowd discusses the nature of worlds in the theory, something I’ve tried to tackle before (here and here) but mostly failed at. Maybe his card stack metaphor works better for most people.

The video runs about 19 minutes.

PBS Space Time: Can The Measurement Problem Be Solved?

(Here’s a link to the video in case the embed doesn’t display.)

In the end, this is a devilishly difficult concept to explain, which makes the video tough to follow. It might help if you have time to watch it multiple times.

It’s worth noting that there are other proposed solutions to the probability problem. But I think this one makes the most sense, although the others aren’t necessarily wrong. It comes down to your philosophy of probability. The claims of being able to derive the Born Rule in many-worlds are controversial. But at worst the theory has to simply accept the rule as a postulate, similar to the other interpretations.

What do you think? Did O’Dowd’s approach help? If not, any thoughts on where it fumbles? Or about where the explanation itself might be wrong?

73 thoughts on “Many-worlds, probabilities, and world stacks”

  1. The video goes by pretty fast…I had to pause it to read the descriptions because I can’t listen to someone talk and read something else at the same time.

    I recently watched a lecture series with Sean Carroll on the Great Courses (now called Wondrium) called “The Many Hidden Worlds of Quantum Mechanics”, which was much slower and easier to follow (the lectures are supposed to be for total beginners). Even so, I had to quit because this stuff just doesn’t make a bit of sense to me. I’m afraid fantastic graphics and clever examples don’t clarify anything for me. I suspect if you don’t get the math (and I’m still counting on fingers) you won’t get it at all.

    1. Yeah, one of the problems with trying to understand this stuff through videos is that a difficult point just rushes by. I prefer reading for most of this stuff. Carroll’s book, Something Deeply Hidden, is pretty good along these lines. Although I struggled with the later sections, and always meant to swing back around to it.

      Having a basic understanding of the math helps, but there seem to be plenty of people who follow the math who still don’t get the concepts. So don’t feel bad if it hasn’t clicked yet. Even many experts struggle with it.

  2. Um… I am stunned.

    Naively, I always assumed that that was how probabilities worked in the many worlds interpretation, envisaging different branches as having different “thickness”, being composed of possible worlds which are *not* being split apart by a particular outcome of a measurement.
    Not that it ever occurred to me to connect that to the Born Rule.

    That’s essentially O’Dowd’s stack of cards taken to its ultimate limit, which may or may not be continuous. And yes, them cards make absolute sense as an illustration of this view: they represent potential worlds which could have gone one way or another. Take it to the potentially infinite limit (or, as O’Dowd points out, just to sufficient granularity) and Bob’s your uncle!

    But I also saw (and still see) a possible difficulty, which I am not equipped to even start addressing: what is the interplay of this simple picture with quantum uncertainties? Just how far can we subdivide a given outcome into a stack of, in some sense primitive, worlds? Are we saying that quantum probabilities are themselves quantised? Or are they at some level just “smeared”? As somebody trained (loooong ago!) on analysis and statistics, I find it easy to think of it all in the infinite limit, but does that actually make sense?

    However, to answer your question: yes, O’Dowd makes perfect sense to me; and yes, I very much like his derivation of the Rule.

    1. “Stunned” resonates with some of the comments under the video on Youtube.

      On how far we can subdivide a given outcome, from everything I’ve read, no one really knows. Everett reportedly speculated that it could be infinite. David Deutsch just accepts that and moves on. Sean Carroll does a maximum entropy calculation for the observable universe and comes up with something like e^10^122, so that many distinct base worlds, with any “world” we talk about being some “stack” of those distinct ones.
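
      As a hedged aside, here is a back-of-the-envelope Python sketch of where a number like 10^122 can come from. All the figures are my own rough assumptions (a cosmological event horizon of roughly 16 billion light years, Bekenstein-Hawking entropy S = A/4 in Planck units), not taken from the comment or from Carroll’s actual calculation:

```python
import math

# Rough horizon-entropy estimate, Bekenstein-Hawking style (S = A / 4 in
# Planck units). All figures below are my own ballpark assumptions.
LY_M = 9.461e15            # metres per light year
r = 16e9 * LY_M            # assumed cosmological event horizon, ~16 Gly
l_p = 1.616e-35            # Planck length in metres

area = 4 * math.pi * r ** 2        # horizon area in m^2
S = area / (4 * l_p ** 2)          # entropy in Planck units

print(f"S ~ 10^{math.log10(S):.0f}")   # on the order of 10^122
# The number of distinct states is then of order e^S = e^(10^122),
# the kind of figure quoted above for the count of "base worlds".
```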

      (I once attempted a back of the napkin type calculation based on Planck scales and came up with a much higher number, but I think I overlooked that at some information density you get a black hole, which puts a limit on how concentrated things can be in any spacetime region.)

      The thing is, if it’s finite, then under many-worlds, there only need to be enough to get us to the heat death of the universe. Actually, heat death and running out of room to further subdivide might be one and the same thing, if you think about it. When you’ve reached that point, you’re at maximum entropy in both views.

      But to O’Dowd’s point, any wave function we actually work with will be coarse grained to some degree. Just how coarse grained or granular it is depends on what someone is trying to measure or accomplish.

        1. While I find O’Dowd’s derivation of the Born Rule quite comprehensible, it is rather awkward, in a way which probably causes confusion. He jumps between the trivial case of applying Pythagoras to two equally likely outcomes and the discussion of splitting sets of cards, without clearly explaining the join between them.

        Fortunately, there is a much cleaner derivation, which is in fact so trivial that I am sure it must have already been articulated by somebody. Let me sketch it, for brevity glossing over some fine but, I think, unproblematic details.

        Because it is so simple, I’ll go straight to the general case of a measurement in which one possible outcome has the probability of K/N (assuming that all outcome probabilities are rational fractions and have been normalised to the least common denominator N). All we need is the Euclidean metric, a.k.a. Pythagoras generalised to multiple dimensions.

        Take the current worldline as consisting of N equiprobable, potentially distinct, but as yet identical worldlines (or worldline bundles), which after the measurement will go with one of the possible outcomes in the appropriate proportions. (Strictly speaking I should be invoking the Law of Large Numbers here, but I’ll skip that for simplicity.) Visualise their amplitudes as lying on N mutually orthogonal axes. (O’Dowd’s 2D diagram is the simplest possible case of N = 2.)

        Because those N worlds are indistinguishable and each can go with any of the possible outcomes, the unit vector representing the whole of the measurement must have the same amplitude along each of the axes (O’Dowd’s principle of indifference). Trivially, it will be sqrt(1/N).

        After the measurement, K of the worldlines will go with our probability K/N outcome. They will form a subspace of dimensionality K. What is the corresponding measurement amplitude? It is the length of the unit vector of the whole measurement projected into this subspace. Its individual coordinates are still sqrt(1/N) and there are K of them, so again trivially, the combined amplitude is sqrt(K * sqrt(1/N)^2) = sqrt(K/N).

        So if the probability of an outcome is K/N, it corresponds to the amplitude of sqrt(K/N) and vice versa. Born rules OK! 🙂

        For an easily visualisable case, picture O’Dowd’s example of two outcomes with probabilities of 1/3 and 2/3 respectively. This gives a 3D space, so label the axes conventionally as x, y and z. For symmetry, the unit vector of the measurement must be the diagonal of the cube with one corner in the origin and three sides lying on the three axes. So clearly, its end point will have all three coordinates as sqrt(1/3). Now project this vector into the 2D x,y plane. There are two coords, each of sqrt(1/3) so the projected unit vector is now sqrt(2/3). The three dimensions here correspond to the three sets of cards in the video and this visualisation demonstrates how O’Dowd gets away with using only equiprobable outcomes: in each subspace the outcomes are manifestly equiprobable — it is the size of the projected unit vector which gives non-equal probabilities.
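
        If it helps, the same projection argument can be checked numerically. This is my own sketch, not code from the video or the comment above; the function name is made up for illustration:

```python
import math

# Geometric Born-rule sketch: N equiprobable worldlines, one per orthogonal
# axis, each carrying coordinate sqrt(1/N); an outcome of probability K/N
# owns a K-dimensional subspace, and its amplitude is the projection length.

def outcome_amplitude(k: int, n: int) -> float:
    """Length of the unit diagonal vector projected onto k of n axes."""
    coord = math.sqrt(1 / n)           # equal coordinate on every axis
    return math.sqrt(k * coord ** 2)   # Pythagoras within the k-subspace

# The squared amplitude recovers the probability K/N, i.e. the Born rule:
for k, n in [(1, 2), (1, 3), (2, 3), (7, 10)]:
    assert abs(outcome_amplitude(k, n) ** 2 - k / n) < 1e-12

# The cube example: outcomes of probability 1/3 and 2/3
print(outcome_amplitude(1, 3))  # sqrt(1/3), roughly 0.577
print(outcome_amplitude(2, 3))  # sqrt(2/3), roughly 0.816
```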

        One final point… While this derivation arises naturally in MW, as far as I can see, O’Dowd is wrong in claiming that it needs something extra in other interpretations. The same logic works — one just substitutes actual worlds of MW with notional possibilities.

        1. I might have to reread this a few times to make sure I follow it, but in general it makes sense to me.

          However, I think O’Dowd’s point about the other interpretations still holds. That is, you can do the mathematical derivation, but it’s not clear what it means. Of course, in anti-real interpretations it’s not clear what any of the mathematics mean, except that they can be used for predictions.

          And the other interpretations do still need something extra to explain why all but one of the states disappear. As John von Neumann and Erwin Schrödinger pointed out long ago, there’s nothing in the mathematical structure predicting it. Decoherence, which ultimately is predicted by the mathematics, helps, but only in explaining why the interference effects disappear, not in what happens to the unobserved outcomes.

          1. > However, I think O’Dowd’s point about the other interpretations still holds.
            > That is, you can do the mathematical derivation, but it’s not clear what it
            > means.

            It seems to me quite clear what it means. It simply takes for granted the frequentist understanding of probabilities as representing in principle possible outcomes and uses those in place of actual worldlines. That is in principle not in any way different from any other use of frequentist probabilities. What does it mean for a fair coin to have a 1/2 probability for heads and tails? It means that if you toss it, there are two possible outcomes of which only one will be realised. Should we declare that consequently non-MW interpretations are not entitled to use frequentist probabilities? Hardly.

            Yes, the disappearance of the other possibilities is *the* measurement problem for non-MW views to address, but they are still quite entitled to juggle possible outcomes in place of actualised outcomes.

            BTW, the frequentist take on probabilities is anyway philosophically as dodgy as any other, with the exception of Kolmogorov’s purely axiomatic treatment, which simply refuses to ask “what does it mean?”.

          2. On further reflection, I note that this derivation of the Born Rule (both as presented by O’Dowd and in my reformulation) relies on breaking Leibniz’s principle of identity of indiscernibles. As far as I know, such breakage is not featured in core QM, and if that is so, one cannot claim that the Rule just falls out of MW with no additional assumptions.

            Specifically, the derivation assumes that worldlines can have separate identities while being identical in all respects. In my version this is required to have K separate, orthogonal axes for an outcome of probability K/N. Equivalently, in O’Dowd’s video it is implicit in his handling of distinct sets of cards with identical composition.

            Amusingly, non-MW interpretations can, I think, shrug this off, because they deal with notional, non-realised possibilities and are free to envisage these as identically labelled sides of a multi-dimensional die. OTOH MW deals with actual worldlines, so cannot dismiss the problem quite so simply.

          3. Interesting objection. And I have to admit that I just read about the principle of identity of indiscernibles for the first time in my life. (At least that I can recall.) But it seems like there’s a major assumption in it, which if accurate means there’s no issue for the derivation. The assumption is that the numbers are a complete and exhaustive description of the entities in question.

            But O’Dowd discusses the fact that any wave function anyone ever writes or works with is going to be coarse grained to some degree. The identicals that he includes in a stack are identical in terms of the state described, but they’d still have different locations in configuration space, differences along one or more of the 3N dimensions of any wave function.

            If the universal wave function is real and we ever could actually write the full complete and exhaustive notation, it would contain huge numbers of states that are identical except for being in slightly different spots along those dimensions. We describe the ongoing entropic differentiations in those states as “worlds splitting”.

            This is why it’s valid to think of this as all the worlds already existing, with groups of them diverging from each other on a measurement rather than worlds splitting. One of the reasons I kind of like this description is that it gets at this configuration space topology, which helps in understanding entanglements spanning large spacetime distances.

            But I may well be missing something here.

          4. I don’t think any of your points actually deal with the problem. But before I get to that, I should make a couple of things clear.

            Firstly, I am not arguing against O’Dowd’s derivation of Born Rule. I actually very much like it, particularly in its purely geometric form that I’d outlined. The price of nixing Leibniz’s PII at the quantum level does not bother me unduly, doubly so that PII has already been challenged (by French) in a different quantum context. I am simply noting that as far as I can see, contrary to O’D’s claim the derivation does need an assumption which is not present in “vanilla” QM.

            Secondly, I am agnostic as to the reality of Schrodinger’s wave. While I do not hold with anti-realists, I think it is prudent to acknowledge their point (already made under this topic, I seem to remember) that Schrodinger’s formulation is known to be strictly equivalent to Heisenberg’s Matrix Mechanics, which features no such wave. While it is true that Schrodinger’s version is the one that actually gets used, the reason for this is purely pragmatic — it is much easier to handle. Hence arguments about the wave being or not being physically real strike me, as things stand, as mere hand waving.

            Anyway, back to your comments.

            While it is obviously true that in practice physicists will of necessity always deal with coarse grained bundles of worldlines, the conceptual issue remains. Either such bundles can be arbitrarily “thin” all the way down to a single world line (or to the infinity limit), or not. If yes, the objection lacks teeth. If not, then vanilla QM is thereby claimed to be a mere approximation rather than the complete theory — and that is an assumption not required by other interpretations.

            As to the notion of identical worldlines being differentiated by their position in the phase space… The full description of the quantum state of the system is what the wave describes. So whence this position which is supposed to differentiate identical worldlines? As far as I can see, something is being posited here which is not in any way required by other interpretations.

            Me, I’d rather break Leibniz’s PII! 🙂

          5. Sounds like I misunderstood your previous post. I actually considered just objecting to using a philosophical check for a scientific theory, but its thesis (at a very initial glance) actually seemed plausible, and it was more fun to think through how much of an issue it might be.

            I’m missing why the coarse grained point lacks teeth, or implies an incomplete theory. Consider a regular terrain map. We have a theory about how the map translates to actual terrain. If the map is at a certain level of resolution, we don’t think of our map theory as incomplete, just that it isn’t focused in beyond that level. Any map that doesn’t contain every grain, atom, or sub-atomic particle of the referenced terrain is going to be coarse grained, but if we want a closer map, the conceptual framework we use for the closer focus is the same as the more high level version. I see the wave function (or other mathematical frameworks) operating in much the same way.

            I’m a realist toward the wave primarily because of the observed interference effects. I’ve never heard a compelling response on this from anti-realists. Although I remain agnostic on the details of that reality, on what is substance vs what is relations between entities. But I’m also a realist toward the Heisenberg picture, because it’s just describing different aspects of the same ontology, and working with it in a different manner. (I understand the waves to still be there, just not obvious.) Schrodinger is reported to be easier to work with for most purposes, but Heisenberg is sometimes useful for conceptual exploration. David Deutsch actually uses the Heisenberg picture to argue that quantum theory is fully local; but the separability isn’t obvious with Schrodinger.

            Hey, no objection from me at breaking Leibniz’s PII. 🙂

          6. Well, I in turn misunderstood your post. I thought you were trying to say that in fact the additional breakage of PII was not in fact required by O’Dowd.

            Our exchanges threaten to split into too many sub-topics, so forgive me if I address the “matters arising” only very briefly. (And as always, please forgive any typos — my proof-reading is notoriously lousy.)

            The “coarse bundle” argument has no teeth because it evades the central in-principle issue: does an individual worldline split or is it always the matter of a bundle splitting into sub-bundles? Surely, QM asserts that the former must be the case. To stick with only bundles splitting is akin to (or maybe simply *is*) talking about mixtures of states rather than superpositions of states.

            The map analogy is misleading. A map has no predictive value — it merely describes the explored parts of the world, leaving blank areas elsewhere (perhaps annotated with “Here there be Dragons” :-)). OTOH a theory is a predictive apparatus, which can (and should!) be pushed to unexplored and even unexplorable limits. If the theory fails to make a sensible prediction for extreme conditions, then it effectively proclaims itself to be incomplete — as GR is known to be incomplete because it leads to singularities. Unlike GR, QM (and MW in particular) claims to be the complete picture, but if it really were incapable of talking of anything but bundles, it would be ipso facto incomplete.

            I should also add, BTW, that I am glad you didn’t complain about a philosophical objection to a scientific argument. Science is not and should not be exempt from philosophical criticism.

            And BTW, BTW… Did you have a chance to go through that geometric derivation of the Born Rule? I find it stunning in its simplicity. It feels inevitable.

          7. I’m actually still not sure breaking PII is required, but I have no issue with it if it is. (See below on philosophical tests.)

            Hey, no worries on typos or anything else. It’s all friendly discussion.

            “does an individual worldline split or is it always the matter of a bundle splitting into sub-bundles?”

            My answer is yes, in the sense that I think they are the same thing. Terms like “worldline” or “bundle” are just labels we use to refer to portions of configuration space (and phase space, and Hilbert space overall). All of it is just different ways of discussing the idea of the universal wave function.

            I actually think a map does make predictions (or enable them if you prefer). A map of my state (Louisiana) makes predictions about what I’d have to do to reach New Orleans, about what I’d expect to encounter on my way there. And those predictions can be wrong if a road is closed (or in the case of hurricane Katrina, an entire region is in chaos). Of course, most roadmaps don’t predict which areas are swamps vs solid ground. For that I need a map which focuses on that aspect.

            On philosophical tests, my take is that empirical data always trumps philosophical or other theoretical principles. Of course, when we’re discussing theory, the lines are blurry. But I’ll take a scientific theory that works, even if it violates some philosophical principle, counting on a future reconciliation with, or revision to, the principle. Put another way, a philosophical principle is itself just another theory, one that might need to be revised, just like any other.

            On the geometric derivation, I’ll try to take another run through it. It made sense when I read it the first time, but my grasp wasn’t solid.

          8. Sorry, I got completely distracted by fire-fighting email problems caused by misfiring spam filters of one of UK’s ISPs. 😦 So… where were we…

            1. I should have been clearer about the distinction between a map and a model (which is what a scientific theory is). A map says nothing about its blank spots — areas not visited by a cartographer. OTOH one of the main success criteria of a theory is that it makes predictions about areas not previously covered by experimental exploration, to be subsequently verified (entanglement being the obvious example for QM). If a theory does have actual blank spots (areas where it cannot make predictions or makes predictions which make no sense), then it is by definition incomplete. Hence my assertion that if MW is understood as dealing only with bundles and not individual worldlines, it must be understood as being incomplete.

            2. But you accept that individual worldlines do split. In which case the argument about coarseness of actual experimentation (and thus of bundling of world-lines) is neither here nor there — it has no teeth.

            3. So suppose we have two possible outcomes with probabilities of 1/3 and 2/3. The question now is: does the worldline split into 2, or into a bundle of some (possibly infinite) multiple of 3? The latter option requires ditching of PII, because all worldlines in the 2/3 bundle are fully described by Schrodinger’s wave as being identical — there is no wiggle room for some difference in position in a phase space, unless one posits a part of reality not covered by the wave, implicitly declaring QM to be incomplete.

            But if the split is into two worldlines of unequal probabilities, it becomes unclear how O’Dowd’s derivation can be justified. In his own version it features three distinct card groups in which two are identical. In my version those groups become three orthogonal axes. These groups (and axes) cannot be physical if only two worldlines actually result. There is no *physical* motivation for a three way split. Non-MW interpretation shrug this off — those probabilities are only describing unrealised possibilities. There is no such escape route in MW.

            Thus my claim that in MW the derivation breaks PII. Q.E.D. 🙂

            On the subject of reality of the wave, my reaction is opposite to yours. Since QM has two formulations (Schrodinger’s and Heisenberg’s) known to be equivalent, and one of these does not have a wave amongst its ontological commitments, then the theory as such cannot be claimed to have such a commitment. This is where Poincare’s conventionalism comes in. If two verifiably equivalent scientific theories have different ontological commitments, then the choice of ontology is a matter of convention rather than of fact.

            Finally, the primacy of data. Sure. But bear in mind the Duhem-Quine thesis: all observation is theory-laden.

          9. No worries on the fire. These conversations happen whenever we have time. Hope the fires are out, or at least reduced.

            I actually see a map as a model. But I wonder if the distinction you might be reaching for here is between the model and the procedure, the algorithm used to produce it, in other words its interface to the reality. I think that distinction is important, because the issue we’re talking about here isn’t with that procedure / algorithm / reality interface, but with any specific map someone may make. In other words, we’re talking about individual wave functions, not the theory of the wave function.

            You’ve probably seen that picture from Apollo 11 of the earth and lunar lander, with the quip that it includes every human being except Michael Collins (the astronaut who took the picture).

            But the actual picture includes no human beings, in the sense that no pixel has a human in it. Of course, the photons that bounced off of humans would be in the light available at that distance. So in principle, someone could take a photo at a high enough resolution so that there would be actual pixels of humans in it. But equipment limitations aside, doing so would have had no practical value, at least not for a spacecraft in orbit of the moon in 1969.

            In the same manner, we could use quantum theory to construct a wave function to an arbitrarily high level of resolution. But for most purposes, it’s not worth it, and equipment limitations would prevent us from testing it anyway. Instead we just use coarse grained descriptions of states with assigned weights (probabilities) to each. But the theory would remain the same regardless of what level of resolution we chose to work at.

            (I’m not saying the theory can’t in principle break down at some more detailed level, but as far as I know, we don’t currently have any evidence for it.)

            On wave realism, the point I tried to make above is that they do not have different ontological commitments. When Schrodinger introduced his equation, he demonstrated that it reconciled with Heisenberg’s matrices. They aren’t alternate theories. They make the same predictions. They’re just different descriptions (with different emphases) of the same structures. It’s why David Deutsch, a wave realist, has no quibbles about using matrices to make his argument for full separable locality. And why scientists don’t exert any effort to experimentally adjudicate between them.

            Definitely all observation is theory laden. But theories can only be stretched and patched so far.

          10. 1. A model is necessarily an algorithm — even Penrose accepts that there are no known non-algorithmic processes. But an algorithm is not necessarily a model.

            2. The “in principle” stuff matters. E.g. there is no chance of us observing gravity well above Schwarzschild’s limit. Yet we accept that GR cannot be a full description of gravity because *in principle* it leads to infinite physical values. Same applies to any other scientific theory. If in principle it can give silly (or no) answers to some boundary conditions, which are not excluded by that same theory, then it must be at best incomplete.

            3. I do not understand what makes you say that Matrix Mechanics has Schrodinger’s wave amongst its ontological commitments. If Schrodinger’s solution were unknown and we only had MM, we certainly would not be debating the reality of the wave — simply because MM does not feature such a wave. So in what way is MM ontologically committed to it?

          11. 1. Ok, I’ll take that definitional point. In that case, think of it as the general model vs a specific instantiation of it. The instantiation is what we’re talking about being coarse grained. But the same general model can be used to produce higher resolution instantiations.

            2. Agreed on “in principle”. But I’m not arguing for a boundary condition.

            3. I’m not sure what else to say here. I laid out my reasons above. (I’d be grateful for any corrections.) I will grant that Heisenberg didn’t have a wave topology in mind when he developed the approach, and that he, Bohr, and others were generally hostile to it, which does make it easy for people to ignore it, despite the inconvenient empirical data.

          12. Mike, how can empirical data be inconvenient, when both versions of QM necessarily make exactly the same predictions? Since they do so, evidence cannot favour one over the other. And MM features waves only in wave-particle duality. It does not need any overarching wave to make exactly the same empirical predictions.

          13. The data isn’t inconvenient for either mathematical approach. Their equivalence was demonstrated early on, and the mathematical structures have been tested to several decimal places. They do make the same predictions, though my understanding is that, for most purposes, it takes a lot more work in matrix mechanics. The inconvenience is for those who want to use that difficulty to rationalize away the waves and their implications.

          14. One can just as well claim that the evidence is an inconvenience to those who argue for reality of the wave, because MM shows that the same correct predictions can be made without it.

            That’s just a mirror image of what you appear to be arguing and I know of no way of breaking that symmetry. As I see it, both arguments are mistaken, because they contaminate factual evidence with interpretational bias, which is what the Duhem-Quine thesis warns against.

            Hence I accept that we simply do not know whether quantum ontology is better described by Heisenberg or by Schrodinger (or neither). Such agnosticism does not, of course, entail an endorsement of anti-realism.

  3. I’m going to have to watch that again later. Years back, I listened to an audio lecture that left me feeling like I kind of understood relativity. I had to listen to that same lecture five or six more times before I could even attempt to explain relativity in my own words. I kind of feel the same about this video. So thanks for sharing that, and I will be watching it again. Probably a few more times.

    Liked by 1 person

    1. If you find yourself interested, there’s also value in getting info from multiple sources. I came to an understanding of what O’Dowd’s talking about a year or two ago, but only after I’d read several books and articles on the subject. No one explanation ever did it. Eventually it just “clicked”. (The history of posts on this blog about many-worlds kind of documents that journey.)

      It’s great when we finally overcome those conceptual hurdles, although it can be frustrating trying to explain our epiphany to anyone who hasn’t gone through the journey.

      Liked by 1 person

  4. Typo: you misspelled the Born identity here. It’s named for Max Born.

    I totally agree with you that probability reflects the limited information of a given observer (or community). Not saying that we should outright rule out the idea that there also could be objective probabilities in the external world itself, but specific evidence for that would be needed.

    The principle of indifference has been subject to a lot of criticism – I’ll say something about it tomorrow…

    Liked by 1 person

    1. Fixed. Thanks for catching it!

      Right. I think probabilities, like so many terms, actually refer to a range of related concepts. Since the mathematics are similar, we use the same label, but when we get to the underlying ontology, that practice can lead to confusion.

      I’ll be interested to see what you have to say about the principle of indifference. Not sure if I’d ever heard of it before this video.

      Like

Draw a unit circle of 1 inch radius on the floor, and construct a Cartesian coordinate system with (0, 0) at the center of the circle. Step back to y = -36 inches and throw a piece of spaghetti at the circle; extend the line if necessary so it crosses all the way across the circle. It forms a chord across the circle, the length of which can be in the range 0 to 2″. The point on the chord closest to the center of the circle can have an x-coordinate anywhere between -1″ and +1″. What is the probability that the chord’s length is between 0.9″ and 1″? What is the probability that the x-coordinate is between sqrt(3)/2 and sqrt(3)/2 + 0.1?

      Unfortunately, you get different answers depending on which variable you are “indifferent” about. This is my attempt to reconstruct the Bertrand paradox before I found it by googling (I accidentally constructed yet another version).

      I’ve never found this paradox very convincing. Bertrand’s original presentation seems to rely on the under-specification of the problem. Once you specify an actual physical mechanism, I suspect there will be a definite answer. Even though probability is first and foremost an epistemic concept, the Principle of Indifference is an idealization that, I suspect, only really works when we have a good mechanical understanding of a problem. For deriving Born from Everett, we are simply stipulating a physical understanding, namely Everett’s, so that’s a gimme.
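The different answers are easy to see numerically. Below is a minimal Monte Carlo sketch of the classic three-way version of the paradox (the target chord length, sample count, and seed are my own choices, not from the comment above): the probability that a random chord of a unit circle is longer than the side of the inscribed equilateral triangle comes out near 1/2, 1/4, or 1/3 depending on which variable you treat as uniform.

```python
import math
import random

def chord_len(d):
    # Chord length at perpendicular distance d from the center (unit radius).
    return 2 * math.sqrt(max(0.0, 1 - d * d))

N = 200_000
target = math.sqrt(3)  # side length of the inscribed equilateral triangle
random.seed(0)

# Method 1: perpendicular distance from center uniform on [0, 1].
p1 = sum(chord_len(random.random()) > target for _ in range(N)) / N

# Method 2: chord midpoint uniform over the disk (rejection sampling).
def rand_midpoint_dist():
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return math.hypot(x, y)

p2 = sum(chord_len(rand_midpoint_dist()) > target for _ in range(N)) / N

# Method 3: two endpoints uniform on the circle.
def rand_endpoint_len():
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

p3 = sum(rand_endpoint_len() > target for _ in range(N)) / N

print(p1, p2, p3)  # roughly 0.5, 0.25, 0.33
```

Same question, three defensible "indifference" assumptions, three answers, which is exactly the under-specification complaint above.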

      Liked by 1 person

5. Though I think there are many great reasons to oppose the many worlds interpretation of QM, I don’t yet grasp how one might reason that it’s false because probabilities thus shouldn’t make sense. Why would probabilities need to make sense to us in order for a given truth to be true? I’d rather not watch the video until I grasp what it’s supposed to be explaining. Otherwise I might accidentally decide that a valid argument does exist here even if I don’t quite grasp either the question or answer.

    Furthermore my understanding of this QM interpretation is that whenever anything happens (and so exists as a distinct world, as opposed to the “many others” where things are different), the event was 100% destined to occur. Therefore in a logical capacity this does seem sensible to me, even if I do consider this QM interpretation itself to be magical.

    I wonder if a fluctuation between epistemic and ontological probability is the source of the issue here? These are different ideas so I can see how one might inadvertently exchange them and so convince themselves that they’ve developed another argument against this QM interpretation.

    Liked by 1 person

    1. I’m probably not the best person to try to elaborate on the probability problem because, as I noted in the post, I don’t really see it myself. But the most convincing argument I’ve seen for it is that it disassociates many-worlds from the predictions of quantum theory.

      But as I noted in the post, all an Everettian has to do is accept the Born rule (squaring the magnitude of a state’s amplitude in the Schrödinger equation gives the probability of that state being the measurement outcome). Doing so puts it on no worse ground than any other interpretation.

      On the other hand, Everett at least offers the possibility of explaining the Born rule. Those explanations are controversial, but at least they’re there, as opposed to the brute postulate in Copenhagen.

      Probability, like so many concepts, turns out to be a much more difficult subject than it seems. Many-worlds kind of shines a light on those difficulties, but they were there before the theory came along.
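      For concreteness, the Born rule mentioned above amounts to a few lines of arithmetic. This is a toy sketch with a made-up two-outcome state (the amplitudes are an assumption for illustration, not anything from the video):

```python
import math

# A hypothetical two-outcome state: an equal superposition, with a
# relative phase (the factor of i) on the second amplitude.
amplitudes = [complex(1 / math.sqrt(2), 0), complex(0, 1 / math.sqrt(2))]

# Born rule: the probability of each outcome is the squared magnitude
# of its amplitude. The phase drops out.
probs = [abs(a) ** 2 for a in amplitudes]

print(probs)       # both outcomes equally likely (0.5 each)
print(sum(probs))  # probabilities sum to 1 for a normalized state
```

      Every interpretation uses this same rule; they differ on whether it is a brute postulate or something derivable.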

      Liked by 1 person

      1. I think you’ve got an interesting post here Mike though I’ve got to question your logic in a couple of ways. Apparently you’re saying that like me you don’t understand the problem here, though apparently experts and such do. So here you figure that there must be a real problem and we’re simply ignorant in this regard. Fine, except maybe it actually is all bullshit? Maybe experts are trusting experts rather than admitting even to themselves that they don’t understand? Thus they contribute to myth building. Who among these experts would have the courage to laugh at such nonsense at the expense of their peers?

        Well actually, I suppose I’d trust Sabine Hossenfelder on the matter. So if she says it’s not bullshit then I could go along. And I bet she’d be able to explain the matter here in a way that we could understand. Furthermore I bet the explanation wouldn’t violate my above objections.

        Then secondly, how can you intelligently discuss a problem that you don’t understand? Should it be bullshit, wouldn’t you then be complicit in promoting that bullshit? And of course I don’t mean to put this to you like you’d be committing some sort of crime. This is a good post and I certainly like to be informed about what people are talking about on the subject. But this tendency might also demonstrate a means by which so much bullshit accumulates in the softest areas of science.

        Liked by 1 person

        1. There are serious people who take the probability problem seriously, including many Everettians. (Although there are also serious people who, like me, don’t see the issue.) So although I can’t see it, I’m not inclined to just dismiss it. And laughing at it would require a level of confidence about it I certainly don’t have.

          In general, when I don’t understand something, it can be because it’s not a valid problem, or it can be because I just don’t understand it. In general, I’m slow to reach the first conclusion, since it cuts off too many opportunities to learn. Only when I reach the point where I understand why people think it’s a problem, and can see the problems with their reasons, am I inclined to dismiss it.

          I can’t say I share your unqualified faith in Hossenfelder. (I actually don’t have unqualified faith in anyone.) I find too many of her explanations problematic, even incoherent. From what I’ve seen, a lot of her fans seem to accept those statements because she flatters their prejudices. She does make good points from time to time, but at other times I find her a source of pointless contention and confusion, particularly when she goes on one of her rants. I think you should demand clear exposition from her in the same manner you would from someone whose views you don’t agree with.

          On discussing problems we don’t understand, one of the best ways to learn about a concept is to try to explain it to others. But that does mean writing about it even though our understanding may be hazy. I don’t think it’s an issue as long as we’re honest where we are.

          Liked by 1 person

          1. It’s not exactly that I have unqualified faith in her, but rather that I don’t recall her disappointing me much so far. So empirically I think she looks pretty good. It could be that there are some things that you’ve observed that I’d find objectionable, since I don’t look at her stuff much. But then maybe I’d more agree with her? I’m all for testing our confirmation biases though. Empirically backed explanations should settle things eventually.

            I agree that trying to explain something that you don’t understand can help you figure it out. So I guess in the end we’ll wonder if you ever do figure anything out here?

            Though I do disagree with many worlds, I’m not going to do so for a reason that I don’t grasp. As I recall Sabine objects given the unfalsifiable component of it. I doubt she’d also say, “Oh yeah, I also don’t believe it because in that case probabilities would be ruined”. I think that would trouble me, or at least pending an effective explanation. Of course Sabine, like me, doesn’t believe in any ontological probabilities. We’re determinists. Then as for the epistemic kind, in physics they don’t matter.

            Liked by 1 person

          2. She’s addressed many-worlds a couple of times (that I know of). I can’t say I understand the issue she has with it. The first one was during a time when I was still trying to understand the interpretation, and I watched the video and studied her post very closely. But I was never able to figure out what she was talking about. She addressed it more recently, largely saying the same thing, and I still don’t get it.

            I did appreciate that she clarified that energy isn’t an issue. Given all the arguments I’ve had with people over the years about that, it was good to see her acknowledge it. Although that’s so basic I’d expect it of any physicist, even one who’s skeptical of the interpretation.

            Similarly, her reasons for supporting superdeterminism remain obscure to me. She insists she’s not depending on retrocausality, but what she describes seems like it in all but name. But at least what she’s talking about is an empirical investigation, one that would stress quantum theory overall, which I think is always a good thing.

            Like

  6. Re “A quick reminder. The central mystery of quantum mechanics is that quantum particles move like waves of possible outcomes that interfere with each other, until a measurement happens, when they appear to collapse to one localized outcome, the famous wave-particle duality.”

    Uh, I think you have this wrong. The Schrodinger equation is an attempt to describe the energy of a quantum particle and it does indeed take the form of a wave. When the equation was developed it had a competitor, matrix mechanics, in which matrices were used to describe those energies. Matrices didn’t represent waves. But both approaches proved to be equivalent, so matrix mechanics was dumped.

    So the wave equation represents what, exactly? If you square it, you get probabilities, so the wave equation represents the square roots of probabilities, and those are what in reality?

    People seem to be confusing the description of something with the thing itself. Wave-particle duality refers to subatomic particles exhibiting properties we associate with particles under one set of circumstances and properties we associate with wave-like phenomena under others. It is not a reference to the “collapse of wave functions.”

    When a wave function collapses it is our description which collapses, not necessarily the thing itself. That our description is indeterminate and then determinate says exactly what about the particle in question?

    We seem to be confusing descriptions with reality. This has gotten so extreme that people in the woo woo community believe that human consciousness is required to collapse a wave function, when that is pure idiocy. An observation collapses a waveform and to make an observation requires an interaction, and it is the interaction that seems to be the physical cause, not the observation.

    Like

    1. I think my description was accurate. You’re asserting a particular interpretation of that description, the anti-real one, as the only obviously right answer, and seem to be objecting that I didn’t include it. I didn’t because it’s just one of the interpretations out there.

      And as always, I’ll point out my issue with it. If the wave function doesn’t describe reality at some level of abstraction, then where do the interference effects come from? And how can quantum computing work utilizing those interference effects? That’s empirical evidence which remains inconvenient for the anti-real stance.

      Myself, I’m a structural realist. I think the structure of the wave function is real, but agnostic on what the underlying reality might be. But even just structural realism leaves us with inconvenient implications. Sorry, but I don’t see us getting past QM with comfortable traditional notions of how reality works.

      I agree that the consciousness-causing-the-collapse isn’t a good answer. It’s an old view that predates decoherence theory and the data from quantum computing, but remains a cherished notion among idealists, mystics, and various new age types. But it’s far from the only realist answer, many-worlds being the one the video discusses.

      I’ve had periods where I thought instrumentalism was unavoidable, but was glad to find structural realism to get away from it. It’s too often been the refuge of people who don’t want to accept what science tells us about the world. Religious traditionalists in particular have often invoked it to ignore inconvenient findings.

      Like

  7. Let’s take this scenario.

    We measure something with a possible value of +1 or -1.

    Our measurement device reads +1 in our universe. According to theory, it would read -1 in a different universe.

    However, isn’t our measurement device itself now also subject to splitting, so that at some future split in our universe, we could check the device and it could read -1?

    If this is correct, then no measurements would be final and we should discover in our own universe and timeline measurements that flip back and forth between +1 and -1 as the device itself is split between different worlds. Wouldn’t this be consistent with the theory?

    Do we find that?

    On the other hand, if the measurement once done is final, then the splitting stops at the point the measurement is made. In that case, aren’t we back to the original problem that MW was supposed to resolve?

    Liked by 1 person

    1. Not sure if I follow your scenario. But world splitting under the theory happens when the effects of a quantum system are amplified into the environment, an amplification we usually call “measurement”, typically done by the measuring device. Another way to describe it is the quantum system in question becoming decohered and losing its quantum interference effects, while becoming entangled with the measuring device and the rest of the environment.

      Of course, the measuring device itself is composed of quantum particles, but there’s nothing really amplifying any of their unique effects, so the effects tend to cancel out. (“Tend” being the key word there, because there can, in principle, be natural measurement events; classical systems exist because those events aren’t usually a factor.)

      A quantum system can also evolve post-measurement, so if you measure it later, there can again be multiple possible outcomes. So you’d have further splitting from that. But each subsequent split happens within the branches from the previous splits.

      Does that get at your question? Or did I miss something?

      Like

      1. Let me try again.

        We have a measurement on a device of +1.

        If that device/measurement is part of a world that can split, then there will be some world with a timeline where the measurement after the split will read +1 then +1 , and another where it will read +1 then -1.

        If the measurement is final at +1, then there would no more splitting or all the splits would result in the same measurement. We would have the same problem MW is trying to solve.

        What’s more, every interaction of a wave with a wave or a particle is in effect a measurement. If measurements are final, then there would not be many worlds. There might be two: one where measurements have occurred and another where they haven’t. A classical world and a quantum world but not many worlds and MW is nonsense.

        What you are arguing is that, in effect, MW has nothing to do with the world we understand because everything is “entangled with the measuring device and the rest of the environment”. All of the stuff about dead and alive cats or people is complete nonsense and MW resolves nothing.

        Like

        1. Sorry, still not following the point you’re trying to make.

          Remember that if multiple outcomes are possible, there will always be splitting. Even if one is 99.999% and the other 0.001% or something.

          Even if only one outcome is possible, per O’Dowd’s point, our description of a state is always coarse grained. So what we’re calling one state might be a variety of closely related states that make only minute differences, but those differences would still result in splits, albeit with branches that are very similar. (Although it seems like those minute differences would butterfly out over time.)

          Anyway, if you think many-worlds doesn’t solve anything, I’d say you don’t understand it yet.

          Like

      2. Let me rephrase part of that for clarity:

        If that device/measurement is part of a world that can split, then there will be some world with a timeline where the measurement will read +1 before the split and +1 after the split, and another where it will read +1 before and -1 after.

        Like

        1. Oops, sorry, missed this one with my reply above.

          But I’m still not following the issue. If you’re saying the device reads +1 before the measurement, then either +1 or -1 afterward? Yes, there will be two branches, one with each sequence.

          Or if you’re saying that there are two measurements, then the first measurement would result in two branches, a +1 one and a -1 one. If a second measurement is done, then each branch further splits, so, assuming the quantum system in question can evolve between measurements, we end up with the following branches:
          +1|+1
          +1|-1
          -1|-1
          -1|+1

          Additional measurements would result in further branches.

          Again, unless I’m missing something.
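          The branch bookkeeping above is just a Cartesian product of outcomes per measurement. A toy sketch (the ±1 outcomes and two measurements mirror the scenario; the code is mine, not from any QM library):

```python
from itertools import product

# Each measurement of a (possibly re-evolved) two-outcome superposition
# doubles the branches: after n measurements there are 2**n histories.
outcomes = (+1, -1)
branches = [list(history) for history in product(outcomes, repeat=2)]

print(branches)       # [[1, 1], [1, -1], [-1, 1], [-1, -1]]
print(len(branches))  # 4
```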

          Liked by 1 person

          1. Let me try a different approach altogether since my other approach isn’t working.

            A measurement doesn’t happen without a device. We are not really measuring a property of a particle, but are measuring the interaction of the device with the particle. Hence, we end up with a discrete value which could change with a new interaction between the device and particle.

            If it were otherwise, we would not see values potentially change with new interactions between the device and the particle. The value once measured would always stay the same and we would need a split to explain where the other value went.

            Liked by 1 person

          2. Ok, I think I understand what you’re saying. But let me reword it to make sure.

            Let’s say we measure the spin of a particle on a particular axis, and then immediately measure it on the same axis so that it doesn’t have a chance to evolve. (I think there’s actually an experiment that does this, although maybe with polarization. For an example: https://www.youtube.com/watch?v=lZ3bPUKo5zc&list=PLUl4u3cNGP61-9PEhRognw5vryrSEVLPr )

            In that case, according to my understanding, we get branching on the first measurement, and that’s it. Branching requires measurement of a superposition, but if there isn’t any superposition, then you stay with the same branch.

            It’s worth noting that spin measurements on different axes are noncommutative, so being in a definite state on one axis is equivalent to being in a superposition on a perpendicular axis. So if we alternated between axes, there would be superpositions, and a branching each time.

            Hope that helps. (And I got it right.)
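            That last point can be checked with a little arithmetic. A toy sketch (real-valued spin-1/2 state vectors; the labels are mine) showing that the definite z-up state assigns 50/50 Born probabilities to the two x-axis outcomes:

```python
import math

# Z-basis "up" state: definite spin along the z axis.
up_z = [1.0, 0.0]

# X-basis eigenstates: equal superpositions of z-up and z-down.
plus_x = [1 / math.sqrt(2), 1 / math.sqrt(2)]
minus_x = [1 / math.sqrt(2), -1 / math.sqrt(2)]

def prob(outcome, state):
    # Born rule: |<outcome|state>|^2, for real-valued 2-vectors.
    amp = outcome[0] * state[0] + outcome[1] * state[1]
    return amp ** 2

# Definite on z is a 50/50 superposition on x, so an x measurement
# branches even immediately after a z measurement.
print(prob(plus_x, up_z), prob(minus_x, up_z))  # 0.5 each
```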

            Liked by 1 person

          3. It’s worth noting that many-worlds is just straight quantum theory without an ontic wave function collapse. I think it provides a solution to the measurement problem. Of course, so does any interpretation, but not with the same parsimony.

            Granted, parsimony is just a rule of thumb, but one that seems to work in most cases.

            Liked by 1 person

          4. My view might be closest to relational.

            “relational quantum mechanics argues that the notion of “state” describes not the observed system itself, but the relationship, or correlation, between the system and its observer”

            https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics

            “There is no true wave collapse, in the sense in which it occurs in some interpretations”

            https://en.wikipedia.org/wiki/Relational_quantum_mechanics

            No need for extravagant world splitting theories.

            Liked by 1 person

          5. I’m on board with states describing relations. (I’m a structural realist.) But I’m not on board with the states only existing during the interactions, as in not existing before, between, or after them. But without that stipulation, RQM seems to be just MW with blinders on.

            Like

          6. Or RQM is more parsimonious because it doesn’t need to make the additional assumptions that MW makes.

            In either theory, all of the states exist all the time.

            Like

          7. During a MW split we also obtain a discrete value at measurement. MW just goes on to assume the missing states are somewhere else. Apparently (?) MW must believe the discrete state has reality beyond measurement of it.

            Liked by 1 person

          8. If something is predicted by the structure of quantum theory, then in what sense is it an “assumption”? Erwin Schrödinger pointed out 20 years before Hugh Everett’s thesis that his equation predicted a cat that was both alive and dead at the same time. All Everett did was remove the assumption that it ended when the box was opened. Everett’s point being that Schrödinger would have exactly the same experience if it didn’t. Many-worlds is just raw quantum theory followed to the bitter end.

            On the quote from Wikipedia, a lot depends on what we mean by “collapse”. What distinguishes RQM from MW is whether values exist outside of interactions. That’s the thing about interpretations. There’s always a cost. Proponents of particular interpretations tend to downplay their favorite’s cost while emphasizing the ones of competing interpretations. My blocker for RQM is its particular cost. (And like all anti-real outlooks, that of ignoring the empirical reality of the interference effects.)

            Liked by 1 person

          9. My understanding:

            I think the values exist outside the interactions but they are in the quantum cloud. If we keep measuring we might find them all. Just the discrete value doesn’t exist outside the interaction.

            BTW, why does MW need a “world” split? Why not just split the wave? No extra baggage. Or maybe I’m not understanding that the “world” is merely a metaphor in MW. I’m not sure there would be an enormous difference between RQM and MW if we just make MW mean Many Waves.

            Liked by 1 person

          10. The word “world” is just a poetic, evocative way of describing observers and their surroundings going into an entangled superposition. Everett’s original name was “the universal wavefunction”. I think Wheeler pushed him to publish it under the “relative state formulation” name. It was Bryce DeWitt who coined the “many worlds” term many years later. “Many waves” would work too, although it’s not as evocative. You could also call it the “macroscopic superpositions” theory.

            But it’s all to get the idea across that the math doesn’t prescribe any boundary between tiny things and big ones, that the whole universe is quantum. RQM shares that premise, although with an anti-real stance (or at least an agnostic one) toward the waves.

            Liked by 1 person

  8. Maybe I’m dense, but I still haven’t seen anything to convince me that it’s not just math that is useful for predicting. This video just shows (me) that the math is consistent from more than one angle. Awesome. But I still haven’t seen anything that proves that “superpositions” are real and not just the math of not knowing yet, that the cat is neither alive nor dead, that there are not hidden variables. I accept that there can’t be hidden variables at the same time as locality. That just means electrons, etc., are non-local, and that’s fine. The problem I see is that people still think of these things as (sometimes?) particles. Electrons are not particles, ever. They only sometimes seem like particles because when they interact, they interact with one thing at a time, like particles. That doesn’t make them particles.

    As a side note, are you paying attention to Wolfram? He’s suggesting that quantum mechanics, general relativity, and the 2nd law each fall out of computation. Well, all possible computations, but still. I’m still trying to wrap my head around it. If not, here is sorta an easy introduction (almost an hour): https://www.youtube.com/watch?v=cPfbGA_hNVo.

    *

    Liked by 1 person

    1. What would it take to convince you that the math is describing reality at some level of abstraction? If being consistent and predictive from multiple angles and throughout complex processes (such as with quantum computing) doesn’t get you there, then what would?

      I’ve heard of Wolfram’s ideas, but haven’t really dove into them. Physicists don’t seem to be taking them too seriously. Doesn’t necessarily mean he’s wrong, but I don’t have the background to assess them myself, so I’m dependent on the physics community to do it. Until a substantial portion of that community starts adding some credence to them, it doesn’t seem like a good use of my time.

      Like

      1. The math is describing reality in the same way that it describes the reality of ocean waves and sound waves. It just doesn’t tell you about what’s going on underneath the waving. That’s epistemic structural realism, is it not?

        I’m fairly confident that it’s not only the case that we don’t know whether many worlds is correct, but that we cannot know. It’s the noumena/phenomena thing (physics doesn’t say what things are, only what they do). I guess it would take some experimental evidence that there are no hidden variables regardless of locality to shift my current thinking.

        As for Wolfram, there are only two main ideas I get from his work: 1. a system of very few rules can be deterministic but still unpredictable from within the system, and 2. the best an observer can do from within such a system is develop rules based on coarse graining, and some of these rules will be probabilistic.

        *

        Like

        1. I think I’ve drifted more into ontic structural realism. The problem with epistemic structural realism is that people in the ESR camp seem to take it as license to talk about intrinsic properties, as though we’d be in a position to say anything meaningful about them. I’ve concluded that the statement that intrinsic properties might exist but are utterly unknowable does no meaningful work. Indeed, such properties would have no causal interactions with anything we could know. By the causal criteria, that makes them effectively non-existent. Hence OSR. (I might change my mind tomorrow.)

          OSR doesn’t rule out the possibility that there are underlying properties. Just that if they are in any way meaningful for us, they must be extrinsic in some fashion.

          Anyway, structural realism is the view that the structures themselves are real. The problem is that if you accept that for the wave function, you can no longer say it’s all just a mathematical contrivance. It is real at some level of abstraction. Sure, it’s reasonable to be agnostic about what might be underneath. I am too. But that still means the structure on top exists and its implications have to be dealt with.

          It’s kind of like how understanding that thermodynamics emerges from particle kinetics doesn’t change thermodynamic realities. The laws of thermodynamics still have to be reckoned with.

          No scientific theory can ever be verified. They can only be falsified. I think we can falsify many-worlds by falsifying the mathematical structure of quantum theory. For example, if any evidence of an objective collapse were ever to be discovered, many-worlds would be done. Or to your point, if any hidden variables were ever discovered, depending on their nature, many-worlds might be falsified. On the other hand, the more the core formalism is experimentally tested and not falsified, the more it seems like it’s the whole story. But there will never be an experiment that forever rules out some possible future addition.

          I will grant that we may never be able to adjudicate between relational quantum mechanics and many-worlds, because whether reality exists when we’re not interacting with it seems hopelessly unknowable. But we can still narrow things down to those two options. And then each of us has to decide which one seems more coherent.

          Those ideas you’re getting from Wolfram sound pretty self evident to me. Based on the murmurs I’ve heard, there must be more to it.

          Like

          1. Hmm. My understanding of ESR/OSR may be … insufficient. To me, the “thing” must have intrinsic properties to produce behavior. We just don’t know what those properties are. All we get is behavior of, say, an atom. That is, until we can identify a subset of that behavior, at which point we have a new “thing”, such as a neutron. So my statement is that intrinsic properties *must* exist, but those of the thing at the bottom are unknown. If they become known, then there is a new bottom. (Is it tomorrow yet? 🙂 )

            Structures are real, as are patterns and all abstractions. But some structures, like the wave equation, can be useful even when they don’t match the reality exactly. Unless I’m mistaken, the QM wave equation gives a non-zero probability that an interaction will happen (particle will be measured) essentially infinitely far away. Does that mean we must assume the universe is infinitely large?

            Wolfram’s ideas may be self-evident to us (who have the advantage of living when we do) but I take solace in the fact that someone can derive/prove it. Also, the “more to it” is the deriving of relativity, quantum mechanics, and the 2nd law from that basis, the understanding of which will be forever beyond me. My specific interest in the matter is trying to show if/how the 2nd law not only allows but drives the formation of life and intelligence. Was kinda hoping Wolfram would do the heavy lifting. We’ll see.

          2. So if an internal property is necessary to produce behavior, doesn’t that imply there’s a relationship, a structure, between the property and that behavior? If so, then in what sense is it intrinsic? (It doesn’t help that the word “intrinsic” is ambiguous, having different meanings in different contexts.) I think there’s a difference between properties we just don’t know about yet, and the idea of “things in and of themselves” that philosophers talk about. In truth, I’m not sure if the philosopher concept is all that coherent.

            The same relationship you’re describing for the wave function also applies to gravitational fields in general relativity; they extend to infinity, never quite reaching zero at any finite distance. All theories have domains of applicability. But just as we don’t throw out Newtonian mechanics because we know the limits of its applicability, we don’t dismiss the reality of quantum or gravitational fields. In any case, the universe might actually be infinite (which has its own many-worlds type implications).

            I do at some point need to skim Wolfram’s ideas. They might be useful in story settings.

          3. Pardon my interjection, but if you’re interested in discussing how thermodynamics drives the formation of life and intelligence, you might want to check out Tim Kueper’s site, “The Motive Power of Fire.”

            Also, could you be otherwise known to me as “Jim of Seattle”?

          4. Nothing to pardon, and thanks for that pointer. Will definitely look it up (google?)

            Not sure about your “Jim of Seattle” question. Is that an inquiry or a request? FWIW, I don’t think I’ve ever used “Jim of Seattle”. I used to go by Jim, but started using James when I moved to Seattle.

          5. Google will find it, but here’s a link: https://motivepowerfire.wordpress.com/

            Jim of Seattle was, and perhaps sometimes still is, a particularly able contributor at Song Fight (songfight.org). It turns out his real name was “Jim Owen,” and mine is “Jim Owens,” which caused some confusion there, as I was also an occasional contributor. It’s weird that you’re not him. What are the odds?

          6. My name is Jim Smith, so the shift to James was an attempt to change those odds. Apparently necessary even with the epithet.

            And I’ve already commented on the most recent post at motivepowerfire. Will be looking at the others soon.

  9. I’m too stupid to understand this.
    I will say that it feels like Plato’s Cave and we’re inside describing our shadow world as if we know everything and can form theories as to how it all works.
    Maybe an ASI will figure it out.
    Me? I’ll be dead before long and my superposition will finally collapse. Heads? Tails? No one will care.
