The spectrum from science to fantasy

A question long argued in the philosophy of science is the demarcation problem.  How do we distinguish science from non-science?  Karl Popper famously proposed falsifiability as a criterion.  To be science, a theory must make predictions that could turn out to be wrong.  It must be falsifiable.  Theories that are amorphous or flexible enough to never encounter this test aren’t scientific.  This standard was famously sharp enough to cull Marxism and Freudian psychoanalysis from science.

Falsifiability has a lot going for it, but it also has a lot of issues.  For one, when we say “falsifiable”, do we mean falsifiable in practice today?  If so, then a lot of exploratory work done by scientists is non-science.  This would include Copernicus’ work on heliocentrism, Albert Einstein’s work on relativity, and Peter Higgs and colleagues’ work on the Higgs mechanism.  None of these theories were testable while these scientists were working on them.  In the case of Copernicus and Higgs, it was several decades before they became testable, that is, falsifiable.

Reportedly, Popper was actually more careful than this.  His proposed standard was falsifiability in principle.  So to be scientific, a theory must be testable in some foreseeable manner.

But even this can be problematic.  Auguste Comte infamously predicted in 1835 that we would never know the composition of the stars.  Speculation seemed pointless.  But within a few decades, stellar spectroscopy was developed and we did actually start to learn about stellar composition.  Likewise, when Einstein, Podolsky, and Rosen published their paper in 1935 on the EPR paradox, it was criticized by many as metaphysical navel-gazing, until John Stewart Bell figured out a way to test it 29 years later.

On top of this, as Sabine Hossenfelder recently pointed out, new theories can be about existing data.  If the new theory explains the existing data better than an established theory, that is with fewer assumptions and perhaps simpler constructs, then it may replace that older theory, without ever producing its own unique falsifiable predictions.

Even more problematic, many successful scientific theories, while having reliably testable predictions, also have predictions that can’t currently be tested.  For several decades, general relativity predicted gravitational waves, but they were only actually detected a few years ago. From what I understand, aspects of the role of pressure in general relativity remain untested.

And every scientific theory is essentially a metaphysical statement: a conclusion, or series of conclusions, reached inductively.  Anyone who has studied epistemology is familiar with the problem of induction, most famously analogized by black swans.

All of which means that the dividing line isn’t sharp, but long and blurry and requires a lot of judgment.  I tend to think, rather than a sharp demarcation, it’s better to think in terms of a spectrum.

  1. Reliable models
  2. Rigorous exploration
  3. Loose speculation
  4. Falsified notions

1-Reliable models are the ones most clearly science, and represent the most successful theories, such as general relativity, quantum mechanics, natural selection, etc.  Often the predictions of these theories are reliable enough for technology to be built using them.

I considered calling 1 “settled science”, but that implies that successful theories are never overturned.  Most famously, Newton’s laws of gravity reigned for centuries, before Einstein overturned them with general relativity.  However, Newton’s laws remain reliable enough that NASA mission planners use them for most of their calculations.  Newton’s laws are no longer the most reliable model, but they remain very reliable for many purposes.  Which is to say, very successful theories, at least the mathematical components, are unlikely to ever be completely dismissed.

2-Rigorous exploration is disciplined theoretical speculation.  As noted above, scientists have to have space to work in this realm, since many theories now in 1 began here.  But what distinguishes rigorous exploration from the next category is that these theories are either extrapolations from theories in 1, or tight speculation involving one or a few assumptions, assumptions narrowly motivated to fit the data.

3-Loose speculation is where I think there start to be legitimate concerns about whether what’s happening is scientific.  In this category, there may be numerous assumptions, with each assumption an opportunity to be wrong.  Or the assumptions may be motivated by a desire for a certain outcome, not to explain the data, but perhaps to meet personal biases and intuitions.

I gave examples of 2 above.  For 3, based on what I’ve read, string theory arguably belongs in this category.  I think some other speculative notions, such as many exotic theories of consciousness, belong here too.

Many people would relegate all multiverse theories here, but I think they have to be looked at on a case by case basis, since some are either extrapolations of successful theories, or have minimal assumptions, and a strong case can be made for them being in 2.  (None are currently in 1.)

But I would include Tegmark’s mathematical universe hypothesis in 3, along with a lot of other philosophical metaphysical speculation.  This is often stuff that, strictly speaking, isn’t impossible, but has assumptions not motivated by the data and is the hardest to imagine ever being testable.

4-Falsified notions, the last category, is, simply put, fantasy.  Generally for this stuff to be reality would require that one or more theories in 1 be wrong.  Astrology, paranormal claims, creationism, intelligent design, and a lot of similar notions go here.  If it’s presented as science then it’s fake science, pseudoscience.

Only 1 represents a reliable view of reality.  But as noted above, this is science, and nothing is immune from possibly being overturned on new data.

2 represents what I often refer to as candidates for reality.  Many will be right, others wrong, but we can’t currently know which is which.

3 might, in principle, turn out to be reality, but the probability is very low, low enough that the skeptic in me tends to just assume they’re wrong.

And 4 is the province of honest entertainers or dishonest charlatans.

It’s worth noting that even putting theories into these categories takes judgment, and many might sit on the boundaries.

But I think the main takeaway is that just because something isn’t in 1, doesn’t mean the only other option is 4.  It’s not just reliable science or fantasy.  There’s a space for exploratory science at least.  I’m actually pretty sure science as an overall enterprise wouldn’t work without that exploratory space.

Unless of course I’m missing something?  Am I being too permissive with these categories?  Not permissive enough?  Or just missing the ball entirely?

The measurement problem, Copenhagen, pilot-wave, and many worlds

With quantum physics, we have a situation where a quantum object, such as a photon, electron, atom or similar scale entity, acts like a wave, spreading out in a superposition, until we look at it (by measuring it in some manner), then it behaves like a particle.  This is known as the measurement problem.

Now, some people try to get epistemic about this.  Maybe the wave isn’t real, but just represents epistemic probabilities.  The issue, shown in the double-slit experiment, is that the wave interferes with itself, something those who want to relegate the wave to completely non-real status have to contend with.

An important point is that if the wave is very spread out, say light years, and any part of it is measured, the whole thing collapses to a particle, apparently faster than light.  This appears to violate relativity (and hence causality), which was Albert Einstein’s chief beef with quantum physics, and the impetus behind the concept of entanglement explored in the EPR paradox.

Now, we have an equation, the Schrödinger equation, that models the evolution of the wave.  Its accuracy has been established in innumerable experiments.  But when we actually look at the wave, that is, attempt to take a measurement, we find a particle, which subsequently behaves like a particle.  The math appears to stop working, except as a probabilistic prediction of where we’ll find the particle.  This is often called the wave function collapse.
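For concreteness, here are the two rules in play, written in standard textbook form (nothing here goes beyond the usual formalism): the Schrödinger equation that governs the wave between measurements, and the Born rule that turns the wave into probabilities when we look.

```latex
% Schrödinger equation: deterministic, linear evolution of the wave function
i\hbar \,\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t)

% Born rule: probability density of finding the particle at x when measured
P(x,t) = |\Psi(x,t)|^2
```

The tension is that the first rule is smooth and deterministic, while the second only kicks in at a “measurement”, an event the formalism itself never defines.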

The Copenhagen interpretation handles this by saying that quantum physics only applies to small isolated systems.  As soon as something macroscopic is involved, such as a measuring device, the rules change.  Kept to a minimal instrumental version, I think this interpretation is underrated.  Bare bones Copenhagen doesn’t attempt to explain reality, only describe our interactions with it.  It could be seen as an admission that the metaphors of our normal scale existence are simply inadequate for the quantum realm.

Of course, people can’t resist going further.  Copenhagen is actually more a family of interpretations, some of which involve speculation about consciousness causing the collapse.  Reality doesn’t congeal until we actually look at it.  I think the challenges of quantum computing rule this out, where engineers have to go to extreme efforts to preserve the wave to get the benefits of that type of computation.  They’d probably be very happy if all they had to do was prevent any conscious mind from knowing the state of the system.  But it’s an idea many people delight in, so it persists.

The pilot-wave interpretation, often referred to as de Broglie-Bohm theory, posits that there is both a particle and a wave the entire time.  The wave guides the particle.  When we look / measure, the wave becomes entangled with the environment, loses its coherence, and the particle is then free to behave like a particle.  This idea actually predates Copenhagen, although it wasn’t refined until the 1950s.

Pilot-wave initially looks promising.  We preserve determinism.  But we don’t preserve locality.  Looking at the wave, anywhere in its extent, still causes the whole thing to decohere and free up the particle, even if the particle is light years away.  So, Einstein wasn’t happy with this solution, since relativity appears to still be threatened.

Hugh Everett III looked at the above situation and asked, what if the math doesn’t in fact stop working when we look?  Our observations seem to indicate that it does.  But that’s failing to account for the fact that macroscopic systems, including us, are collections of quantum objects.

As it turns out, the Schrödinger equation does predict what will happen.  The wave will become entangled with the waves of the quantum objects comprising the measuring device.  It will become entangled with the environment, just as pilot-wave predicted, but unlike pilot-wave, Everett dispenses with the particle.

Crucially, rather than collapsing, the superposition of the wave will spread, just as it seems to do before we look.  Why does it appear to collapse?  Because it has spread to us.  We have gone into superposition.  Every branch of that superposition will now continue to spread out into the universe.  But the branches are all decohered from each other, each no longer able to interfere with the others.  They are essentially causally isolated.
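A schematic way to write what Everett is claiming, using the standard notation for an idealized spin measurement (the labels are purely illustrative, not anything from Everett’s paper): the device and the observer simply join the superposition rather than ending it.

```latex
% Before: system in superposition, device and observer ready
\big(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\big)
  \otimes |\text{ready}\rangle \otimes |\text{observer}\rangle

% After (Schrödinger evolution only, no collapse): two decohered branches
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle|\text{reads up}\rangle|\text{sees up}\rangle
\;+\;
\beta\,|{\downarrow}\rangle|\text{reads down}\rangle|\text{sees down}\rangle
```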

So each of those branches could be romantically described as being in its own separate “world”, resulting in many worlds, the many worlds interpretation.

The appearance of the collapse, under the many worlds interpretation, is because we are now on one branch of the wave function, observing the small fragment of the original wave that became entangled with this branch of the environment.  Under this interpretation, there is a different version of us in each of the other branches, seeing different parts of the wave, which we now refer to as a “particle”.

Which of these interpretations is true?  Copenhagen, pilot-wave, many worlds, or some other interpretation?  They all make the same observable predictions.  (The ones that don’t were discarded long ago.)  It’s the predictions they make beyond our ability to observe that distinguish them from each other.

We could ask which has the fewest assumptions.  Most people (often grudgingly) will admit that many worlds has the most elegant math.  (Evoking comparisons with Copernicus’ heliocentric model in relation to Ptolemy’s ancient geocentric one.)  And it does preserve realism, locality and determinism, just not one unique reality.  Whether that amounts to fewer assumptions than the others is a matter of intense debate.

Each interpretation has a cost, often downplayed by the proponents of that interpretation, but they’re always there.  Quantum physics forces us to give up something: realism, locality, determinism, one unique reality, or some other cherished notion.  As things stand right now, you can choose the interpretation that least threatens your intuitions, but you can’t pretend there isn’t a cost.

Unless of course I’m missing something.

Sean Carroll’s Something Deeply Hidden

I’m just about finished reading Sean Carroll’s Something Deeply Hidden.  I was going to wait to post this until I’d completely finished, but all I’ve got left is the appendix, I think I’ve gotten through the main points, and discussion on the previous post is veering in this direction.

As widely reported, Carroll is an advocate for the Everettian interpretation of quantum mechanics, generally known as the Many Worlds Interpretation (MWI).  I gave a primer on this back in December.  Nothing in Carroll’s book invalidated that description, so if you need the basics, check it out.

Carroll’s broad point is that the MWI, in terms of its mathematical postulates, is the most austere interpretation.  Its central premise is that we should ask what happens if quantum systems evolve solely based on the Schrödinger equation.  Doing so leads to a deterministic theory that explains our observations and preserves realism and locality, which makes it broadly compatible with special and general relativity.

The basic idea is that the wave function never collapses, it just becomes entangled with the waves of other quantum systems.  We see this in experiments, where particles that interact don’t experience collapse, but merely become entangled.  And physicists have been able to isolate ever larger molecules and keep them in states of quantum superposition.

The Copenhagen interpretation posits that eventually interaction with macroscopic objects causes the wave function to collapse.  But what exactly is a macroscopic object?  At what point between large molecules and measuring devices do we cross the boundary that causes wave function collapse?

The Everettian view is, never.  The superposition of the initial quantum system never collapses, it continues spreading.  We perceive it to have collapsed, but that’s only because it’s spread to us, and we’ve become the version of us looking at one particular outcome, with other versions in other branches of the wave function looking at the other outcomes.

One thing I was hoping to get from Carroll was a description of how the MWI avoids the issues with Bell’s theorem.  That theorem shows that if the outcomes for entangled quantum particles are fixed by local properties set when they become entangled, the statistics of certain joint measurements will be constrained in a way that they won’t be if the values aren’t set until they’re measured.

Numerous experiments have verified that the statistics match the values not being set until the measurement.  This is an issue for Copenhagen and other interpretations because under them, in order for the entangled relationships to hold, the measurement result has to be communicated to the other particle in some sort of faster than light manner, violating locality: Einstein’s spooky action at a distance.  (But as I noted in the previous post, not in any way that’s actually useful.)
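To make “constrained statistics” a bit more concrete, here’s a small sketch of the standard CHSH version of Bell’s theorem (textbook numbers, not anything taken from Carroll’s book).  It compares the quantum prediction for spin-singlet correlations with the bound any model with locally pre-set values must respect.

```python
import numpy as np

def singlet_correlation(a, b):
    # Quantum prediction for measurements along angles a and b on a spin-singlet pair
    return -np.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the quantum value
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = abs(singlet_correlation(a, b) - singlet_correlation(a, b_prime)
        + singlet_correlation(a_prime, b) + singlet_correlation(a_prime, b_prime))

print(f"Quantum CHSH value: {S:.3f}")       # ~2.828, i.e. 2*sqrt(2)
print("Local pre-set-values bound: 2.000")  # Bell/CHSH inequality: S <= 2
```

The experiments come out at the quantum value, not the local bound.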

The main thing Carroll says about this is what I’ve seen from other sources.  Bell’s theorem assumes a definite outcome to the measurement.  But since in MWI there are no definite outcomes, every outcome is realized, the theorem doesn’t apply.

(Maybe under MWI, the statistics aren’t constrained in any one branch of the wave function because the outcomes are spread over all the branches, but that’s my speculation.)

Another question I hoped to see an answer to is how much branching a quantum interaction causes and at what resolution?  For a binary result, like the direction of spin, the answer should be two.  But the location of a particle is spread out in a wave, and elementary particles are points in space.  How many branches does that wave result in?  Carroll admits that we don’t know.  We just don’t know how granular the universe is.  He notes that Everett himself considered it plausible that the number might be infinite, which I can’t see as a strength of the theory.

Another question is whether the branching happens all at once, an entire universe instantly created, or gradually from the interaction outward.  My primer post and the description above inherently assume that the answer is gradual.  But Carroll states that it can be accounted for both ways.  It’s hard for me to see how creating an entire universe billions of light years wide, all at once, preserves locality, so I’m not sure how this can be true, except perhaps in an instrumental manner.  But an appeal to instrumentalism here seems wrong, since the whole point of the MWI is to find the reality behind the observations.

Carroll also briefly considers the question a lot of people wonder about, if reality is branching like this, where are all these branches located?  Carroll’s reply is that the branches aren’t “located” anywhere.  They’re all right here, just not in a way that they can interact with each other.  While I think I understand this point, it’s undoubtedly one that a lot of people struggle with, and I don’t think Carroll addresses it sufficiently.

Another question is the conservation of energy.  Where does the energy for all these different branches come from?  Carroll’s answer is that the original energy of the universe is constantly being spread out among the branches, albeit not evenly, but according to the amplitude of each outcome in the wave function.

But here is where the infinite branching point above becomes more problematic.  If the energy is being diluted throughout the branches, and there are an infinite number of those branches, does that mean the energy started out as infinite?  These two answers don’t seem to fit together.

One point Carroll does make clear, our personal decisions do not cause branching.  The branching is caused by quantum events.  Of course, in some of those branches, we might make different decisions, so the effect might be that there are universes with us making varying decisions, but it isn’t guaranteed.  Indeed, I tend to think that most branches will look identical at the macroscopic level, with the differences only being at the microscopic quantum level.

Carroll also discusses what you need to do with Everettian physics if you don’t want the other branches, the many worlds.  You have to add something to the formalism to accomplish this.  He looks at the de Broglie-Bohm pilot-wave theory, which although older than the MWI, could be seen as Everettian physics plus a particle that reifies one of the branches as the real one.  He also looks at GRW, an interesting premise that maybe quantum systems simply spontaneously and randomly collapse, but since they do so very rarely, we usually only see the collapse in association with large macroscopic systems.

The main thing to understand is that each of these additions comes with a cost.  With pilot-wave, it’s explicit non-locality.  With GRW, it’s a largely ad hoc premise whose only purpose is to ban the other worlds.  He discusses other alternatives with similar issues.

Carroll doesn’t address it, but a common move in the physics community is to accept Everettian physics itself, but simply say that the other worlds aren’t there.   This is known as the unreal version of the interpretation.  My issue with this move, aside from the absence of any explanation for why only one of the branches is the real one, is that it reminds me of the Tychonic System.

In the decades between when Copernicus proposed the sun-centered model of the solar system and Galileo was able to produce empirical observations that supported it, Tycho Brahe proposed a compromise model, where the other planets orbited the sun, but the sun (and moon) continued to orbit the Earth.  It was an attempt to get the benefits of the elegant mathematics of the Copernican model, while preserving the “philosophical benefits” of the Earth-centered Ptolemaic system.  Today it’s obvious that this was a misguided attempt to save appearances, but it wasn’t obvious in the late 16th century.

Along similar lines, Chad Orzel recommends that we not consider the other worlds as real, but merely as metaphors, accounting devices.  This is also reminiscent of one of the most common moves during the 16th century, to say that Copernicus’ crazy claim that the Earth moves shouldn’t be taken literally.  It should be regarded merely as a mathematical convenience.  Max Planck made a similar move when he first introduced quanta into his calculations.   The claim is, not to worry, it’s not like this crazy thing is real; it’s just an accounting gimmick.

I don’t know whether the MWI is reality or not.  As I’ve noted many times, I think it’s a candidate for reality.  But if you find the mathematics of the Everettian view elegant, and want the benefit of that elegance, then I think you should either accept the consequences (many worlds) or find a good reason to reject those consequences.  (A theory of quantum gravity might eventually provide a reason, but that’s speculation.)

Toward the end of the book Carroll gets into stuff involving the possible emergence of spacetime from quantum mechanics, which I found difficult to follow.  He does point out it’s difficult to work with quantum mechanics in terms of cosmology without, at least implicitly, working under the MWI.

Finally, in the epilogue, he reveals that the title comes from something Einstein wrote describing his wonder as a child that something about the workings of a compass implied “something deeply hidden.”

All in all, I found the book a good discussion of these topics.  That said, Carroll isn’t really striving for even-handedness here.  He’s a partisan for a particular view, and it shows.

Why you can’t use quantum entanglement for faster than light communication

Albert Einstein, with his theory of special relativity, established that the speed of light is the absolute speed limit of the universe.  A rocket ship attempting to accelerate to the speed of light encounters some well known effects: time dilation, mass increase, and length contraction.  The closer to the speed of light it gets, the higher its mass climbs, the slower its passage of time, and the shorter its length.  To actually reach the speed of light, it would need to acquire infinite mass, zero passage of time, and zero length, which would require infinite energy.  (Translation: you can’t do it.)
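As a quick numeric illustration of how sharply those effects ramp up, here’s a sketch using the standard Lorentz factor (just textbook special relativity; the velocities are arbitrary examples):

```python
import math

def lorentz_gamma(v_over_c):
    # Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2).
    # Time dilation, length contraction, and relativistic energy all scale with gamma.
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# Gamma grows slowly at first, then diverges as v approaches c,
# which is the "infinite energy" problem described above.
for v in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"v = {v}c  ->  gamma = {lorentz_gamma(v):.1f}")
```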

Things are marginally more hopeful for photons, which have no mass.  They always travel at the speed of light.  As the quantum of electromagnetic radiation, they enable communication at the speed of light.  But that’s as fast as they enable it.

Often when these facts come up in discussions, someone raises the possibility of using quantum entanglement for communication.  Entanglement, we are told, is a non-local effect.  Doesn’t this, as a fair amount of science fiction implies, mean that there might be some effect we could use in the future for faster than light communication?

Unfortunately, the answer is no.  This doesn’t come from a pessimistic view of the possibilities, but from an understanding of what entanglement actually is, an understanding I have to admit I’ve only recently fully come to appreciate.  I first got it when reading Adam Becker’s What Is Real?, but wanted to wait to discuss it until I’d gotten some confirmation from Sean Carroll’s Something Deeply Hidden, which I’m currently reading.

That understanding is that entanglement is inherently about information.  When two quantum objects interact, they become entangled with each other, meaning that they’re described by an overall common wave function.  But that description, for most people, isn’t very enlightening.   So let’s do an analogy.

Imagine that Alice and Bob, living far away from each other, each have a subscription to the New York Times, and each of them knows about the other’s subscription.  Let’s further suppose they both have very reliable and timely delivery of their paper.  When Alice gets a particular issue of the Times and looks at it, she knows that Bob is getting the same issue with the same information.  You could say that each copy of the same issue of the Times is entangled with every other copy, including Alice’s and Bob’s, which is to say, they share a causal history that enables information about one to provide information about the other.

So far this isn’t any big deal.  Alice and Bob each know what the other is seeing, but can’t use that information in any way to communicate with each other.  If Bob alters his copy of the Times, it doesn’t affect Alice’s.  All it really does is break the entanglement between them, that is, erase his ability to use his paper to know what’s in Alice’s copy.  (Technically, since information is always conserved, it spreads the entanglement around, but let’s not get sidetracked.)

So what’s the big deal with entanglement?  Well, let’s say that a very special issue of the Times comes out, a quantum version of the paper, one that is in a superposition of possible states until a reader actually looks at it.  One branch of the superposition says the stock market went up yesterday, the other says it crashed.  Under standard interpretations of quantum mechanics, it is meaningless to talk about what the paper actually says until someone looks at it.

But, as soon as Bob or Alice actually look at their paper, the wave function of the quantum copy collapses into a definite value.  When Alice looks at her copy, she knows what Bob will see, even though Bob hasn’t looked at his yet.  This is true even if Alice and Bob are separated by light years.  In other words, what the paper says isn’t a definite value, until either Bob or Alice (or some other subscriber) looks at theirs, but as soon as either does, the other’s copy instantly becomes definite too, with the same values.  But if both copies were in an undefined state prior to their collapses, how do those copies “know” which one to collapse to so they agree with the other?

This is the aspect of quantum theory that bothered Einstein enough to co-author a paper with Boris Podolsky and Nathan Rosen in 1935, the famous EPR paradox paper.  In their view, it indicated that quantum theory could not be complete.  Einstein famously called it “spooky action at a distance”.  Bell’s theorem would eventually prove him and his co-authors wrong, at least if everything is happening in one consistent universe.

But just like our classical edition of the paper, there’s nothing Alice or Bob can do to their quantum copies that would allow them to communicate.  Again, if Bob alters his copy, all he does is break the entanglement (technically spread it around).

Bringing this back home to particles, there’s nothing you can do with one particle of an entangled pair that will control the state of the other particle.  (Other than bringing them back together and having them interact again.)  Yes, the act of measuring the first particle causes the other to assume a definite value, but there’s no way either party can know ahead of time what those values will be.  And attempting to control them alters the particle’s state, breaking (spreading) the entanglement.
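Here’s a toy simulation of that last point.  It simply reproduces the standard quantum statistics for a spin-singlet pair (the function and the angles are mine, purely for illustration); the thing to notice is that nothing Bob does with his measurement setting changes the statistics Alice sees on her side.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_trials(alice_angle, bob_angle, n=200_000):
    # Spin-singlet statistics: Alice's outcome is 50/50 no matter what,
    # and Bob's outcome is opposite to Alice's with probability cos^2(theta/2),
    # where theta is the angle between their measurement directions.
    theta = alice_angle - bob_angle
    alice = rng.choice([+1, -1], size=n)
    opposite = rng.random(n) < np.cos(theta / 2) ** 2
    bob = np.where(opposite, -alice, alice)
    return alice, bob

# Bob tries to "send a bit" by switching his angle; Alice's local statistics don't budge.
for bob_angle in (0.0, np.pi / 2):
    alice, bob = run_trials(alice_angle=0.0, bob_angle=bob_angle)
    print(f"Bob's angle = {bob_angle:.2f} rad: "
          f"Alice sees +1 a fraction {np.mean(alice == +1):.3f} of the time, "
          f"joint correlation = {np.mean(alice * bob):+.3f}")
```

Bob’s choice does change the joint correlation, but that can only be seen later, by comparing records through ordinary slower-than-light channels.  Alice’s local 50/50 statistics never move, which is why no message gets through.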

This might be frustrating, because we seem so close.  But of course, that closeness is an illusion, borne of a misunderstanding of what actually happens with entanglement.

To be clear, quantum entanglement, under most interpretations of quantum mechanics, violates the spirit of special relativity.  It involves an influence of a sort between the entangled items, but it doesn’t violate the letter of relativity, since it’s not an influence we’re able to use to actually communicate anything.

Unless of course, I’m missing something?

Recommendation: What Is Real?

Last week I started listening to a Sean Carroll podcast episode, an interview of Adam Becker on his book, What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics.  Before even finishing the episode, I downloaded Becker’s book and read it.

Becker starts out in the early decades of the 20th century, when quantum physics was still being worked out.  He takes us through the early controversies starting at the famous Solvay Conference.  He covers select details of the personal lives of many of the physicists involved, including the fact that many of them were Jewish and starkly affected by the deteriorating situation in Europe in the 1930s.

He describes the effects that World War II had on the physics community, shifting its center from Europe to America, along with the postwar influx of big money from the military and corporations, turning a small community of philosophically minded physicists into a much larger and more pragmatic field, a shift that likely affected the field’s attitudes toward exploring the foundations of quantum physics.

It’s often said that the Copenhagen Interpretation of quantum mechanics is the default one, but Becker points out that there isn’t really any one interpretation that everyone agrees is the Copenhagen one.  Talk to physicists who accept it and you’ll get a fairly wide variety of viewpoints, with Niels Bohr, generally cited as the primary originator of the interpretation, himself providing a lot of contradictory commentary on it.

Measurement is a key event in most versions.  But what exactly is a measurement?  Bohr insisted that the language describing a measurement must be in ordinary language, and resisted any attempts to be more precise, such as describing the measuring apparatus from a quantum perspective.  To me, this implies an epistemic stance, about the limits on what we can know, but his language is reportedly pretty unclear on this.

From the beginning, there were people who weren’t satisfied with the Copenhagen explanation.  One of the first to provide an alternative was Louis de Broglie.  He proposed an alternative explanation at the 1927 Solvay conference involving a particle and a pilot wave, but wasn’t well prepared to answer criticisms and quickly withdrew it.

Albert Einstein, himself one of the early pioneers of quantum physics, was also not happy.  It’s commonly assumed that Einstein’s chief beef with quantum physics was its lack of determinism, but Becker points out that his real complaint was the lack of locality, and its implications for special and general relativity.  The famous EPR (Einstein-Podolsky-Rosen) paper in 1935 was mostly about the fact that quantum physics, as currently envisioned, involved “spooky action at a distance.”  This is also shown by the fact that, although Einstein knew about the de Broglie-Bohm pilot-wave interpretation, he wasn’t enthusiastic about it.  It preserved determinism, but still had non-local effects.

Speaking of the de Broglie-Bohm interpretation, Becker covers the travails of David Bohm.  Bohm had Marxist leanings, which got him into hot water during the McCarthy era.  He ended up losing his job and having to leave the country.  After independently coming to the same conclusions de Broglie had decades earlier, he was able to clean up the pilot wave interpretation.  But the taint involved with his politics likely affected the reception of his ideas.

Hugh Everett is commonly described as bitterly leaving academia after the chilly reception of his interpretation, the one that would eventually be known as the Many Worlds Interpretation (MWI).  But Becker points out that Everett never planned an academic career, wanting a more affluent lifestyle, which a career in the defense industry provided.

Given that the MWI manages to preserve both locality and determinism, I sometimes wonder what Einstein would have thought of it.  He died in 1955, a couple of years before Everett’s paper in 1957.  What would have been his attitude toward it?  It’s worth noting that Erwin Schrodinger mused about a similar possibility in 1952.  Maybe Einstein also thought of it but didn’t consider it worth the conceptual cost.

Becker also covers John Stewart Bell and his theorem.  Bell managed to take a metaphysical debate and turn it into a scientific one with possible experiments to test the idea, experiments which were later conducted.  These experiments verified the non-local nature of quantum physics (for interpretations other than MWI and its cousins).

This is a fascinating book, with a lot of interesting history.  It provides a particularly stark picture of people enduring terrible costs to their career for daring to explore radical ideas.  But it’s not without its issues.  Becker makes no pretense about being even handed.  He is a partisan in the interpretation wars.  Much of the book is a sustained attack on the Copenhagen Interpretation, and he seems to gravitate toward somewhat strawmannish versions when describing it.

This extends to his descriptions of its proponents such as Niels Bohr or Werner Heisenberg.  Bohr is often described as a slow thinker and unable to communicate clearly, and in the later parts of the book he becomes something of a nemesis, suppressing alternative ideas as they come up.  Heisenberg is described as a status conscious individual with questionable ethics leading him to work with the Nazis, then trying to spin his involvement after the war.

Heisenberg may indeed have been a piece of work.  But in the case of Bohr, his description in this book seems incompatible with the esteem in which he was held by the physics community.  In my experience, when a historical figure is described as clueless but nevertheless has consistent and ongoing success, it usually means that description is skewed, and that was the sense I got here.  Becker ascribes Bohr’s prestige to his charisma, but I doubt it was only that.  The charisma / personality explanation smacks of rationalizing, that people simply couldn’t have thought his actual ideas had merit.

My suspicion in this regard was also fueled by Becker’s discussion on the philosophy of science.  He (probably accurately) describes the influence logical positivism had on Bohr and his collaborators, but he lumps Karl Popper’s falsifiability in with the verificationism of the logical positivists (even though Popper was an opponent of logical positivism).  And Becker rails against instrumentalism, but his criticism is of a silly version that few modern instrumentalists would subscribe to.

In general, Becker seems impatient with epistemic caution.  For him, physics is about describing the world, and he doesn’t want to be constrained by things like testability.  So he’s enthusiastic for various interpretations even though none of them are uniquely testable, as well as multiverses and all the rest.  He seems unable to see any validity in the discomfort many physicists have with speculation too far removed from experimentation.

All that said, I enjoyed this book and generally do recommend it for getting an idea of the human stories associated with quantum physics, even if it’s often not an objective one.

Neanderthals and the beginnings of us

The Smithsonian has an interesting article up on what we currently know about Neanderthals.  The article details some of the internecine battles that always seem to be a part of the paleoanthropology field, in this case focusing on the capabilities of Neanderthals, whether they had art, religion, and other qualities of modern humans.

Our view of Neanderthals has undergone a radical transformation from when they were first discovered in the 19th century.  Then they were thought of as ape-men, large lumbering brutes who probably didn’t have language, clothing, or brains to speak of.  As recently as a few decades ago, in the movie Quest for Fire (one of my favorite movies, despite its flaws), Neanderthals were portrayed as mental inferiors who often acted like monkeys.

But in science, evidence always has the final word:

A new body of research has emerged that’s transformed our image of Neanderthals. Through advances in archaeology, dating, genetics, biological anthropology and many related disciplines we now know that Neanderthals not only had bigger brains than sapiens, but also walked upright and had a greater lung capacity. These ice age Eurasians were skilled toolmakers and big-game hunters who lived in large social groups, built shelters, traded jewelry, wore clothing, ate plants and cooked them, and made sticky pitch to secure their spear points by heating birch bark. Evidence is mounting that Neanderthals had a complex language and even, given the care with which they buried their dead, some form of spirituality. And as the cave art in Spain demonstrates, these early settlers had the chutzpah to enter an unwelcoming underground environment, using fire to light the way.

It seems clear now that if we were to encounter Neanderthals today, they might look a bit strange to us, but we would quickly come to regard them as people.  Indeed, that appears to be what our ancestors did.

The real game-changer came in 2013, when, after a decades-long effort to decode ancient DNA, the Max Planck Institute published the entire Neanderthal genome. It turns out that if you’re of European or Asian descent, up to 4 percent of your DNA was inherited directly from Neanderthals.

4% may not seem like much, but my understanding is that it represents a lot of interbreeding between Homo sapiens and Homo neanderthalensis.  These weren’t one-off encounters, the results of deviants from one or both species.  It indicates pretty wide integration.

Decades ago, there were two prevailing theories about how modern humans evolved.  One held that we had gradually evolved from earlier Homo species, primarily Homo erectus, throughout the world, with ongoing genetic exchanges.  In this model, called Multiregional Evolution, Europeans evolved mostly separately from eastern Asians who evolved mostly separately from Africans, etc.

The other view, called the Replacement model, or Recent African Origin theory, held that modern humans had evolved in Africa, and then sometime in the last 50,000-100,000 years had migrated out and spread throughout the world, displacing any other Homo species they encountered.

The debate between these two views raged on for decades, with the evidence gradually growing in favor of the Replacement model, before genetic research finally weighed in on it and sealed the deal.  It turns out that modern humans evolved in Africa within the last 200,000-300,000 years.  All of us today are descended from these Africans.  A branch of humanity migrated out of Africa sometime between 60,000 and 80,000 years ago, spreading throughout the world.  All non-Africans are descended from this branch.

But while the Replacement model was mostly right, it wasn’t entirely right.  As mentioned above, further research showed that non-Africans have DNA from other branches of humanity.  European ancestors interbred with Neanderthals, and Asian ancestors probably interbred with another branch of humanity called Denisovans.

One of the theories about why these other branches of humanity died out, prevalent until just a few years ago, was that Homo sapiens probably wiped them out.  I have to admit that this dark genocidal theory seemed plausible to me at the time.  Neanderthals in particular had been around for hundreds of thousands of years, only disappearing when modern humans came around.

But it now strikes me as more plausible that Neanderthals weren’t wiped out.  They were assimilated.  This is referred to as the Assimilation Model in the article.  The population of Neanderthals was never more than a few thousand individuals, while the incoming Homo sapiens population was reportedly in the tens of thousands.  It seems likely that what happened was some degree of interbreeding, merging, and assimilation.

I’m sure that doesn’t mean it was all sweetness and light.  Homo sapiens were an invading force.  I’m sure there was conflict, and some of it was probably brutal.  There’s too much continuity in violent behavior from other primates to humans to think it wouldn’t have happened.  But we’re also a pragmatic species, one whose members will make alliances when it’s the best option.  It seems clear that happened in at least some portion of the encounters.

All of which indicates that Homo sapiens and Neanderthals had enough in common to recognize each other’s humanity.  Which also means that their common ancestor, Homo heidelbergensis, who lived from 700,000 to 300,000 years ago, likely had many of the qualities we’d recognize in people.  There’s no evidence they had what’s now called behavioral modernity, including symbolic thought, but they must have had a lot of what makes us…us, including perhaps an early form of language, or proto-language.

But this is a field where new evidence is constantly being uncovered and paradigms shifted, so we should probably expect more surprises in the years to come.

Do all quantum trails inevitably lead to Everett?

I’ve been thinking lately about quantum physics, a topic that seems to attract all sorts of crazy speculation and intense controversy, which seems inevitable.  Quantum mechanics challenges our most deeply held, most cherished beliefs about how reality works.  If you study the quantum world and you don’t come away deeply unsettled, then you simply haven’t properly engaged with it.  (I originally wrote “understood” in the previous sentence instead of “engaged”, but the ghost of Richard Feynman reminded me that if you think you understand quantum mechanics, you don’t understand quantum mechanics.)

At the heart of the issue are facts such as that quantum particles operate as waves until someone “looks” at them, or more precisely, “measures” them, then they instantly begin behaving like particles with definite positions.  There are other quantum properties, such as spin, which show similar dualities.   Quantum objects in their pre-measurement states are referred to as being in a superposition.  That superposition appears to instantly disappear when the measurement happens, with the object “choosing” a particular path, position, or state.

How do we know that the quantum objects are in this superposition before we look at them?   Because in their superposition states, the spread out parts interfere with each other.  This is evident in the famous double slit experiment, where single particles shot through the slits one at a time interfere with themselves to produce the interference pattern that waves normally produce.  If you’re not familiar with this experiment and its crazy implications, it’s worth checking out a video of it.  The sketch below gives a rough numerical flavor of where that pattern comes from.
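This is a toy calculation, not a full physical model (all the numbers are made up for illustration).  It treats each slit as the source of a complex amplitude and compares adding the amplitudes and then squaring, which is what quantum mechanics prescribes, with squaring each path and then adding, which is what classical particles would give.

```python
import numpy as np

wavelength = 1.0
slit_separation = 5.0
screen_distance = 100.0
k = 2 * np.pi / wavelength
x = np.linspace(-30, 30, 7)   # a few sample positions on the screen

def path_length(x_screen, slit_y):
    # Distance from a slit (offset slit_y from center) to a point on the screen
    return np.sqrt(screen_distance**2 + (x_screen - slit_y)**2)

amp_top = np.exp(1j * k * path_length(x, +slit_separation / 2))
amp_bottom = np.exp(1j * k * path_length(x, -slit_separation / 2))

fringes = np.abs(amp_top + amp_bottom) ** 2                  # add amplitudes, then square
no_fringes = np.abs(amp_top) ** 2 + np.abs(amp_bottom) ** 2  # square each path, then add

for xi, q, c in zip(x, fringes, no_fringes):
    print(f"x = {xi:+6.1f}:  amplitudes-then-square = {q:4.2f}   square-then-add = {c:4.2f}")
```

The first column oscillates between roughly 0 and 4 across the screen (the fringes), while the second sits at 2 everywhere.  Single particles fired one at a time produce the first pattern, which is why the wave can’t be dismissed as mere ignorance about where the particle is.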

So, what’s going on here?  What happens when the superposition disappears?  The mathematics of quantum theory are reportedly rock solid.  From a straight calculation standpoint, physicists know what to do.  Which leads many of them to decry any attempt to further explain what’s happening.  The phrase, “shut up and calculate,” is often exclaimed to pesky students who want to understand what is happening.  This seems to be the oldest and most widely accepted attitude toward quantum mechanics in physics.

From what I understand, the original Copenhagen Interpretation was very much an instrumental view of quantum physics.  It decried any attempt to explore beyond the observations and mathematics as hopeless speculation.  (I say “original” because there are a plethora of views under the Copenhagen label, and many of them make ontological assertions that the original formulation seemed to avoid, such as insisting that there is no other reality than what is described.)

Under this view, the wave of the quantum object evolves under the wave function, a mathematical construct.  When a measurement is attempted, the wave function “collapses”, which is just a fancy way of saying it disappears.  The superposition becomes a definite state.
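In textbook terms (this is the standard collapse postulate, not anything peculiar to one school of Copenhagen), the rule is that a superposition jumps to just one of its components, with probabilities given by the squared amplitudes:

```latex
% Before measurement: a superposition over possible outcomes
|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad \sum_i |c_i|^2 = 1

% On measurement: the state discontinuously becomes one outcome
|\psi\rangle \;\longrightarrow\; |k\rangle
  \quad\text{with probability } |c_k|^2
```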

What exactly causes the collapse?  What does “measurement” or “observation” mean in this context?  It isn’t interaction with just another quantum object.  Molecules have been held in quantum superposition, including, as a recent experiment demonstrates, ones with thousands of atoms.  For a molecule to hold together, chemical bonds have to form, and for the individual atoms to hold together, the components have to exchange bosons (photons, gluons, etc.) with each other.  All this happens and apparently fails to cause a collapse in otherwise isolated systems.

One proposal thrown out decades ago, which has long been a favorite of New Age spiritualists and similarly minded people, is that maybe consciousness causes the collapse.  In other words, maybe it doesn’t happen until we look at it.  However, most physicists don’t give this notion much weight.  And the difficulties of engineering quantum computers, which require that a superposition be maintained to get their processing benefits, seem to show (to the great annoyance of engineers) that systems with no interaction with consciousness still experience collapse.

What appears to cause the collapse is interaction with the environment.  But what exactly is “the environment”?  For an atom in a molecule, the environment would be the rest of the molecule, but an isolated molecule seems capable of maintaining its superposition.  How complex or vast does the interacting system need to be to cause the collapse?  The Copenhagen Interpretation merely says a macroscopic object, such as a measuring apparatus, but that’s an imprecise term.  At what point do we leave the microscopic realm and enter the classical macroscopic realm?  Experiments that succeed at isolating ever larger macromolecules seem able to preserve the quantum superposition.

If we move beyond the Copenhagen Interpretation, we encounter propositions that maybe the collapse doesn’t really happen.  The oldest of these is the de Broglie-Bohm Interpretation.  In it, there is always a particle that is guided by a pilot wave.  The pilot wave appears to disappear on measurement, but what’s really happening is that the wave decoheres, losing its coherence into the environment, causing the particle to behave like a freestanding particle.

The problem is that this interpretation is explicitly non-local: disrupt any part of the wave, no matter how far from the particle, and the whole thing instantly ceases to have any effect on the particle.  Non-locality, essentially action at a distance, is considered anathema in physics.  (Although it’s often asserted that quantum entanglement makes it unavoidable.)

The most controversial proposition is that maybe the collapse never happens and that the superposition continues, spreading to other systems.  The elegance of this interpretation is that it essentially allows the system to continue evolving according to the Schrödinger equation, the central equation in the mathematics of quantum mechanics.  From an Occam’s razor standpoint, this looks promising.

Well, except for a pesky detail.  We don’t observe the surrounding environment going into a superposition.  After a measurement, the measuring apparatus and lab setup seem just as singular as they always have.  But this is sloppy thinking.  Under this proposition, the measuring apparatus and lab have gone into superposition.  We don’t observe it because we ourselves have gone into superposition.

In other words, there’s a version of the measuring apparatus that measures the particle going one way, and a version that measures it going the other way.  There’s a version of the scientist that sees the measurement one way, and another version of the scientist that sees it the other way.  When they call their colleague to tell them about the results, the colleague goes into superposition.  When they publish their results, the journal goes into superposition.  When we read the paper, we go into superposition.  The superposition spreads ever farther out into spacetime.

We don’t see interference between the branches of superpositions because the waves have decohered, lost their phase relationship with each other.  Brian Greene in The Hidden Reality points out that it may be possible in principle to measure some remnant interference from the decohered waves, but it would be extremely difficult.  Another physicist compared it to trying to measure the effects of Jupiter’s gravity on a satellite orbiting the Earth: possible in principle but beyond the precision of our current instruments.

Until that becomes possible, we have to consider each path as its own separate causal framework.  Each quantum event expands the overall wave function of the universe, making each one its own separate branch of causality, in essence, its own separate universe or world, which is why this proposition is generally known as the Many Worlds Interpretation.

Which interpretation is reality?  Obviously there are a lot more of them than I mentioned here, so this post is unavoidably narrow in its consideration.  To me, the (instrumental) Copenhagen Interpretation has the benefit of being epistemically humble.  Years ago, I was attracted to the de Broglie-Bohm Interpretation, but it has a lot of problems and is not well regarded by most physicists.

The Many Worlds Interpretation seems absurd, but we need to remember that the interpretation itself isn’t so much absurd, but its implications.  Criticizing the interpretation because of those implications, as this Quanta Magazine piece does, seems unproductive, akin to criticizing general relativity because we don’t like the relativity of simultaneity, or evolution because we don’t like what it says about humanity’s place in nature.

With every experiment that increases the maximally observed size of quantum objects, the more likely it seems to me that the whole universe is essentially quantum, and the more inevitable this interpretation seems.

Now, it may be possible that Hugh Everett III, the originator of this interpretation, was right that the wave function never collapses, but that some other factor prevents the unseen parts of the post-measurement wave from actually being real.  Referred to as the unreal version of the interpretation, this seems to be the position of a lot of physicists.  Since we have no present way of testing the proposition in the manner Brian Greene suggested, we can’t know.

From a scientific perspective then, it seems like the most responsible position is agnosticism.  But from an emotional perspective, I have to admit that the elegance of spreading superpositions is appealing to me, even if I’m very aware that there’s no way to test the implications.

What do you think?  Am I missing anything?  Are there actual physics problems with the Many Worlds Interpretation that should disqualify it?  Or other interpretations that we should be considering?

Are the social sciences “real” science?

YouTube channel Crash Course is starting a new series on what is perhaps the most social of social sciences: Sociology.

The social sciences, such as sociology, but also psychology, economics, anthropology, and other similar fields get a lot of grief from people about not being “real” science.  This criticism is typically justified by noting that scientific theories are about making predictions, and the ability of the social sciences to make predictions seems far weaker than, say, particle physics.  Economists couldn’t predict when the Great Recession was coming, the argument goes, so it’s not a science.

But this ignores the fact that predictions are not always possible in the natural sciences either.  Physics is the hardest of hard sciences, but it’s married to astronomy, an observational science.  Astronomers can’t predict when the star Betelgeuse will go supernova.  But they still know a great deal about star life cycles, and can tell that Betelgeuse is in a stage where it could go any time in the next few million years.

Likewise biologists can’t predict when and how a virus will mutate.  They understand evolution well enough to know that viruses will mutate, but predicting what direction those mutations will take is impossible.  Meteorologists can’t predict the precise path of a hurricane, even though they understand how hurricanes develop and what factors lead to the path they take.

The problem is that these are matters not directly testable in controlled experiments.  Which is exactly the problem with predicting what will happen in economies.  In all of these cases, controlled experiments, where the variables are isolated until the causal link is found, are impossible.  So scientists have little choice but to do careful observation and recording, and look for patterns in the data.

Just as an astronomer knows Betelgeuse will eventually go supernova, an economist knows that tightening the money supply will send contractionary pressures through the economy.  They can’t predict that the economy will definitely shrink if the money supply is tightened, because other confounding variables might affect the outcome, but they know from decades of observation that economic growth will be slower than it otherwise would have been.  This is an important insight to have.

In the same manner, many of the patterns studied in the other social sciences don’t provide precise predictive power, but they still give valuable insights into what is happening.  And again, there are many cases in the natural sciences where this same situation exists.

Why then all the criticism of the social sciences?  I think the real reason is that the results of social science studies often have socially controversial conclusions.  Many people dislike these conclusions.  Often these people are social conservatives upset that studies don’t validate their cherished notions, such as traditionally held values.  But many liberals deny science just as vigorously when it violates their ideologies.

Not that everything is ideal in these fields.  I think anthropology ethnographers often get too close to their subject matter, living among the culture they’re studying for years at a time.  While this provides deep insights not available through other methods, it taints any conclusions with the researcher’s subjective viewpoint.  Often follow-up studies don’t have the same findings.  This seems to make ethnographies, a valuable source of cultural information, more journalism than science.

And psychology has been experiencing a notorious replication crisis for the last several years, where previously accepted psychological effects are not being reproduced in follow-up studies.  But the replication crisis was first recognized by people in the field, and the field as a whole appears to be gradually working out the issues.

When considering the replication crisis, it pays to remember the controversy over the last several years in theoretical physics.  Unable to test their theories, some theorists have called for those theories not to be held to the classic testing standard.  Many in the field are pushing back, and theoretical physics is also working through the issues.

In the end, science is always a difficult endeavor, even when controlled experiments are possible.  Looking at the world to see patterns, developing theories about those patterns, and then putting them to the test, facing possible failure, is always a hard enterprise.

It’s made more difficult when your subjects have minds of their own, with their own agendas, and can alter their behaviors when observed.  This puts the social sciences into what philosopher Alex Rosenberg calls an arms race, where science uncovers a particular pattern, people learn about it, alter their behavior based on their knowledge of it, and effectively change the pattern out from under the science.

But like all sciences, it still produces information we wouldn’t have otherwise had.  And as long as it’s based on careful rigorous observation, with theories subject to revision or refutation on those observations, I think it deserves the label “science”.

What do scientific theories actually tell us about the world?

One of the things that’s exciting about learning new things is that often a new understanding in one area sheds light on what might seem like a completely separate topic.  For me, information about how the brain works appears to have shed new light on a question in the philosophy of science, where there has long been a debate about the epistemic nature of scientific theories.

Spacetime lattice.  Image credit: mysid via Wikipedia

One camp holds that scientific theories reflect reality, at least to some level of approximation.  So when we talk about space being warped in general relativity, or the behavior of fermions and bosons, there is actually something “out there” that corresponds to those concepts.  There is something actually being warped, and there actually are tiny particles and/or waves that are being described in particle physics.  This camp is scientific realism.

The opposing camp believes that scientific theories are only frameworks we build to predict observations.  The stories we tell ourselves associated with those predictive frameworks may or may not correspond to any underlying reality.  All we can know is whether the theory successfully makes its predictions.  This camp is instrumentalism.

The vast majority of scientists are realists.  This makes sense when you consider the motivation needed to spend hours of your life in a lab doing experiments, or to endure the discomforts and hazards of field work.  It’s pretty hard for geologists to visit the Antarctic for samples, or for biologists to crawl through the mud for specimens, if they don’t see themselves as being, in some way, in pursuit of truth.

But the instrumentalists tend to point out all the successful scientific theories that could accurately predict observations, at least for a time, but were eventually shown to be wrong.

The prime example is Ptolemy’s ancient theory of the universe, a precise mathematical model of the Aristotelian view of geocentrism, the idea that the Earth is the center of the universe with everything revolving around it.  For centuries, Ptolemy’s model accurately predicted naked eye observations of the heavens.

But we know today that it is completely wrong.  As Copernicus pointed out in the 1500s, the Earth orbits the sun.  Interestingly, many science historians have pointed out that Copernicus’ model actually wasn’t any better at making predictions than Ptolemy’s, at least until Galileo started making observations through a telescope.  Indeed, the first printing of Copernicus’ theory had an unsigned preface, added by the theologian Andreas Osiander, probably hoping to head off controversy, saying the ideas presented might only be a predictive framework unrelated to actual reality.

For a long time, I was agnostic between realism and instrumentalism.  Emotionally, scientific realism is hard to shake.  Without it, science seems little more than an endeavor to lay the groundwork for technology, for practical applications of its findings.  Many instrumentalists are happy to see it in that light; a lot of them tend to be philosophers, theologians, and others who may be less than thrilled with the implications of scientific findings.

However I do think it’s important for scientists, and anyone assessing scientific theories, to be able to put on the instrumentalist cap from time to time, to conservatively assess which parts of a theory are actually predictive, and which may just be speculative baggage.

But here’s the thing.  Often what we’re really talking about is the difference between the raw mathematics of a theory and its language description, including the metaphors and analogies we use to understand it.  The idea is that the mathematics might be right, but the rest wrong.

But the language part of a theory is a description of a mental understanding of what’s happening.  That understanding is a model we build in our brains, a neural firing pattern that may or may not be isomorphic with patterns in the world.  And as I’ve discussed in my consciousness posts, the model building mechanism evolved for an adaptive purpose: to make predictions.

In other words, the language description of a theory is itself a predictive model.  Its predictions may not be as precise as the mathematical portions, and they may not be currently testable in the same manner as the mathematics (assuming those mathematics are actually testable; I’m looking at you, string theorists), but it still makes predictions.

Using the Ptolemy example above, the language model did make predictions.  It’s just that many of its predictions couldn’t be tested until telescopes became available.  Once they could be, the Ptolemaic model quickly fell from favor.  (At least it was quick on historical time scales.  It wasn’t quick enough to avoid making Galileo’s final years miserable.)  As many have pointed out, it wasn’t that Copernicus’ model made precisely right predictions, but it was far less wrong than Ptolemy’s.

When you think about it, any mental model we hold makes predictions.  The predictions might not be testable, currently or ever, but they’re still there.  Even religious or metaphysical beliefs make predictions, such as whether we’ll wake up in an afterlife after we die.  They’re just predictions we may never be able to test in this world.

This means that the distinction between scientific realism and instrumentalism is an artificial one.  It’s really just a distinction between aspects of a theory that can be tested, and the currently untestable aspects.  Often the divide is between the mathematical portions and the language portions, but the only real difference there is that the mathematical predictions are precise, whereas the language ones are less precise, to varying degrees.

Of course, I’m basing this insight on a scientific theory about how the brain works.  If that theory eventually ends up failing in its predictions, it might have implications for the epistemic point I’m making here, for the revision to our model of scientific knowledge I think is warranted.

And idealists might note that I’m also assuming that brains exist, that along with the rest of the external world they aren’t an illusion.  I have to concede that’s true: even if this understanding makes accurate and useful predictions, within idealism it still wouldn’t be mapping to actual reality.  But given that I’m also assuming that all you other minds exist out there, it’s a stipulation I’m comfortable with.

As always, it might be that I’m missing something.  If so, I hope you’ll set me straight in the comments.

Consciousness is composed of non-consciousness

The components of a thing are not individually the thing.

For example, the components of the chair I type most of my blog posts from are not the chair itself, but the wood of the frame, the springs for the back and bottom, some metal parts for the reclining mechanism, the fabric coverings, cushions, etc.  None of these things by themselves are the chair.  Only together do they make a chair.  (Although you could point to a subset of the assemblage and perhaps still call it a chair, such as the chair without the armrests.  But that sub-assemblage would still be composed of constituents that are not the chair.)

Traffic is composed of a number of vehicles on the road at the same time.  But the road itself isn’t traffic.  Neither are the cars, or the drivers, or the traffic lights, or any accidents.  But together, they make up what we collectively call “traffic.”

A human society is composed of individual people.  No person by themselves makes up a society; it requires multiple people.  (I don’t know if two people would count as a society, but I’m pretty sure one person wouldn’t.)

Of course, these are fairly banal examples.  But there are others where the relationship between the whole and its parts might become more difficult to accept.

Consider life.  Complex animals such as humans are composed of organs, which are composed of cells, all of which are alive.  But a cell is composed of molecular machinery that is not, by itself, alive.  And that machinery is itself composed of molecules which simply behave according to the laws of chemistry and electricity.  In other words, life is composed of non-life.

Years ago, when I was reading about morality, I remember learning about the meta-ethical debate on whether morality eventually reduces to non-moral facts.  If you see moral rules as having some sort of platonic objectivity, then you might say no, that moral rules only reduce to themselves.  But if you see those rules as arising from nature, society, culture, etc, then the answer would probably be yes.

In physics, the laws of thermodynamics are often said to emerge out of the movement and speed of particles.  In other words, the components of thermodynamics are not thermodynamics, but the mechanics of particles.
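
For a concrete example of this emergence, consider the standard kinetic-theory relations for an ideal gas (a textbook illustration, nothing specific to this argument): the macroscopic temperature and pressure can be written directly in terms of molecular motion, even though no individual molecule has a temperature or a pressure of its own.

⟨E_k⟩ = (3/2) k_B T      (average translational kinetic energy per molecule at temperature T)
P = (1/3) n m ⟨v²⟩ = n k_B T      (pressure in terms of number density n, molecular mass m, and mean squared speed)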

Quantum physics seems to show that classical physics is composed of entities that are not themselves part of classical physics.  Whatever is happening with wave-particle duality, it doesn’t appear to obey classical laws.  Indeed, the classical laws appear to emerge from quantum mechanics.

This is one reason why I’m leery of signing on to any one interpretation of quantum physics.  It seems like many of the interpretations attempt to understand quantum mechanics in classical terms, with each interpretation sacrificing a different aspect of the classical understanding of reality to preserve a more cherished aspect.  But if quantum mechanics represents, in essence, the components that make up the classical world, this could ultimately prove to be a hopeless endeavor.  (Or it might not; I’m not really committed to either viewpoint.)  Quantum mechanics might ultimately just have to be accepted on its own terms, without any metaphor or analogy to translate it into everyday terms.

This is also why I’m usually skeptical of using reason and logic, built on patterns and mechanics as they exist in our universe, to extrapolate about the origins of the universe, or about other universes.  Granted, it’s not like we have much choice for this kind of reasoning, since we are part of this universe and it is everything we’ve ever known.  But this seems like an area where our confidence shouldn’t be high.  There doesn’t seem to be any reason to assume that the components or origins of the universe must behave according to how things work within the universe.

Another area where it seems like people are extremely resistant to accepting this principle is minds, particularly consciousness.  But if the mind exists as a system in this universe, and I think the evidence pretty strongly points in that direction, then that means that a mind is ultimately composed of constituents that are not themselves a mind, that aren’t cognition.  This is definitely true for anyone who accepts the computational theory of mind, but it seems like it would be true for any materialistic theory of mind.

This means that consciousness itself must ultimately be composed of things that are not themselves conscious.  It seems like the hard problem, which troubles many people, is, in essence, resistance to this idea, an insistence that consciousness, whatever it is, must remain whole and indivisible.  If that is your attitude, then the hard problem, how this indivisible thing arises from a physical system that is itself divisible, would indeed seem like a major problem.

I also think this is why notions like panpsychism often arise.  Panpsychism seems like a way to reconcile this divide.  It isn’t that consciousness is composed of non-conscious constituents, but that everything is conscious to one degree or another, and human consciousness is simply the sum total of all the consciousness of its components.

But it seems like this assumption, that consciousness is indivisible, is one we should scrutinize.  Why would it be immune from the relationship of just about every other whole to its constituents?  What is different about it?  At least aside from it being our most primal experience of things?

Vision and hearing processing centers of the brain.  Image credit: Selket via Wikipedia

It seems like if consciousness were truly indivisible, we’d see that reflected in brain damaged patients.  Either a person would retain consciousness or they wouldn’t.  But it appears that it is possible, with disorders of consciousness, to have varying degrees of it.  And there are a number of cases where a person retains some aspects of consciousness, but not others.

For example, patients with a condition known as hemispatial neglect lose the ability to perceive one side of the world.  To be clear, it isn’t so much that they can’t see that side, but that that side of the world becomes inconceivable to them.  Other conditions, referred to as agnosias, can render a patient unable to recognize faces or sounds, or more generally to process a number of sensory perceptions.

All of which seems to strengthen the case that consciousness is not immune from the general principle that things are ultimately composed of constituents that are not that thing, that consciousness is divisible, ultimately composed of non-conscious constituents.

Unless, of course, I’m missing something?