The role of beauty and simplicity in scientific theories

In the post on Copernicus earlier this week, I noted that his heliocentric theory, right from its initial publication, was hailed as far more mathematically elegant than the Aristotelian / Ptolemaic system, which was taken as the canonical model of the universe at the time.  But while everyone hailed Copernican mathematics, virtually no one accepted its ontology.  The idea that the Earth moved around the Sun was simply too radical for most people.

It’s easy to forget today the searing shift in perspective that took place between Copernicus’ publication of his theory in 1543, and Newton’s publication of his theory of gravitation and mechanics in 1687.  In that period, we went from being the center of creation, with the universe literally revolving around us, to an insignificant speck in an incomprehensibly vast darkness.  It’s probably not a coincidence that the first modern atheists and deists arose in the 17th and 18th centuries.

In retrospect, the mathematical elegance of Copernicanism was a major clue.  But as I noted in that post, we have to be careful with generalizing from that.  This morning, Aeon highlighted an article by Massimo Pigliucci from earlier this year (which I apparently completely missed at the time), pointing out that Richard Feynman often asserted that truth could be recognized by its beauty and simplicity.  Massimo’s thesis is that, given the failure of things like supersymmetry and other related theories, Feynman was wrong.

I think Massimo, along with similar critics such as Sabine Hossenfelder, Peter Woit, and Jim Baggott, is right, to an extent.  No theory should be accepted purely on the basis of its beauty, simplicity, or elegance, that is, its aesthetics.  But given the Copernican story, it also seems excessively hasty to completely dismiss such theories.

On the one hand, if Copernicus had held to such a philosophy, he might have avoided engaging in what, in his time, amounted to metaphysical speculation.  Eventually telescopic observations would have forced the matter, and a new model would have needed to be developed in the 17th century.

On the other hand, Copernicus’ theory arguably spurred decades of discussion, setting up the intellectual atmosphere that inspired figures like Tycho Brahe, Kepler, and Galileo.  All observation is theory-laden.  How much longer would science have taken to reach the same conclusions without Copernicus’ theoretical work?  There’s probably no way to know.

In addition, we have to admit that no theory, even one with a long history of successfully predicting observations, is ever the only explanation for those observations.  There are always alternate models.  We can say we use Occam’s razor to select one, but “simplicity” is often just another name for the aesthetic aspects that are really used in that selection.

I think the right middle ground, then, is that logical or mathematical elegance is fine for admitting a theory into the candidate-for-reality category.  If it gets falsified, then we can dismiss it.  If it’s the simplest theory and racks up predictive success, then we can accept it as the best explanation, until a better one comes along.  That appears to be as much certitude as anyone’s going to get.

Unless of course I’m missing something?


Lost in Space

[Image: Lost in Space poster showing Will with an image of the robot in the background]

When I was very young in the early 70s, I remember coming home after school and watching afternoon TV, a lot of syndicated shows from the 60s.  One of those shows was the original Star Trek, in the early years of its syndication run that would pull it out of oblivion and eventually turn it into a major franchise.

But there was another show, which for my five-to-seven-year-old self was just as thrilling as Star Trek, if not actually more so: Lost in Space.  While Star Trek has aged relatively well (particularly after its 2006 visual remastering), Lost in Space hasn’t at all.  We’re not talking special effects or production values here, but the overall intelligence of the stories, which probably never worked for anyone but very young kids.

When I tried to re-watch it a few years ago, the early parts of the first season weren’t too bad (with allowance for 1960s limitations), but the later parts and subsequent seasons were unwatchable.  It quickly became apparent that attempting to watch them as an adult would just sully my childhood memories.

There was an attempted revival in the late 90s with a theatrical movie, which I remember enjoying, but it was universally reviled.  A lot of people at the time actually seemed to miss the “campy charm” of the original series.

But the new Netflix remake, which just released its second season, seems to be faring far better.  I find that it actually manages to capture the spirit of the original show without falling too far into its cartoonish nature.

It does this by reimagining the original premise.  Of course, the original show was itself probably a reimagining of the old comic book series Space Family Robinson (although the lineage is unclear), which was itself obviously a reimagining of the classic Swiss Family Robinson from 1812.  So there’s plenty of precedent here.

The new show modernizes family roles, putting the women on an equal footing with the men, with Maureen, the wife, now the mission commander.  (The old show had portrayed everyone in very traditional roles.)  But the moral lessons and family-values aspect so traditional to Family Robinson stories are still there.  And the overall feel of the show is pretty life-affirming, with a theme that, while they may make tragic mistakes, people are generally good.

One thing the show does particularly well is make the villain, Doctor Smith, work in a believable manner.  Even as a kid, I wondered what was wrong with the Robinsons, why they couldn’t see Smith for the skunk he obviously was.  In the new show, Smith, now a woman, manages to convincingly sit on the boundary.  She is capable of utter cringe-inducing villainy, but also selfless heroics, and, particularly in the new season, has very sympathetic moments.

The show does continue the Lost in Space tradition of having a very loose relationship with scientific reality, although it’s not really any worse than most TV space shows.  (It certainly isn’t any worse than The Mandalorian, which I’m also enjoying, albeit as typical Star Wars fantasy.)  Unlike Star Wars, this show idealizes science and mathematics, even if it mixes in a little magic here and there, such as Will Robinson’s apparently paranormal connection to the (now alien) robot.

The show also continues the Family Robinson tradition of having the family face more novel and dangerous situations than could realistically show up anywhere.  No opportunity is lost for the characters to experience danger.  This is particularly noticeable when binging through the episodes as I did.

On the other hand, the production values are excellent and the show is a lot of fun.  As I noted above, it does manage to capture the spirit of the original, a mix of wonder, such as large inscrutable alien installations, with the dread of various alien monsters.  All countered by the spirit of people working together.

Well worth checking out if space adventure is your cup of tea.  Although be forewarned, like the first season, the second one ends on a cliffhanger.


Merry Christmas

A year ago, I wrote that I hoped there would be more time for blogging in the next year.  That time would eventually materialize, but not for several months.  Yet, the frequency of posts immediately spiked, and has stayed higher ever since.  Did I just end up making more time?  Not quite.

I realized after that post that it wasn’t anything unusual for me to hammer out a 250-500 word comment in a conversation.  What then was stopping me from hammering out a blog post of the same size?  So I lowered the threshold for doing a post, giving myself permission to do only a 100-word one, or shorter.  Of course, I’m too much of a big mouth, so the posts are almost never 100 words, but the promise that I could stop at that length made it much easier to start.  And as many of you know, in writing, starting is half the battle.

The result is 120 posts since the Christmas one last year, 117 in 2019 proper.  According to my site stats, that’s the best year since 2014, which had almost 500 posts.  (I used to share articles via the blog, but most of that activity now happens on Twitter.)

And together, we’ve knocked out 5608 comments here so far in 2019.  Thank you!  It’s the discussions with friends that make blogging worth it.  For me, that’s what really made the last year enjoyable.  Intelligent, often eye-opening conversation is what it’s all about!

I read an article this morning about why winter solstice celebrations persist across human cultures.  It’s the ultimate origin for the holiday season.  The religious observances were additions, scheduled to occur on, or perhaps co-opt, existing celebrations.  The article focuses on the natural renewal aspects of the solstice, the fact that the sun starts traveling higher again in the sky, and the days start becoming longer again.

I think there’s something to that.  And maybe that was the original impetus.  But my own strong suspicion is that what the winter holidays are really about today is celebration of our friends and family, a celebration that evolved to lift spirits in the dead of winter.  (I do wonder how this works for people living in the southern hemisphere, where the seasons are reversed.)

With that in mind, I hope all of you, my online friends, are having a safe, comfortable, and enjoyable holiday season.

Merry Christmas!


A theory more pleasing to the mind

For most of human history, the Earth was seen as the stationary center of the universe, with the sun, planets, and starry firmament circling around it at various speeds.  The ancient Greeks quickly managed to work out that the Earth was spherical but struggled to explain the motions of the heavens.

Eventually Eudoxus, a student of Plato, worked out a mathematical model involving concentric spheres, which the planets rode on around the Earth in perfect circles, with the outermost sphere being the firmament.  Aristotle took this model and posited it as a physical one, with the spheres being crystalline orbs.

Several centuries later, Ptolemy worked out a fairly rigorous model based on these views that was mostly predictive of astronomical observations.  But while the Ptolemaic system worked, it was widely regarded as problematic, positing a lot of ugly conceptions, such as epicycles, to explain what was happening.

Early on, there were people who pointed out that changing some basic assumptions might simplify the model.  Aristarchus of Samos came up with a heliocentric model way back in the 3rd century BC, with the sun at the center and everything, including the Earth, revolving around it.  But heliocentrism didn’t seem to garner a substantial following among ancient astronomers.

When Copernicus developed his heliocentric model, he was careful to cite Aristarchus and other sources, to make sure that his readers knew there was ancient precedent for the idea, that it wasn’t something completely novel.  At this point in history, in the early 16th century, the completely new remained suspect, even after the discovery of the new world.

Copernicus was a mathematician and his model was rigorous, but it wasn’t without its own problems.  Copernicus retained the spheres and perfect circles, so he had to add his own kludgy anomalies, although these were seen as less severe than the ones in the Ptolemaic model.

In many ways, Copernicus’ model was no better than the Ptolemaic one at making predictions.  His primary justification for it was that the Ptolemaic system was a “monster”, and that his own model was, “more pleasing to the mind.”

Copernicus, fearing the ridicule his theory might provoke, held off publishing it until the end of his life, although he had circulated rough outlines of it earlier.  After publication, there wasn’t much ridicule.  There actually wasn’t much reaction at all.  Astronomers found his mathematics more elegant than Ptolemy’s and were happy to use his system, but most regarded his physics as a convenient fiction.

Throughout the 1500s, it’s estimated that there may only have been about a dozen astronomers in Europe who were convinced Copernicans.  Aside from the reasons noted above, heliocentrism was seen as simply absurd, too much of a departure from common sense.  And that was aside from the issue that it may have contradicted scripture.

And there remained many unanswered questions.  If the Earth moved, why didn’t everyone feel the movement?  Why weren’t birds in flight or clouds affected?  Ancient philosophy held that the element earth fell toward the center of the universe.  But if the Earth wasn’t at the center, then what caused things to fall toward it?  And if the Earth changed position throughout the year, why weren’t the stars seen to shift in relative position, to exhibit parallax?  Not having detectable parallax would mean they were incomprehensibly far away.
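To get a rough sense of the distances the missing parallax implied (a back-of-envelope sketch of my own, assuming naked-eye instruments of the era could detect a shift of about one arcminute), the annual parallax p satisfies tan(p) = 1 AU / distance, so seeing no shift at all puts a floor on how far away the stars must be:

```python
import math

# A rough sketch of the no-parallax argument (my own assumed numbers):
# if naked-eye instruments could detect an annual shift of ~1 arcminute,
# then seeing none at all means the stars lie beyond 1 AU / tan(1').
p = math.radians(1 / 60)       # 1 arcminute, in radians
d_au = 1 / math.tan(p)         # minimum distance, in astronomical units
print(f"at least {d_au:,.0f} AU away")   # ~3,438 AU
```

Thousands of times the Earth-Sun distance was unfathomable for the 16th century, and reality turned out to be far worse: the nearest stars show parallaxes under an arcsecond, putting them hundreds of thousands of AU away.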

The Ptolemaic system would weaken in the later part of the century.  Tycho Brahe, making more rigorous and precise naked eye observations than anyone had ever made before, discovered novas, indicating that the heavens could change, and observed comets that seemed to cross the locations of the supposed spheres, implying the spheres didn’t really exist.  But these were issues for Copernicus’ model just as much as they were for Ptolemy’s.

Open-minded astronomers continued to note Copernicus’ theory as an interesting, if somewhat bizarre speculation, but only a few adopted it.  The result was that in 1600, 57 years after its publication, Copernicus’ theory seemed in danger of going down the same path as Aristarchus’ earlier proposition, of being little more than a footnote of history.

Then the telescope was invented in 1608, and Galileo took the design, improved it, used it to look at the heavens in 1609, and published his results.  It was only at this point that the different predictions between the models could be tested.  Galileo’s observations were much more compatible with Copernicanism.

Galileo would eventually get in trouble with the church for his subsequent advocacy of heliocentrism, but as the observations accumulated, the reality became increasingly undeniable.  Newton would eventually answer many of the lingering questions caused by the new model.  By then, virtually all astronomers were Copernicans.

What interests me about this story is, what could people in the 1500s have done to better assess the Copernican model?  In 1543, when it was published, it amounted to an alternate theory that largely made the same observable predictions as the existing one.  It didn’t really make fewer assumptions than the Ptolemaic one, so parsimony (Occam’s razor) wasn’t much of a guide.

The main thing it seemed to have going for it was its more convenient mathematics.  Everyone acknowledged its mathematical elegance early on.  And many astronomers seemed willing to use those mathematics, even while not accepting the implied reality.  A preface added to Copernicus’ book, although not written by Copernicus himself, even suggested that approach.

It wouldn’t be the last time in science that someone said, “Don’t worry.  This is just a mathematical convenience, an accounting gimmick.  It’s not like this crazy thing is true.”  Max Planck used a similar line when he discovered that quantizing energy made his calculations work, which eventually turned out to be the basis for quantum physics.  And I think of Chad Orzel’s recommendation that we not think of Everett’s many worlds as real, just take them as metaphor, an accounting device.

Of course, it’s important to remember the misses as well as the hits.  In recent years we’ve had theories with elegant mathematics that eventually didn’t turn out to be reality.  The LHC reportedly has eaten a lot of such theories.  Although an argument could be made that those theories started much further from empirical motivations than the successful ones above.  Admittedly, this is a subjective standard.

All of which is to say, judging the plausibility of rigorous theories is far from simple.

What do you think?  Was there some standard early modern astronomers could have used to better judge Copernicus’ theory?  Or do we simply have to accept that our ability to assess many speculative theories is limited until actual empirical data becomes available?


The discovery of discovery

I’ve been thinking lately about the history of science, particularly the period between 1500 and 1700, what is usually referred to as “the scientific revolution.”  I’m a bit leery of many accounts of this period, as they often assume that there’s some bright line separating science from what came before.

There’s a tendency to look at people like Copernicus, Tycho Brahe, Kepler, and Galileo as modern scientists transplanted into 16th and 17th century settings.  What’s often forgotten is that many of these guys were also astrologers (a respected profession at the time), making a living providing astrological readings for noblemen.  Indeed, a lot of the motivation for early astronomy was to more accurately predict the movements of planets for better astrology.  These figures are best thought of as pioneers of early modernity, rather than modern scientists.

That’s not to say that the period didn’t see remarkable breakthroughs, both in terms of discoveries and methodological improvements.  Which seems to beg for an explanation.  What led to these sudden rapid improvements?

David Wootton, in his book The Invention of Science, spends a lot of time looking at changes in language, noting that concepts like “discovery” or “fact” didn’t exist prior to this period.

I found the discovery one particularly striking.  Prior to about 1500, the concept of discovery doesn’t appear to exist.  This fits somewhat with what I’ve read about ancient and medieval cultures.  For most human societies, the new and innovative is suspect.  What is trustworthy are ancient traditions.  Certainly innovation and progress did happen, but it seemed to be in spite of overall societal attitudes toward it, which made it rare and even riskier than it already would have been.

It reminded me of a point that I believe Bart Ehrman made in one of his books, about why early non-Jewish Christians didn’t discard the Old Testament, with its obviously different theology and ethics and its focus on Israeli nationalism.  The answer is that credibility in the ancient world came from ancient wisdom.  Retaining its links to the ancient Hebrew traditions strengthened Christianity as a movement.

It’s also reminiscent of what I’ve read about Confucianism.  Confucius could be seen as groundbreaking in not attempting to ground his philosophy in any claims of divine revelation, but his lessons all came from studying history.  As a philosophy, Confucianism seemed deeply involved in looking at what had worked in the past.

Prior to the 16th century, when Europeans sought to understand or invent something, there was a sense that they weren’t finding anything new, just retrieving lost knowledge.  The idea was that the ancients had all the knowledge, likely from divine sources, and that all people in later ages could do was rediscover the knowledge that had been lost since then.  Vestiges of this sense actually continued all through the scientific revolution period, with Newton convinced that he was only relearning things known by ancient Biblical authors and other ancient peoples.

So the preference for the old and tried appears to be a deep human impulse.  Modern societies valuing innovation and discovery is somewhat of an aberration.  What brought it into western thought?  Wootton focuses on Columbus’ voyages and the subsequent realization that he had found a new world.  Here was something no one could credibly argue the ancients had ever known about.

Wootton notes that the word “discovery” only starts to appear in written records in the years after Columbus’ voyage.  The idea that something completely new could be found started to grow.  And the economic rewards attached to being the first person to find new lands, trade routes, and riches created the idea of being the discoverer, with everything that came with it.

It’s interesting that Columbus’ voyage is what started this rather than the earlier Portuguese explorations of the Atlantic and Africa.  But the Portuguese were exploring lands that conceivably the Phoenicians and Carthaginians had seen.  And I’ve read that they weren’t keen on publishing the results of their voyages, often preferring to keep them as state secrets, limiting later knowledge of them.  (To be fair to the Portuguese, they started their voyages before the invention of the printing press, so the idea of publishing them may not have even occurred to them until late in the period.)

It’s worth noting that the concept of discovery wasn’t completely new.  The ancient Greeks obviously had some form of it.  (Archimedes could hardly have had anything else in mind when he yelled “eureka”.)  But the Romans didn’t appear to find the concept enticing and it faded after they took over.  Which means that the concept and valuing of discovery is not something guaranteed to continue, a sobering realization.

Does this mean the scientific revolution might not have happened without the discovery of the new world?  I personally doubt it.  I think this skirts the main reason the scientific revolution happened, the invention of the printing press.

[Image: Bar chart showing the dramatic increases in print production from the 15th through the 18th century.  Image credit: Tentotwo via Wikipedia.]

The European version of the printing press was invented in the middle of the 15th century.  By 1500, I’ve read estimates that more copies of books were produced than manuscripts had been copied in the previous 1000 years.  And the volume of the 15th century was paltry compared to what came in the 16th and subsequent centuries.

The result was an explosion of knowledge transfer.  It suddenly became much easier to acquire knowledge, and then use that knowledge as a base for further investigation.  Progress which might have previously taken centuries or millennia suddenly was taking decades.  I think this, more than any one conceptual breakthrough, is what led to the period that now looks like a revolution to us.

If Columbus had not ushered in the concept of discovery, it’s hard to imagine it wouldn’t have eventually emerged anyway.

Unless, of course, I’m missing something.


Massimo on consciousness: no illusion, but also no spookiness

Massimo Pigliucci has a good article on consciousness at Aeon.  In it, he takes aim both at illusionists as well as those who claim consciousness is outside the purview of science.  Although I’d say he’s more worked up about the illusionists.

However, rather than taking the typical path of strawmanning the claim, he deals with the actual argument, acknowledging what the illusionists are actually saying, that it isn’t consciousness overall they see as an illusion, but phenomenal consciousness in particular.

First, in discussing the views of Keith Frankish (probably the chief champion of illusionism today):

He begins by making a distinction between phenomenal consciousness and access consciousness. Phenomenal consciousness is what produces the subjective quality of experience, what philosophers call ‘qualia’

…By contrast, access consciousness makes it possible for us to perceive things in the first place. As Frankish puts it, access consciousness is what ‘makes sensory information accessible to the rest of the mind, and thus to “you” – the person constituted by these embodied mental systems’

He then presents a fork similar (although not identical) to the one I presented the other day.

Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science,

Actually I’m not sure I agree with the first part of the first sentence, that phenomenal consciousness is necessarily something separate and apart from access consciousness.  To me, phenomenal consciousness is access consciousness, just from the inside, that is, phenomenal consciousness is what it’s like to have access consciousness.

But anyway, Massimo largely agrees with the illusionists in terms of the underlying reality.  But he disagrees with calling phenomenal consciousness an illusion.  He describes the user interface metaphor often used by illusionists, but notes that actual user interfaces in computer systems are not illusions, but crucial causal mechanisms.  This pretty much matches my own view.

I do think illusionism is saying something important, but it would be stronger if it found another way to express it.  Michael Graziano, who has at times embraced the illusionist label, but backed away from it in his more recent book, notes that when people see “illusion”, they equate it with “mirage”.  For the most hard-core illusionists, this is accurate, albeit only for phenomenal consciousness, although others use “illusion” to mean “not what it appears to be.”  It seems like the word “illusion” shuts down consideration.

It’s why my own preferred language is to say that phenomenal consciousness exists, but only subjectively, as the internal perspective of access consciousness.  It’s the phenomena to access consciousness’ noumena.

I do have a couple of quibbles with the article.  First is this snippet:

but I think of consciousness as a weakly emergent phenomenon, not dissimilar from, say, the wetness of water (though a lot more complicated).

I’m glad Massimo stipulated weak emergence here.  And I agree that the right way to think about phenomenal consciousness is existing at a certain level of organization.  (And again, from a certain perspective.)

But I get nervous when people talk about consciousness and emergence.  The issue is that, of course consciousness is emergent, but that in and of itself doesn’t really explain anything.  We know temperature is emergent from particle kinetics, but we more than know that it emerges, we understand how it emerges.  I don’t think we should be satisfied with anything less for consciousness.

The involved neurons also need to be made of (and produce) the right stuff: it is not just how they are arranged in the brain that does the trick, it also takes certain specific physical and chemical properties that carbon-based cells have, silicon-based alternatives might or might not have (it’s an open empirical question), and cardboard, say, definitely doesn’t have.

Massimo has a history of taking biological naturalism type positions, so I’m happy that he at least acknowledges the possibility of machine consciousness here.  And I suspect his real targets are the panpsychists.  But I’m a functionalist and see functionality (or the lack of it) as sufficient to rule out those types of claims.  When people talk about particular substrates, I wish they’d discuss what specific functionality is only enabled by those substrates, and why.

But again, those are quibbles.

It follows that an explanation of phenomenal consciousness will come (if it will come – there is no assurance that, just because we want to know something, we will eventually figure out a way of actually knowing it) from neuroscience and evolutionary biology, once our understanding of the human brain will be comparable with our understanding of the inner workings of our own computers. We will then see clearly the connection between the underlying mechanisms and the user-friendly, causally efficacious representations (not illusions!) that allow us to efficiently work with computers and to survive and reproduce in our world as biological organisms.

Despite a caveat I’m not wild about, amen to the main point!


Recommendation: The Expanse (season 4)

[Image: The Expanse season 4 poster, showing the cast, spaceships, and a planet]

Season 4 of The Expanse TV show was released Friday on Amazon Prime, so I just spent today binging on it.

There was a lot of uncertainty about the show last year when SyFy canceled it, but within a short period Amazon stepped in and saved it, renewing it for a fourth season.  And earlier this year it was preemptively renewed for a fifth season, so Amazon seems to have committed pretty thoroughly.

The production values remain high, possibly higher than previous seasons.  A lot of the action this season takes place on a planet, which I imagine spiked both the exterior location and CG costs.  And the ships, both interior and exterior, still look great.

One thing I love about this show is how the spaceships operate according to Newtonian principles, where it’s necessary to accelerate and later decelerate to reach a destination.  And when coasting, the realities of free fall are acknowledged.  They do compromise a bit in portraying this, having everyone walk around with magnetic boots, but considering how much it would cost to have actors constantly swinging around on cables, and compared to it being completely ignored on most space shows, I give them a pass.
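For fun, here’s the back-of-envelope arithmetic behind that accelerate-flip-decelerate style of travel (a sketch of my own, with assumed numbers, not anything from the show): burn at a constant acceleration to the midpoint, flip the ship, then burn to slow down the rest of the way.

```python
import math

# Constant-acceleration "flip and burn" travel time (my own illustration):
# covering the first half, d/2 = (1/2) * a * (t/2)^2, gives t = 2*sqrt(d/a).
a = 9.81                        # m/s^2, a comfortable sustained 1 g
d = 225e9                       # ~225 million km, a typical Earth-Mars distance
t = 2 * math.sqrt(d / a)        # total trip time in seconds
print(f"{t / 86400:.1f} days")  # ~3.5 days
```

A sustained 1 g turns an Earth-Mars trip into a matter of days, which is roughly the scale of travel times the story depicts.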

There are other compromises, such as having sound in space.  The book authors, who are producers on the show, are defensive about this, insisting that it’s necessary for the space scenes to work.  As I noted in my post on Ad Astra, I think this underestimates audiences.  But again, it’s relatively minor compared to most shows.

The season is mostly an adaptation of the fourth Expanse book, Cibola Burn, but there are differences.  Many of them simply reflect the realities of telling a story in novel vs TV form.  Others seem like enhancements.  The show tends to develop the villains a bit more than the books, which is good, but it also tends to, I think, have a darker, edgier feel.  Some of the characters get additional challenges.  And I think the show handles the departure of a major character much better than the book.

Some changes seem related to the practical need to employ all the actors, including the ones whose characters were mostly absent from Cibola Burn.  So we have a lot of parallel plot threads that weren’t in the book, related to Avasarala, Bobbie, Drummer, and Ashford, all in events taking place away from the main story setting.  In some cases, this seems like completely new material.  In others, it front loads developments for upcoming seasons, particularly events in the fifth book, Nemesis Games.

It’s difficult to get into details without also getting into spoilers, particularly if you haven’t seen the earlier seasons.  And if you haven’t watched the show yet, you’ll want to start with the first season.  Someone could jump in on season 4, but they’d be missing a lot of backstory.

So if you’re looking for intelligent space opera with excellent production values, I highly recommend it.  And don’t forget that this is based on an excellent book series, which is worth checking out, if it’s your cup of tea.


Is entanglement decoherence from the outside, and decoherence entanglement from the inside?

A recent tweet by Sean Carroll has me thinking.

Quantum decoherence is said to occur when a particular quantum system becomes entangled with its environment, that is to say, as information about the quantum system spreads throughout the environment, that system undergoes at least an apparent wave function collapse.  It stops behaving like a wave and starts behaving like a particle.

But it’s possible to have two quantum particles in their wave stage interact and become entangled, apparently without decohering.  Of course, since they are entangled, a later measurement of one of the particles in the relevant way will give us information, not just on the particle being measured, but also information about the other particle.  Once measured, from our point of view, both particles will have decohered, collapsed from a wave to a particle.

Which raises the question, from particle A’s perspective, when A becomes entangled with particle B, wouldn’t B have decohered (at least for whatever properties are entangled)?  That is, wouldn’t B’s wave function have collapsed for A?  And wouldn’t the same be true from particle B’s perspective in relation to particle A?
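There’s a standard way to make this concrete (a sketch of my own using a textbook Bell state, not anything from Carroll’s tweet): take the density matrix of an entangled pair and trace out one particle.  What’s left, the other particle considered on its own, is a mixed state with no interference terms, which is exactly the signature of decoherence.

```python
import numpy as np

# A Bell state |psi> = (|00> + |11>) / sqrt(2), qubit A first, qubit B second.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Density matrix of the full entangled pair.
rho = np.outer(psi, psi.conj())

# Trace out qubit B: reshape to indices (A, B, A', B') and sum over B = B'.
rho_A = np.einsum('ajbj->ab', rho.reshape(2, 2, 2, 2))

print(rho_A)
# [[0.5 0. ]
#  [0.  0.5]]
# The off-diagonal (interference) terms are gone: qubit A, taken alone,
# is in a maximally mixed state, which is what decoherence looks like.
```

So in the formalism, each particle of an entangled pair, considered by itself, already looks decohered to its partner.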

All of which is to ask, is entanglement what we see from an interaction we don’t partake in, but decoherence what we see from one that we do?  Put another way, can we say that entanglement is decoherence from the outside, while decoherence is entanglement from the inside?

If not, why not?  If so, what does this mean for interpretations of quantum mechanics outside of the many worlds and relational ones?

h/t James of Seattle (who replied to Carroll with a question I wouldn’t mind seeing an answer to)


The magic step and the crucial fork

Those of you who’ve known me for a while may remember the long fascination I’ve had with Michael Graziano’s attention schema theory of consciousness.  I covered it early in this blog’s history and have returned to it multiple times over the years.  I still think the theory has a lot going for it, particularly as part of an overall framework of higher order theories.  But as I’ve learned more over the years, it’s more Graziano’s approach I’ve come to value than his specific theory.

Back in 2013, in his book, Consciousness and the Social Brain, he pointed out that it’s pretty common for theories of consciousness to explain things up to a certain point, then have a magic step.  For example, integrated information theory posits that structural integration is consciousness, the various recurrent theories posit that the recurrence itself is consciousness, and quantum theories often assert that consciousness is in the wave function collapse.  Why are these things in particular conscious?  It’s usually left unsaid, something that’s supposed to simply be accepted.

Christof Koch, in his book, Consciousness: Confessions of a Romantic Reductionist, relates that once, when presenting a theory that layer 5 neurons in the visual cortex firing rhythmically might be related to consciousness, he was asked by the neurologist Volker Henn how his theory was really any different from Descartes’ locating the soul in the pineal gland.  Koch’s language and concepts were more modern, Henn argued, but exactly how consciousness arose from that activity was still just as mysterious as how it was supposed to have arisen from the pineal gland.

Koch said he responded to Henn with a promissory note, an IOU, that eventually science would get to the full causal explanation.  However, Koch goes on to describe that he eventually concluded it was hopeless, that subjectivity was too radically different to actually emerge from physical systems.  It led him to panpsychism and integrated information theory (IIT).  (Although in his more recent book, he seems to have backed off of panpsychism, now seeing IIT as an alternative to, rather than elaboration of, panpsychism.)

Koch’s conclusion was in many ways similar to David Chalmers’ conclusion, that consciousness is irreducible and fundamental, making property dualism inevitable, and leading Chalmers to coin the famous “hard problem” of consciousness.  These conclusions also caused Chalmers to flirt with panpsychism.

Graziano, in acknowledging the magic step that exists in most consciousness theories, argued that such theories were incomplete.  A successful theory, he argued, needed to avoid such a step.  But is this possible?  Arguably every theory of consciousness has these promissory notes, these IOUs.  The question might be how small we can make them.

Graziano’s approach was to ask, what exactly are we trying to explain?  How do we know that’s what needs to be explained?  We can say “consciousness”, but what does that mean?  How do we know we’re conscious?  Someone could reply that the only way we could even ask that question is as a conscious entity, but that’s begging the question.  What exactly are we talking about here?

It’s commonly understood that our senses can be fooled.  We’ve all seen the visual illusions that, as hard as we try, we can’t see through.  Our lower level visual circuitry simply won’t allow it.  And the possibility that we might be a brain in a vat somewhere, or be living in a simulation, is often taken seriously by a lot of people.

What people have a much harder time accepting is the idea that our inner senses might have the same limitations.  Our sense of what happens in our own mind feels direct and privileged in a manner that outer senses don’t.  In many ways, what these inner senses are telling us seem like the most primal thing we can ever know.  But if these senses aren’t accurate, much like the visual illusions, these are not things we can see through, no matter how hard we try.

[Image: Cover of 'Rethinking Consciousness' by Michael Graziano]

In his new book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, Graziano discusses an interesting example.  Lord Horatio Nelson, the great British admiral, lost an arm in combat.  Like many amputees, he suffered from phantom limb syndrome, painful sensations from the nonexistent limb.  He famously claimed that he had proved the existence of an afterlife, since if his arm could have a ghost, then so could the rest of him.

Phantom limb syndrome appears to arise from a contradiction between the brain’s body schema, its model of the body, and its actual body.  Strangely enough, as V. S. Ramachandran discussed in his book, The Tell-Tale Brain, the reverse can also happen after a stroke or other brain injury.  A patient’s body schema can become damaged so that it no longer includes a limb that’s physically still there.  They no longer feel the limb is really theirs anymore.  For some, the feeling is so strong that they seek to have the limb amputated.

Importantly, in both cases, the person is unable to see past the issue.  The body schema is simply too powerful, too primal, and operates at a pre-conscious level.  It can be doubted intellectually, but not intuitively, not at a primal level.

If the body schema exerts that kind of power, imagine what power a schema that tells us about our own mental life must exert.

So for Graziano, the question isn’t how to explain what our intuitive understanding of consciousness tells us about.  Instead, what needs to be explained is why we have that intuitive understanding.  In many ways, Graziano described what Chalmers would later call the “meta-problem of consciousness”, not the hard problem, but the problem of why we think there is a hard problem.  (If Graziano had Chalmers’ talent for naming philosophical concepts, we might have started talking about the meta-problem in 2013.)

Of course, Graziano’s answer is that we have a model of the messy and emergent process of attention, a schema, a higher order representation of it at the highest global workspace level, which we use to control it in top-down fashion.  But while the model is effective in providing that feedback and control, it doesn’t provide accurate information for actually understanding the mind.  Indeed, the schema’s simplified portrayal of attention, as an ethereal fluid or energy that can be concentrated in or around the head, but not necessarily of it, is actively misleading.  There’s a reason why we are all intuitive dualists.

At this point we reach a crucial juncture, a fork in the road.  You will either conclude that Graziano’s contention (and similar ones from other cognitive scientists) is an attempt to pull a fast one, a cheat, a dodge from confronting the real problem, or that it’s plausible.  If you can’t accept it, then consciousness likely remains an intractable mystery for you, and concepts like IIT, panpsychism, quantum consciousness, and a host of other exotic solutions may appear necessary.

But if you can accept that introspection is unreliable, then a host of grounded neuroscience theories, such as global workspace and higher order thought, including the attention schema, become plausible.  Consciousness looks scientifically tractable, in a manner that could someday result in conscious machines, and maybe even mind uploading.

I long ago took the fork that accepts the limits of introspection, and the views I’ve expressed on this blog reflect it.  But I’ve been reminded in recent conversations that this is a fork many of you haven’t taken.  It leads to very different underlying assumptions, something we should be cognizant of in our discussions.

So which fork have you taken?  And why do you think it’s the correct choice?  Or do you think there even is a real choice here?


Ad Astra: Apocalypse Now in space

[Image: Ad Astra movie poster showing an astronaut helmet and spaceship in space with planets in the background]

The movie Ad Astra is a strange mix.  In many ways, it’s a visually stunning film with excellent production values.  And it has first-class name stars, most notably Brad Pitt and Tommy Lee Jones.  But the plot has serious issues.  On balance, I enjoyed it, but this is a case where your mileage may vary considerably.

I mentioned in the title that this is largely Apocalypse Now in space.  I’m not spoiling anything with that description.  It’s been discussed in public by the director in various interviews.  It succeeds in capturing the stark tone and bleakness of that other story.  Pitt plays a character, Major Roy McBride, who is mostly emotionless, a man supremely competent at his job, but who has seen his relationships wither, and seems to be largely going through the motions.

McBride’s father, H. Clifford McBride (Jones) is an astronaut revered as a hero who pioneered human exploration of the outer solar system.  But by the time of the story, Clifford has disappeared on a mission to Neptune and has not been heard from in years.  He and his mission are presumed lost.

But suddenly intense and dangerous power surges have started arriving throughout the solar system.  Roy is almost killed by one.  The surges appear to be originating from Neptune.  The authorities believe Clifford is still alive, and want Roy, his son, to travel to Mars and from there send a message to him, in the hopes that he will respond.

What follows is a quest across various locations in the solar system meant to have a similar feel to Captain Willard’s trek through Vietnam in Apocalypse Now.  The solar system is not a happy place.  There are pirates on the moon, man-eating primates in spaceships, and disillusioned Mars colonists to contend with.  And, of course, the whole time Roy is wondering what the deal is with his father.

There’s no real explanation given for the state of the solar system.  Things are just dangerous.  And apparently the authorities are not to be trusted.  In Apocalypse Now, the setting is Vietnam, a brutal war zone, so no explanation is needed for the stark landscape or dysfunctional leadership, but the situation throughout the solar system in this movie begs for an explanation, one that I never caught.

The movie does make an effort to be more scientifically accurate than your typical space movie.  Spaceships blast off from surfaces with rocket stages, but switch to long range drives (presumably ion drives of some type).  Ships are seen accelerating and decelerating.  And crew members are in zero gravity during the non-thrust phases.  And the overall look, both in interiors and exteriors, has a very authentic feel to it.

The movie does ignore the low gravity conditions on the Moon and Mars, but I’m willing to give them a pass on it, given how difficult it would be to accurately portray the dynamics of those environments.

One nice touch is the lack of sound in vacuum.  Action sequences on the moon and in space take place in silence, except for the occasional sounds transmitted through vibrations of touching suits and equipment.  The result is a set of surreal, haunting sequences, an effect other movies could have tapped into long ago, if they’d just refrained from sound effects.

That said, the movie is far from scientifically rigorous.  And it has its share of outright howlers.  For instance, it taps into the common misconception that venting atmosphere causes bodies to explode.  And it’s never really explained why Roy needs to travel to Mars to transmit his message to Neptune.  (I’ve seen speculation that maybe the Sun was in the way, but presumably signal relays would still be a thing in the future.  In any case, it would have been quicker to wait for the Earth to move enough in its orbit for direct line of sight.)

But the biggest scientific fail is the eventual explanation for the power surges.  Surges propagating throughout the entire solar system with the dangerous intensity portrayed would require, well, astronomical power.  Catastrophic solar storms might do it, but the eventual explanation provided is utterly inadequate.  Granted, the power surges are just the movie’s McGuffin, but it seems like a modicum of effort could have provided a more coherent motivation.
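To put a rough number on “astronomical” (my own back-of-envelope figures, not anything stated in the film): intensity from an isotropic source falls off with the square of distance, so for a burst from out at Neptune to reach Earth at even sunlight-level intensity, the source power has to be staggering.

```python
import math

# Inverse-square sanity check (my own assumed numbers): an isotropic source
# delivering intensity I at distance r must emit P = I * 4 * pi * r^2.
r = 29 * 1.496e11          # ~29 AU, a rough Neptune-to-Earth distance, in meters
I = 1400                   # W/m^2, roughly solar irradiance at Earth
P = I * 4 * math.pi * r**2
print(f"{P:.1e} W")        # ~3e29 W, hundreds of times the Sun's total output
```

Whatever the surges are, they’d need to outshine the Sun by orders of magnitude, which the movie’s explanation doesn’t come close to covering.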

And the movie’s conclusion makes a philosophical statement that, while I actually suspect it’s (partially) true, will be seen by many as hopelessly pessimistic.

So, an interesting mix of quality and problems.  That mix shows up in the movie’s Rotten Tomatoes scores.  Critics give it high ratings: 84%, but audiences are far less impressed: 40%.  I enjoyed it, and if you’re a space nerd, you might too.  But the story had serious problems, and the stark tone and pessimistic outlook will turn a lot of people off.

Have you seen this movie?  If so, what did you think of it?
