What do scientific theories actually tell us about the world?

One of the exciting things about learning is that a new understanding in one area often sheds light on what might seem like a completely separate topic.  For me, information about how the brain works appears to have shed new light on a question in the philosophy of science, where there has long been a debate about the epistemic nature of scientific theories.

Spacetime lattice
Image credit: mysid via Wikipedia

One camp holds that scientific theories reflect reality, at least to some level of approximation.  So when we talk about space being warped in general relativity, or the behavior of fermions and bosons, there is actually something “out there” that corresponds to those concepts.  There is something actually being warped, and there actually are tiny particles and/or waves that are being described in particle physics.  This camp is scientific realism.

The opposing camp believes that scientific theories are only frameworks we build to predict observations.  The stories we tell ourselves associated with those predictive frameworks may or may not correspond to any underlying reality.  All we can know is whether the theory successfully makes its predictions.  This camp is instrumentalism.

The vast majority of scientists are realists.  This makes sense when you consider the motivation needed to spend hours of your life in a lab doing experiments, or to endure the discomforts and hazards of field work.  It's pretty hard for geologists to visit the Antarctic for samples, or for biologists to crawl through the mud for specimens, if they don't see themselves in some way as being in pursuit of truth.

But the instrumentalists tend to point out all the successful scientific theories that could accurately predict observations, at least for a time, but were eventually shown to be wrong.

The prime example is Ptolemy's ancient theory of the universe, a precise mathematical model of the Aristotelian view of geocentrism, the idea that the Earth is the center of the universe with everything revolving around it.  For centuries, Ptolemy's model accurately predicted naked eye observations of the heavens.

But we know today that it is completely wrong.  As Copernicus pointed out in the 1500s, the Earth orbits around the sun.  Interestingly, many science historians have pointed out that Copernicus' model actually wasn't any better at making predictions than Ptolemy's, at least until Galileo started making observations through a telescope.  Indeed, the first printing of Copernicus' theory had an unsigned preface, from the theologian Andreas Osiander as it turned out, probably hoping to head off controversy, saying the ideas presented might only be a predictive framework unrelated to actual reality.

For a long time, I was agnostic between realism and instrumentalism.  Emotionally, scientific realism is hard to shake.  Without it, science seems little more than an endeavor to lay the groundwork for technology, for practical applications of its findings.  Many instrumentalists are happy to see it in that light; a lot of them tend to be philosophers, theologians, and others who may be less than thrilled with the implications of scientific findings.

However, I do think it's important for scientists, and anyone assessing scientific theories, to be able to put on the instrumentalist cap from time to time, to conservatively assess which parts of a theory are actually predictive, and which may just be speculative baggage.

But here's the thing.  Often what we're really talking about is the difference between the raw mathematics of a theory and its language description, including the metaphors and analogies we use to understand it.  The idea is that the mathematics might be right, but the rest wrong.

But the language part of a theory is a description of a mental understanding of what’s happening.  That understanding is a model we build in our brains, a neural firing pattern that may or may not be isomorphic with patterns in the world.  And as I’ve discussed in my consciousness posts, the model building mechanism evolved for an adaptive purpose: to make predictions.

In other words, the language description of a theory is itself a predictive model.  Its predictions may not be as precise as the mathematical portions, and they may not be currently testable in the same manner as the mathematics (assuming those mathematics are actually testable; I'm looking at you, string theorists), but it will still make predictions.

Consider the Ptolemy example above: its language model did make predictions.  It's just that many of those predictions couldn't be tested until telescopes became available.  Once they could be, the Ptolemy model quickly fell from favor.  (At least it was quick on historical time scales.  It wasn't quick enough to avoid making Galileo's final years miserable.)  As many have pointed out, it wasn't that Copernicus' model made precisely right predictions, but it was far less wrong than Ptolemy's.

When you think about it, any mental model we hold makes predictions.  The predictions might not be testable, currently or ever, but they’re still there.  Even religious or metaphysical beliefs make predictions, such as whether we’ll wake up in an afterlife after we die.  They’re just predictions we may never be able to test in this world.

This means that the distinction between scientific realism and instrumentalism is an artificial one.  It’s really just a distinction between aspects of a theory that can be tested, and the currently untestable aspects.  Often the divide is between the mathematical portions and the language portions, but the only real difference there is that the mathematical predictions are precise, whereas the language ones are less precise, to varying degrees.

Of course, I’m basing this insight on a scientific theory about how the brain works.  If that theory eventually ends up failing in its predictions, it might have implications for the epistemic point I’m making here, for the revision to our model of scientific knowledge I think is warranted.

And idealists might note that I'm also making the assumption that brains exist, that along with the rest of the external world they aren't an illusion.  I have to concede that's true, and even if this understanding makes accurate, useful predictions, within idealism it still wouldn't be mapping to actual reality.  But given that I'm also assuming that all you other minds exist out there, it's a stipulation I'm comfortable with.

As always, it might be that I’m missing something.  If so, I hope you’ll set me straight in the comments.


Arrival, the shape of aliens, and bridging the communication barrier

This weekend, I watched the movie ‘Arrival’.  It starts off with the now common scenario of several floating ships appearing in the skies around the world.  But unlike most movies in this mold, it focuses on humanity's efforts to communicate with the aliens and understand why they've come.  The protagonist is an expert in linguistics.

I found this movie to be uncommonly intelligent and high quality science fiction, of a type that we rarely see in cinema.  I've heard it's won and been nominated for various awards.  In my opinion, the recognition is well deserved.  I highly recommend it.

That said, I’m going to quibble with a couple of its aspects.  I won’t spoil anything that you wouldn’t see in the first act, but if having even bits of that spoiled bothers you, you may want to skip this post until you’ve seen it.

I’m not going to quibble with the existence of the aliens, or why they arrived when they did.  A common criticism I have of alien invasion movies is that the aliens usually choose to show up when we can resist them, rather than any of the previous 4.54 billion years when the planet was a sitting duck.  But I actually think the movie has a good answer for that, which I won’t spoil.

Okay, first quibble.  The movie goes out of its way to portray the aliens as utterly, well, alien.  On the one hand, I very much appreciate this.  Too often, media sci-fi portrays aliens as humans with maybe an extra bump on their forehead, or in overall humanoid form but with reptilian skin or something, together with all too human emotions and attitudes.  Historically, some of this came from technological constraints on what could be shown.  But with CG technology being what it is today, this excuse, still somewhat plausible for television, doesn't really cut it for high production movies.

That said, in its attempt to make the aliens profoundly different, I think the movie ignores some simple realities.  Extraterrestrial life would undoubtedly be very different from Earth life, but the laws of physics put limits on just how strange it could be.

For example, we never see eyes on the aliens.  (Or at least I couldn't ever make out any.)  Now, it's possible that an alien that evolved in a consistently dark environment, such as an underground sea or a thick opaque atmosphere, might never evolve vision.

But we see the aliens communicating visually, which implies some kind of ability to take in information from electromagnetic radiation (light).  And eyes weren't a one-time mutation in Earth's history.  From what I've read, they evolved several times in independent evolutionary lines.  In other words, eyes are one of the features that evolution tends to converge on.  The aliens didn't have to be portrayed with two stereoscopic eyes.  They could have had many, like spiders do.

The other issue is the overall body plan of the aliens.  They don't come across as having much dexterity.  But as I've noted before, the only civilization-producing species on this planet needed more than intelligence; it also needed the ability to physically manipulate the environment.  It's why a primate species currently rules the planet instead of a cetacean, elephantine, corvine, or other type of intelligent species.

I’m not saying that the aliens needed to have humanoid body plans.  Ant-like bodies with prehensile limbs might have done the trick.  But the movie aliens needed to have better physical abilities than what was portrayed.  Their portrayed bodies might have been dexterous in a liquid environment, similar to cephalopods, but that didn’t appear to be the environment they were in.

My second quibble is with the effort to communicate with the aliens.  If you've seen the movie, you understand this issue's place in the plot, but the initial decision to start with written language doesn't make that much sense.  As Seth Shostak of SETI (Search for Extraterrestrial Intelligence) has pointed out, it makes a lot more sense to attempt initial communication with pictures.

This makes sense when you consider that the earliest human writing evolved from using pictures to convey concepts.  Over time, the pictures got streamlined into symbols for each word or concept.  It was thousands of years before the idea of letters standing in for individual speech sounds developed.  Attempting to jump over all that with an utterly alien mind seems like the hard way to do it.

Of course, conveying complex information with pictures wouldn’t itself be easy.  For example, how do you get across the main question the humans had for the aliens, “Why have you come?”  But a series of pictures showing the alien ships approaching humans, followed by alternating pictures of humans dead or alive might have given the aliens a quick chance to make their intentions clearer.  And once you had a basic form of communication going, a common symbolic vocabulary could be worked out, eventually allowing more sophisticated exchanges.

A much tougher challenge might be if the aliens didn't have visual senses.  Imagine trying to build a common vocabulary with a bat-like alien that sensed the world through echolocation, or one that thought and moved on vastly different time scales, such as conscious trees.  But even then, we'd still live in the same universe, and there would have to be some common overlapping ways of perceiving the world.  It might come down to small model statues arranged in sequences to convey scenarios.

Of course, it's always possible to engage in rationalizations to explain away these quibbles with the movie.  And as I indicated above, this is a movie that is far more intelligent than your typical sci-fi film.  Not least because it gave me an excuse to talk about alien body plans and communication strategies 🙂


Split brain does not lead to split consciousness – University of Amsterdam

I've talked before about Roger Sperry's famous split-brain patient experiments.  Patients with severe epileptic seizures used to undergo a corpus callosotomy, a procedure to cut the connections between the left and right hemispheres of their cerebrum.  It often helped alleviate their symptoms and, remarkably, the patients afterward remained mentally functional, at least to outside appearances.

Each hemisphere of the brain controls and receives sensory input from half the body.  What Sperry and his colleagues discovered in their experiments was that if sensory inputs going into the patient were isolated to one hemisphere or the other, each of the patient's hemispheres was only aware of its own sensations.  And with the language centers usually concentrated in the left hemisphere, the patient could usually only describe what they were seeing when the left hemisphere received it.

The fact that the patients, post-procedure, remained largely functional seemed to show that each hemisphere was effectively watching what the other half of the body did, and mentally confabulating the actions as its own.  It opened up the possibility that this happens even in healthy people, albeit to a lesser extent.

However, new research appears to show that this phenomenon may be more limited than previously thought:

Split-brain experiment figure
Image credit: University of Amsterdam

A new research study contradicts the established view that so-called split-brain patients have a split consciousness. Instead, the researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain. Their results are published in the latest edition of the journal Brain.

Source: Split brain does not lead to split consciousness

Assuming there are not any methodological issues with these new experiments (always a possibility), and given everything I’ve learned about the brain since first reading about the split brain patient experiments, I can’t say I find this too surprising.

The corpus callosum connects the cerebral hemispheres of the brain, but there are other regions that connect the two sides, mostly sub-cortical.  These areas generally operate below the level of consciousness, but the information from them feeds into the cerebrum.

These results are also consistent with the phenomenon of blindsight, where a patient who has sustained damage to the visual processing centers in the occipital lobe (part of the cerebrum) cannot consciously see something, but if pressed, can often still identify it.  The reason is that while the optic nerve does feed into the cerebrum, it also branches off to sub-cortical regions such as the superior colliculus in the mid-brain region.  Again, the processing in those regions is below the level of consciousness (at least in humans), but it provides information, to a limited degree, to conscious regions.

It seems likely that, for the patients in the lower row of the image above, something similar is taking place.  Each hemisphere may not be able to consciously perceive what the other hemisphere is seeing, but communication in sub-cortical regions is bubbling up into the cerebrum, enabling them to make the determinations that they’re making in the new experiments.  Maybe.

But I'm not sure this necessarily justifies saying that the patient retains one unified consciousness.  Their consciousness may be less divided than previously thought, but it's surely still more divided than that of healthy people.  Of course, I'm basing these remarks on the press release.  The actual paper may shed additional light.

I definitely plan to watch for any new developments in this area.  Or for any comments from Michael Gazzaniga, one of Sperry's assistants who is still around and writing.


Being a beast machine

In my post on consciousness possibly being a simulation engine, I noted Anil Seth's excellent Aeon article as one of the inspirations.  As it turns out, Seth gave a TEDx talk covering many of the same topics he addressed in that article.

As noted in my post, I think a lot of what Seth describes here is actually unconscious perception.  If I'm right, it's when those predictive models trigger multiple emotional reactions from our limbic system, and we have to run simulations of various courses of action to decide what to do, that what we call consciousness actually comes into the picture.

I like one point Seth makes about proprioception.  He demonstrates, using the famous rubber hand test, that proprioception is a construction, a model created by the brain based on exteroception (sense of the outside world, including the external body), and interoception (sense of internal body states).  It’s become fashionable to tout proprioception and many other related perceptions as senses beyond the basic ones.  But if these additional perceptions are built on top of the basic ones, I think calling them senses in and of themselves is questionable.

Seth’s closing points about the self are worth pondering.  The self is a model, in many ways similar to the models we create for the external world.  As a result, that model can be different from the reality.  It can be wrong, no matter how privileged our access to it might feel.

Idealists ask whether the external world exists, whether or not we live in a simulation.  What isn't often appreciated is that we definitely do live in a simulation.  Each and every one of us lives inside a simulation constructed by our brain, both of the outside world and of ourselves.  As Seth says, it's "a fantasy that corresponds with the reality", except that the fantasy is often a simplified, cartoonish view of that reality, one adaptive for survival but not necessarily one that gives us an accurate view of what's actually out there.


Two brain science podcasts worth checking out

As my long-time readers will know, I'm very interested in the mind, and my preferred way to explore it is through science, notably neuroscience or cognitive psychology, or with science-oriented philosophy.  With that in mind, I want to call your attention to a couple of podcasts I've been following for a while.

The first is Dr. Ginger Campbell's excellent Brain Science podcast.  The posting frequency isn't very high, but most episodes are packed with interesting information.  The most common format is Campbell interviewing an author.  One of the recent episodes was an interview with Jon Mallatt, one of the authors of the book that has informed many of my recent posts on consciousness.  Older episodes feature neuroscientists whose work I've highlighted before, such as Michael Graziano and Michael Gazzaniga.

Some of the people and books that Campbell discusses do get pretty technical, but most of it seems oriented toward a science-literate layperson.  Unfortunately, the older episodes are paywalled, but I've been impressed enough by the recent episodes to get a subscription to work my way through the archives.  I feel comfortable recommending this podcast for anyone with an interest in the brain and mind.

The other podcast is Brain Matters.  This is a much more hardcore, "inside baseball" show that often gets very technical, to the extent that I have trouble following many of the episodes.  It's done by a group of neuroscience graduate students who most often are interviewing working neuroscientists.  As a result, the subjects can get somewhat arcane, with topics such as cortical columns, aphasia, mitochondria in neurons, or the tracing of particular neural circuits.  I don't try to listen to every episode of this one, instead focusing on the ones where the title or description catch my interest.

Both of these podcasts can be subscribed to through the standard podcast services.  (My subscriptions are through iTunes and the iOS Podcast app.)  Or they can just be listened to on their websites.

If you know of any similar sources, I’d love to hear about them in the comments.


America’s long path to universal voting rights

My memory of what I learned in early grade school about the history of American voting rights went something like this.  Prior to 1776, we were ruled by the king of Great Britain.  He was a tyrant who oppressed us with taxation without representation, so we rebelled and set up a democracy.  (UK readers, I see you rolling your eyes.)

There may have been a brief mention of freed slaves getting the vote after the Civil War (the slaves themselves weren't mentioned until we got to the section on the causes of that war), but other than that, I came away with the impression that voting was mostly something we had figured out in 1776, with maybe some fine-tuning in 1787.

Yep, the value of a public education.  To be fair to my state’s school system, the picture did get more sophisticated in middle school grades, but not by much.

Of course, the reality is that there had been elections in England for centuries before the American Revolution (which was a conflict against Parliament as much as with the king).  The American colonies had largely inherited the old English voting paradigm, which restricted voting to males who owned a certain amount of property.  Many colonies also restricted the right to members of approved religious denominations.

The result was that at the beginning of the United States, only a relatively small minority of the population could vote.  The exact percentage varied by locale, ranging from as low as 40% of adult white males to as high as 80%, depending on the availability and expense of property and the exact voting laws.  Some estimates of the overall percentage of the American population that could vote run as low as 6%.
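To see how those two sets of numbers fit together, here's a rough back-of-the-envelope check.  These are my own figures, not from the book cited at the end of this post; in particular, the assumption that adult white males made up about 15% of the total population (children, women, and slaves included) is mine:

\[
\underbrace{0.15}_{\text{adult white male share of population}} \times \underbrace{0.40}_{\text{share meeting property requirements}} = 0.06 = 6\%
\]

In other words, even a 40% eligibility rate among adult white males, applied to the small slice of the population they represented, gets you down to single digits overall.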

The progress from that initial very limited suffrage to the near universal suffrage we have today happened in what I would call four waves.  The first wave enfranchised most white males, the second wave briefly enfranchised blacks, the third wave women, and the fourth re-enfranchised blacks along with much of the remaining excluded population.

The first wave happened in the early 19th century.  White males who didn’t have the vote were pushing for it, but that by itself wasn’t enough to make it happen.  There was a strong sentiment that only those with a stake in the society should be allowed to vote, as well as a concern that too broad a franchise might allow elections to be swayed by a nascent working class enslaved to their employer’s interests.  Many fretted that America might someday become a country of working class people instead of farmers.

Proponents of broader suffrage argued that fears of a working class country were unfounded, that America would always be predominantly agrarian.  The proponents had to be careful in the arguments they used, focusing on why their particular group should have the vote without implying that voting was any kind of general right.  Such a right might imply that women, blacks, and natives should be allowed to vote, which everyone regarded as crazy talk.

The success of this wave came from a number of factors.  The rise of national political parties played a role, allowing voters who had the right to vote in one type of election to punish a party that opposed their right to vote in other elections.  In addition, the War of 1812 shed light on the fact that soldiers without the right to vote had a lower incentive to fight.  But perhaps the largest factor was the new states in the west, which used broad white male suffrage to attract migrants, putting competitive pressure on the eastern states to expand their own franchise.

The result was that by roughly 1850, if you were male, white, and paid taxes (the standard that replaced the property requirements), you probably could vote.  Still excluded at this point were women, blacks, most Native Americans, paupers, and most immigrants.  The first half of the 19th century was a period of mostly optimism about democratic ideals.

The second half wouldn't be.  As America indeed started to become the working class society that people of a previous generation had feared, those fears came roaring back, leading to widespread nativism and discrimination.  When we think of the later 19th century, we often might think of the Civil War and Reconstruction, of blacks getting the right to vote.  This was the second wave I mentioned above.  But it happened in an era of otherwise rising skepticism about the ideals of a broad democracy, which is likely why the second wave mostly floundered.

As Reconstruction ended and white southerners seized back control of their states, the north showed little interest in stopping the subsequent large scale disenfranchisement of blacks.  Yes, the 15th Amendment was on the books, theoretically guaranteeing blacks the right to vote, but after the first decade or so following its ratification, only the most brazen violations of it were policed, generally allowing Jim Crow era laws to develop.  It was a stark demonstration that liberal laws are impotent if the people in power won't enforce them.

The late 19th century turn against democracy also resulted in strong headwinds for the women's suffrage movement, which is usually considered to have started in 1848.  Those headwinds resulted in little progress before 1900, although women did often get the right to vote in some local elections, such as for school board positions, and a broader right in a few western states.

The third wave for women’s suffrage didn’t really heat up until the early 20th century, when women’s groups became far more organized and aggressive.  In addition, the industrial nature of World War I demonstrated that women could contribute substantially to war efforts, something that had been a convincing argument in the first two waves.  That and there was an international movement in several democratic countries to enfranchise women.  All of which culminated in the 19th Amendment being ratified in 1920.  (Interestingly, this was followed by a very conservative decade in American politics.)

The Great Depression in the 1930s led to something of an ad-hoc change.  Paupers continued to be largely excluded from the voting franchise, with technically anyone receiving any kind of welfare considered a pauper.  However, the large scale unemployment and hardship of the 1930s made officials reluctant to label anyone on relief a pauper, which largely ended that exclusion.  It became kind of an inversion of the situation with the 15th Amendment: this time an exclusion stayed on the books but went unenforced.

The fourth wave started with World War II.  Part of the war propaganda on the western side focused on the fact that we were democracies and the enemies weren't.  (I guess conveniently ignoring the realities in the Soviet Union.)  That and the fact that the Nazis saw American racial laws as a source of inspiration for their own policies, policies which resulted in the Holocaust.  This seemed to turn a harsh light on the differences between the ideals of American democracy and the reality.  It was also recognized that America's racial issues gave the communists a Cold War propaganda issue.

Nevertheless, the fourth wave was a long slog, starting with the desegregation of the armed forces in the late 1940s and civil resistance from blacks themselves in the 1950s.  Eventually the result was the Civil Rights Era, with laws passed in the 1960s guaranteeing blacks the right to vote, finally fulfilling the promise of the 15th Amendment a century after it had been ratified.

But the Civil Rights Era also included a rush to correct other longstanding issues with American voting, so that many groups that had been excluded by a variety of underhanded techniques, such as American Indians, mobile workers, recent immigrants, paupers, and other smaller groups were finally enfranchised.  It was a period when the Federal government finally took an active role in ensuring the right to vote.  By the early 1970s, America finally had near universal suffrage.  (“Near” because in many states convicted felons, insane people, and other similar categories continue to be excluded.)

When I was younger, I never realized just how recent this development had been.  Nor how fragile or incomplete it was until the 2000 election with all the disputes about voting laws and the electoral college, or again when in recent years the Supreme Court invalidated substantial portions of the Voting Rights Act of 1965, not to mention the election we just had.

As always, I find reading history helps to put our own times in context, which can be comforting in some ways but alarming in others.  Reading about voting rights history in America shows that voter eligibility has always been a partisan issue, and that the times we live in aren’t nearly as uniquely blinkered as we might fear.  On the other hand, it also shows that our conception of democracy is a very recent one, and that there’s no guarantee that past progress can’t be reversed.  Vigilance is always required.

Much of the information in this post came from ‘The Right to Vote: The Contested History of Democracy in the United States’ by Alexander Keyssar.  I can't say this was an exciting read, and the Kindle version had some unacceptable formatting issues, but I found it a fascinating source of information for this topic.


Merry Christmas

Real Clear Science highlighted an interesting article from a few years ago on the evolution of Santa Claus: Will the Real Santa Claus Please Stand Up?

We always think of Santa Claus as an incredibly old man—positively ancient—but the fact is, he’s exactly 150-years-old, born in 1863. Indeed, we might be thinking of Santa’s predecessor St. Nicholas, who is far older, believed to have been a Turkish Greek bishop in the 300s. But the first European winter gift-bringer is even more of a geezer, going back to ancient Germanic paganism and the Norse god Odin. When he wandered the earth, the deity disguised himself as a bearded old man wearing a broad-brimmed hat and cloak and carrying a traveler’s staff. He looked a lot like Gandalf the Gray in “Lord of the Rings.”

The gift-bringer showed up during the 12-day winter festival known as Yule, where it was a tradition to burn a whole tree, from bottom up. (This evolved into the smaller “Yule log” we know today.) Children would leave hay in their shoes for Odin’s eight-legged horse and find it replaced with treats the next day. As Europe became Christianized, these beliefs were absorbed into the Christian faith, and St. Nicholas, celebrated on December 6, was bestowed with the gift-bringer mythology.

And related to Odin’s eight-legged horse:


via xkcd

Whatever this weekend means for you, I hope you and those you care for are safe and having a happy holiday, free of eight-legged horses.  (Unless of course an eight-legged horse is what you want for Christmas.)

Merry Christmas to all my online friends!
