What’s at the edge of the universe?

Image credit: Pablo Carlos Budass via Wikipedia

Gizmodo has an interesting article that someone asked my thoughts on.  Part of their “Giz asks” series, it asks various physicists what’s at the edge of the universe?  The physicists polled include Sean Carroll, Jo Dunkley, Jessie Shelton, Michael Troxel, Abigail Vieregg, and Arthur B. Kosowsky.

They all give similar answers: space isn’t known to have any edge.  It may be infinite, or it may curve back on itself into an extra-dimensional sphere or torus (donut) shape, meaning that if you travel in one direction long enough, you might end up where you started.  The best measurements to date imply that space is flat, although within the uncertainty of our current measurements, we can’t rule out a curvature that eventually leads to one of those other shapes.

Many of the physicists mention the observable universe, and that the actual universe is thought to continue well beyond it, although one pointed out that we can’t rule out major variations just beyond the boundary of our observations.

If you think about the universe as all that we can causally interact with, then the observable universe could be considered our universe, with the “edge” being the edge of what we can observe, although that edge may change in the future.  Currently the furthest thing we can see is the cosmic microwave background radiation.  In terms of electromagnetic radiation, it’s hard to imagine we’ll ever see farther than that.

However, if cosmic inflation is correct, and a lot of physicists are convinced it is, then the causal universe might be far larger than the currently observable universe.  We can currently detect gravitational waves.  If we could ever detect such waves from the period of inflation in the first 10⁻³² seconds after the big bang, when the universe is thought to have expanded to 10³⁰ to 10¹⁰⁰ times its previous size, then we might be able to infer things about the universe far beyond the limits of electromagnetic observation.

Of course, that range refers to things from the past that could causally affect us.  If we only think about what we can causally affect from here on, then due to the ongoing expansion of the universe, the cosmic horizon has a radius of about 14-16 billion light years, which would be the limit of what we could ever conceivably have any causal influence on.  As I’ve written about before, this means that most of the universe we can observe is already forever beyond our reach.
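That cosmic horizon figure can be roughly reproduced with a back-of-the-envelope integral.  The sketch below assumes a flat ΛCDM universe with Planck-like parameters (H₀ ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31, ΩΛ ≈ 0.69); the parameter values and the crude numerical integration are my own illustrative choices, not anything from the articles discussed here.

```python
import math

# Rough sketch: proper distance to the cosmic event horizon in flat LambdaCDM.
# Parameter values are assumed Planck-like numbers, chosen for illustration.
H0_km_s_Mpc = 67.7
omega_m, omega_lambda = 0.31, 0.69

MPC_KM = 3.0857e19            # kilometers per megaparsec
SEC_PER_YEAR = 3.156e7
H0 = H0_km_s_Mpc / MPC_KM     # Hubble constant in 1/s
hubble_dist_gly = 1.0 / H0 / SEC_PER_YEAR / 1e9   # c/H0 expressed in Gly

def E(a):
    """Dimensionless Hubble rate H(a)/H0 for flat LambdaCDM."""
    return math.sqrt(omega_m / a**3 + omega_lambda)

# d_EH = (c/H0) * integral from a=1 to infinity of da / (a^2 * E(a)),
# here approximated with a midpoint Riemann sum truncated at a = 1000.
da = 0.01
integral = sum(da / ((a := 1 + (i + 0.5) * da)**2 * E(a))
               for i in range(int(999 / da)))

# Prints a figure in the same ballpark as the 14-16 billion light year estimate.
print(f"Event horizon: about {hubble_dist_gly * integral:.1f} billion light years")
```

With these assumed parameters the toy integral lands around 16 billion light years, consistent with the range quoted above: no signal we send today can ever reach anything farther than that.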

But it’s interesting to speculate what might happen if we’re ever able to travel FTL (faster than light).  What might we see beyond the observable universe?  Would it just be the same kind of stuff we can currently see going on into infinity?  Or would we eventually find regions of the universe where things are very different?

I mentioned cosmic inflation.  A variation of that idea is eternal inflation, where inflation is the natural state of spacetime, but that due to a random quantum fluctuation, a bubble of low inflation was created, aka our universe.  There are different conceptions about what the edge of this bubble might look like.  Some see it as a bubble of time as well as space, which we can’t leave because the edges of the bubble are the beginning of our universe in time, the big bang.

Other physicists have speculated that we could travel until we reached regions where the expansion of the universe was faster and faster, until we approached inflationary space.  Unless our method of FTL protected us in some manner, we could never enter inflationary space.  Aside from it perpetually receding from us, the expansion rate would overcome the nuclear forces holding our atomic nuclei together, not to mention the electromagnetic forces, and we’d be instantly ripped apart.

So travel to another bubble, even with an FTL drive, would probably never be a thing.  Even if it were, other bubbles are thought to have different laws of physics.  If we ever made it to another bubble, we might find its physics hostile to our form of life.

There have been measurements lately indicating that dark energy, the force driving the current expansion of the universe, may be increasing in strength.  If so, then within a few tens of billions of years, the universe as we know it might end in a “big rip.”  If we are in a bubble, then that bubble might eventually come to a violent end, perhaps dissolving into inflationary space.

I sometimes wonder if information of any kind, not to mention any form of life, could be preserved in inflationary space.  Based on the description, it doesn’t seem so.  I don’t know which is bleaker: the long slow heat death of the universe under constant dark energy, or a big rip in a few tens of billions of years.

So in pondering the edge of the universe, we have edges in observability (currently 13.8 billion years), in inward causality (depending on cosmic inflation), in outward causality (14-16 billion light years), and edges in time, including the big bang and one of the possible endings (heat death, big rip, and an increasingly unlikely one we didn’t discuss, the big crunch), as well as possible curving loopbacks and infinite expanses.

Did I miss anything?

Posted in Space | 16 Comments

Maybe the brain communicates via electrical fields after all

An interesting finding by scientists at Case Western Reserve University, that neurons may communicate via electrical fields:

Scientists think they’ve identified a previously unknown form of neural communication that self-propagates across brain tissue, and can leap wirelessly from neurons in one section of brain tissue to another – even if they’ve been surgically severed.

…To that end, Durand and his team investigated slow periodic activity in vitro, studying the brain waves in hippocampal slices extracted from decapitated mice.

What they found was that slow periodic activity can generate electric fields which in turn activate neighbouring cells, constituting a form of neural communication without chemical synaptic transmission or gap junctions.

It’s important to note that these results happened in vitro, that is, in an artificial environment outside of a living brain.  Still, this is the first time anyone has been able to scientifically demonstrate a form of communication between neurons called ephaptic coupling, communication via electrical field, which is regarded as a “jaw dropping” result.

Indeed it’s so jaw dropping that skepticism is called for.  We need to see replication of these results by other teams.  Unfortunately, I don’t anticipate that caution will be heeded in a lot of circles.  I suspect a lot of people are going to run with this result, regardless of its limitations and uncertainties.

It will be interesting to see if these results hold up!

Posted in Zeitgeist | 18 Comments

Did smell lead to consciousness?

Smell has apparently always been a peculiar sense.  The sensory pathway of smell information to the brain runs completely independently of the other senses.  The pathways for the other senses run through the midbrain and thalamus and are then relayed to cortical regions.  But smell goes to the olfactory bulb behind the nose, and from there directly to various forebrain regions such as the amygdala, hippocampus, and prefrontal cortex.


Fish brain
Image credit: Neale Monks via Wikipedia

This independent pathway is ancient.  From the earliest vertebrates, it appears that smell has always gone directly to the telencephalon (the forebrain) while the other senses went through the optic tectum (midbrain) region.

This is strange, because while other sensory information, such as vision, hearing, and touch, is routed to the forebrain in mammals, allowing the formation of sensory images in the cortex, this does not appear to happen in amphibians and reptiles.  The creation of sensory images in the forebrain, other than for smell, appears to be an innovation of mammals and birds (perhaps arrived at independently, making it an example of convergent evolution).

This has led many biologists to conclude that the telencephalon in fish and reptiles is basically just a “smell brain”.  This seems borne out by experiments with fish in which the telencephalon was destroyed, and the fish still seemed able to go about their normal lives.  However, they did lose the ability to learn or anticipate consequences, including new spatial navigation.  In other words, the fish lost the ability to remember and imagine, which seems to indicate that there is more at work in their forebrain than just smell.

But if you think about it, smell, unlike the other senses, is much more entangled with memory.  Smells of predators and prey linger after they’ve departed.  For an animal to make use of smell information requires memory and imagination, accessing past associations of the smell, whether it indicated a predator or some food source, and thinking about what the smell in the current situation means.  This isn’t necessarily true for vision, hearing, touch, or taste, where reacting reflexively to current stimuli can still be adaptive.

In other words, the rise of smell might have led to the rise of memory and imagination.  And as I’ve written before, sentience, the ability to feel, is only adaptive if it can be used for something.  This is why most neuroscientists see feelings, as we consciously perceive them, as linked to the same regions where imagination is coordinated: the frontal lobes in mammals, or more broadly the forebrain in non-mammalian vertebrates.  Which is to say that smell may have been what led to the evolution of sentience.

Todd Feinberg and Jon Mallatt, in their book The Ancient Origins of Consciousness, discuss and argue against this proposition.  For them, it seems far more reasonable to see vision as the sense which drove consciousness.  And strictly in terms of image based consciousness, they may be right.  But in early vertebrates, most of that image based consciousness seemed focused on the midbrain region, a region that doesn’t appear capable of memory and nonreflexive learning, behaviors typically associated with sentient consciousness.

An interesting question to ponder is, if vision, hearing, and the other senses are processed primarily in the optic tectum, the midbrain region for fish and reptiles, how much of that sensory information actually makes it to their telencephalon, that is, into their memories and imagination?  Humans have no introspective access to the low resolution images formed in our own midbrain region, only to the ones we form in our cortex.  But is the telencephalon of an amphibian or reptile able to access the visual information from its optic tectum?

John Dowling, in his book Understanding the Brain, points out that a frog, which can catch and eat flies with its tongue, can only see a fly if it is moving.  A frog in a cage stocked with fresh but dead flies will starve.  Dowling asks what the frog is actually “seeing” in that case.  It may be that the frog’s optic tectum can generate reflexive tongue movements, but that the frog itself has no conscious access to a visual image of the fly, or to any other visual images.

And yet, the telencephalon of these species can inhibit the reflexive reactions from their tectum.  In order to do so effectively, it seems like they should get some information from their tectum, their midbrain region.  In fact, Feinberg and Mallatt in their book indicate that some visual and audio information has been shown to make it to the telencephalon.  But it seems likely, similar to how we receive processed information from our midbrain, that this comes in the form of feelings rather than detailed sensory information.

This has led many biologists to conclude that amphibians and reptiles aren’t conscious.  As I’ve noted before, whether to call a particular species “conscious” is ultimately a matter of interpretation.  However, we can say that their experience of the world is very different from ours.  That experience does not appear to include visual and auditory images, although it may well include olfactory ones.

So it’s possible that we are conscious today because of smell.  This proposal is strange and counter-intuitive to us because we’re primates, a group of mammalian species where the sense of smell has atrophied.  But for most animals, smell is a major part of their worldview.

What do you think?  Did we use smell to climb the ladder of sentience and then, as primates, kick that ladder loose?  Does the lack of visual images mean fish aren’t conscious?

Posted in Mind and AI | 34 Comments

When does personhood begin?

Gary Whittenberger has an article at Skeptic discussing personhood and abortion:

The pro-person position, as I have outlined it in this essay, recognizes the late fetus and the host woman both as persons with human rights. When these rights come into conflict, as can occur during the last 15 weeks of pregnancy, then the state must intervene through a clear constitution, laws, and/or policies to resolve the conflict. The pro-person position provides a specific path for resolution. The prolife position has been mistaken from the start. It is indefensible to invoke a magical “ensoulment” and to thereby classify the zygote as a person. While more reasonable, the pro-choice position is also off the mark. It has relied on obsolete notions such as trimesters, viability, and privacy implied in or lifted from Roe v. Wade and the premise that a fully conscious fetus is not a person. On the other hand, the pro-person position corrects all these errors and is based on a solid philosophical and scientific foundation, which can still change as new evidence, reasons, and arguments are brought forth. In summary, the core idea of the pro-person position is that the human organism becomes a human person when it acquires the capacity for consciousness at approximately 25 weeks after conception.

Whittenberger’s manner in this article has an ongoing air of triumph which I find annoying, and it makes me want to find reasons to disagree with him.  However, I can’t say I disagree much.  He looks at the neuroscientific evidence as it currently stands, which holds that consciousness is a thalamo-cortical phenomenon, then looks at when the thalamo-cortical system comes online, around the 25 week milestone of a pregnancy.  It’s roughly around the time that the two cortical hemispheres start firing in synchrony.  He marks that as the point of personhood.  As a pragmatic line, it seems plausible enough.

Where I do disagree with him, and others, is the idea that we can draw a sharp line at any point and say, “Here be consciousness.”  I don’t think it works like that.  Whittenberger notes that many see the onset of consciousness as more of a dimmer knob than an on-off light switch, but then largely dismisses that view.  I think hastily so.

On the other hand, I don’t know that it really weakens his thesis.  I doubt that there are any glimmers of consciousness before the point where he draws the line.  And at that line itself, we should remember that the cerebrum remains very immature.  It doesn’t even display discernible sleep-wake cycles until weeks 28-30.  And the fetus doesn’t display any indications that it plans its movements, a key indicator that there’s something more than reflexes at work, until the last few weeks of pregnancy.  So Whittenberger’s line should probably be regarded as one drawn with an abundance of caution.

That said, the overall problem with arguments like this is that the abortion debate isn’t really about fetal welfare.  If it were, then these kinds of arguments might have some sway.  The real issue is people’s attitudes toward sexual promiscuity.  Those with a relaxed attitude toward promiscuity tend to be pro-choice, while those who condemn promiscuity tend to be pro-life.

This is borne out by the fact that most pro-life people have no real problem with the death penalty, which seems to be incompatible with the whole sanctity of life argument.  And they are usually willing to make exceptions for rape or incest, cases where the woman’s lifestyle choices presumably don’t lead to the situation she’s in.

And to be even handed, the pro-choice folks are often fine with laws that restrict people’s personal freedoms in other ways, such as seat-belt or drug laws, indicating that there’s more than just a libertarian impulse for reproductive freedom at work.  In both cases, attitudes toward sexual lifestyles seem more relevant.

In any case, it’s interesting to ponder when a developing human reaches various cognitive milestones.  Here our discussions about brainstem consciousness become relevant in a major way.   But however it’s enabled, what we call consciousness, to me, seems like something that comes on gradually throughout the fetal period and first two years of life.

In that sense, a newborn doesn’t strike me as being more than minimally conscious, although this changes rapidly in the first few months of life.  But they don’t seem to display the full range of metacognitive self awareness until around 18-24 months of age.  It may not be a coincidence that our earliest childhood memories only go back to the 2-4 year old mark.

What do you think?  When do you think personhood begins?  When does it end?

Posted in Zeitgeist | 26 Comments

Why are we real?

Nathaniel Stein has an interesting article at Aeon, The why of reality:

The easy question came first, a few months after my son turned four: ‘Are we real?’ It was abrupt, but not quite out of nowhere, and I was able to answer quickly. Yes, we’re real – but Elsa and Anna, two characters from Frozen, are not. Done. Then there was a follow-up a few weeks later that came just as abruptly, while splashing around a pool: ‘Daddy, why are we real?’

After spending some time pondering exactly what his young son meant with this existential question, and veering through a good part of philosophical history (curiously sans Descartes), Stein finishes with this:

How, then, can I give a good answer, if the question is about what’s real and what’s pretend? I suppose the right answer has something to do with the fact that trustworthy images are causally connected to their subjects in the right ways, and carry information about them because of these causal connections, but I don’t think we’re ready for that. I settle for something a bit simpler: he can do whatever he decides to do, but Elsa can do only what happens in the story. It’s not great, but it’s a positive message. It works for now.

That’s an interesting answer.  It assumes we have more free will than characters in a story.  While I’m a compatibilist on social responsibility, I don’t think we have the contra-causal version of free will, which essentially makes us characters in the story of the universe.  The characters in the movie, if they had any viewpoint, would likely see themselves as having just as much freedom of choice as we do.

So that’s not the answer I would have given.  Mine would have been that you’re always real to yourself.  Even if we’re only characters in an advanced 22nd century video game, relative to ourselves, we exist.  The very fact of asking that question makes you real to yourself.  This is, of course, a version of Descartes’ “I think therefore I am”, although a bit more hesitantly as, “I think therefore I am to me.”

The question gets more challenging when considering everything else.  I’m a skeptic, so by what means do I determine what is real?  We only ever have access to our own consciousness.  Everything else “out there” is a theory, a predictive model that our minds build from experience.  But our minds can hold both real and unreal things.  Is there any concisely stated standard we can use to tell the difference?

In the end, I think that which enhances our ability to predict future experiences, future observations, is real.  That which can’t, isn’t.  This isn’t always a satisfactory answer, because often we won’t know for some time, or possibly ever, whether a particular notion fulfills this role.  But if there’s another standard, I’m not sure what it might be.

So what do you think?  What would your answer have been?

Posted in Zeitgeist | 72 Comments

Is the brainstem conscious?

(Warning: neuroscience weeds and references to gruesome animal research.)

Blausen.com staff (2014). “Medical gallery of Blausen Medical 2014”. WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436.

The vast majority of neuroscientists see consciousness as a cortical phenomenon.  It may be crucially dependent on sub-cortical and sub-cerebral structures, but subjective experience itself exists mainly or entirely in the neocortex.  In this view, the brainstem only produces reflex responses, with anything more sophisticated coming from higher level structures.

But there’s a small but vocal minority in neuroscience who see it differently.  Views in this camp vary somewhat.  Some are more cautious and see the brainstem as perhaps providing a more primal version of consciousness with the cortex providing higher level aspects of it, while others see the brainstem as the primary or even sole source of consciousness.

A scientist often cited for this view is Bjorn Merker and his paper: Consciousness without a cerebral cortex: A challenge for neuroscience and medicine.  (A PDF of the paper is publicly available.)

To understand what Merker is proposing, consider this diagram from the paper:

In each of the four images, the large oval on top is the cortex and overall cerebrum, while the small oval is the brainstem.  The white sections in each image are where consciousness is proposed to reside, with the grey being non-conscious processes.  The two top images reflect, more or less, mainstream neuroscience, with consciousness being entirely a cerebral phenomenon, although in the top right image, it is more crucially dependent on sub-cortical but cerebral structures such as the thalamus and basal ganglia.

The bottom images reflect the minority camp, with the bottom left one reflecting more cautious views involving consciousness spanning both the brainstem and cortex, and the bottom right one the more uncompromising version that only the brainstem is conscious, with the cerebral structures only supplying pre-conscious content.

Merker in the main paper seems to argue for the bottom right view, although in his response to the commentary that was published with the paper, he seems to back off a bit, retreating toward the bottom left view.  (Unfortunately, the commentary and response are pay-walled, but can be found here.)

So how does Merker reach this conclusion?  There’s a lot in this paper, and this post is going to necessarily be selective and highly summarized.  If you’re interested in the details, I highly recommend the paper itself.  It’s a fascinating read for anyone interested in neuroscience, albeit a very technical one.

Merker first cites the work of neurosurgeons Wilder Penfield and Herbert Jasper in the middle 20th century, who performed surgeries on patients with severe epileptic seizures.  It was often necessary for them to remove large tracts of the patient’s neocortex.  While undergoing the procedure, the patients were kept conscious with a local anesthetic so the surgeons could communicate with them and know if they were damaging their cognition.

In these procedures, Penfield and Jasper were impressed by the fact that removal of cortical sections never seemed to interrupt the consciousness of the patient.  They proposed that consciousness must be maintained in lower level structures.

Merker then discusses the Sprague effect.  When one side of a cat’s visual cortex is removed, despite functional eyes, the cat becomes unresponsive to half of its visual field.  In human patients, similar damage results in cortical blindness (and sometimes the phenomenon of blindsight).  However, when the cat’s upper brainstem is additionally damaged in a certain manner, some of the cat’s abilities to respond to visual stimuli returns.

Merker also discusses the abilities of rats that have been decorticated, that is, had their neocortex removed but with the rest of the brain left intact.  These rats often retain a remarkable ability to navigate and engage in customary behavior, including reproduction, despite losing many cognitive abilities detectable to a trained observer.

Finally, Merker discusses hydranencephalic children.  These are children who typically suffer a stroke in the womb that destroys much of their brain.  Generally they are born with only the brainstem and a few lower level cerebral structures.  Their cognitive ability seems to be roughly limited to that of newborns, although they never move beyond that stage.  Despite substantially missing a neocortex, they reportedly display powerful indications of a sort of primal consciousness.

There are issues with all these lines of evidence that weaken Merker’s case.  Some of them Merker admits to, but then summarily dismisses.  For example, in the case of the cat, another interpretation is that the followup damage to the upper brainstem merely destroys the cat’s ability to inhibit its reflexive reactions to visual stimuli, and decorticated rats retain a lot of cerebral structures that mainstream neuroscience sees as sufficient for habitual behavior.

But I’m going to focus on a broader issue.  As neuroscientist Anton Coenen asked in his commentary, “But what kind of consciousness is this?”  When we use the word “consciousness”, we can mean all kinds of things, but there are at least three broad meanings that often get conflated:

  1. Being awake and responsive to stimuli
  2. Awareness with phenomenal experience
  3. Self reflection

When we see behavior indicating 1, we tend to assume that all three versions are present.  In the case of a healthy developed human, it’s usually a safe assumption.  But the further we get from healthy humans, the weaker that assumption becomes.  In non-human animals, 3 may be limited to only a few primate species, and many patients in a vegetative state seem to have 1 without 2 or 3.

On inferring which level of consciousness is present based on behavior, I’m going to quote Richard Feynman on scientific observation:

The first principle is that you must not fool yourself — and you are the easiest person to fool.

Cargo Cult Science

Nowhere is this principle more needed than when using behavior to assess mental states in non-human animals and brain injured humans.  We have to be careful about taking affect displays such as crying, facial expressions, avoidance reflexes, etc, as evidence.  As intuitively powerful as they are, affect displays do not necessarily indicate conscious affective feelings.  Human psychology studies show that many affect displays are unconscious.  This is why body language and unguarded facial expressions are often cited as better indicators of mental states than the more conscious behavior.

The consciousness hierarchy above highlights how important it is to be clear about which type of consciousness we’re discussing.  Merker, to his credit, explicitly identifies the definition he’s working with: information integration for action.  And, despite my quibbles above, I do think he makes a good case that integration for action happens in the upper brainstem.  But integration for action only meets the first level of consciousness.

Consider the phenomenon of mind wandering.  I can be driving to work, mowing the lawn, taking a shower, or doing a host of other complicated physical tasks with little if any conscious thought going into what I’m physically doing.  When driving, I can be thinking about the next blog post I’m going to write or how I’m going to handle a presentation at work.  Clearly some part of my brain is doing integration for action in order for me to drive, but it doesn’t seem to be the parts we normally label as conscious, at least until something about the driving requires that I focus on it, on what needs to happen next.

In practice, most of the habitual automatic but learned behaviors described above are controlled by my basal ganglia, sub-cortical structures above the brainstem.  But if a loud noise causes a startle reflex, that is handled by the brainstem.  The frontal lobe cortex only seems to be involved when some degree of planning is needed, even if only planning for the next few seconds, utilizing integration for planning.

Merker seems right that what happens in the brainstem is the final integration for action, and that all action goes through it.  But the brainstem itself only appears to have its reflexive reactions, reactions which can be inhibited by higher level structures.  Whether those inhibitions arrive is driven by higher level integrations.  These structures process an enormous amount of information that never makes it down to the brainstem.

For example, about 10% of the axons from the retina project to the superior colliculus in the upper brainstem region.  Most of the remainder, including all of the axons from the color sensitive cone cells, project to the thalamus and visual cortex.  This means that the redness of red and many other conscious qualities only happen in the cerebrum.  That information is used by the cortex to decide which reflexive reactions in the brainstem to allow and which to inhibit.  The superior colliculus does have low resolution colorless images, but we appear to have no introspective access to them.

None of this is to imply that the brainstem isn’t crucial for consciousness, particularly the first level.  It arouses the cerebrum, provides the underlying impulses and valences that form the core of feelings, and generally drives the overall system toward homeostasis.  Everything above it is an elaboration of those functions.  But that doesn’t mean it has phenomenal awareness.  What it means is that what we call phenomenal consciousness is itself an elaboration of the brainstem’s fundamental functions.

So perhaps a better way of saying this is that Merker and those of similar disposition aren’t wrong about the brainstem’s primacy.  The more cautious views aren’t even wrong that the brainstem has lower level consciousness in the sense of the first level described above.  They’re only wrong to the extent they claim that primacy includes phenomenal or self reflective consciousness.

As is often the case, much of the differences between mainstream neuroscience and the more cautious views in the minority camp amount to differences in what people are willing to call “conscious.”

Unless of course I’m missing something?

Posted in Mind and AI | 24 Comments

Strong vs weak emergence

The Neuroskeptic has an interesting post on a paper challenging theories of mind based on strong emergence.

A new paper offers a broad challenge to a certain kind of ‘grand theory’ about the brain. According to the authors, Federico E. Turkheimer and colleagues, it is problematic to build models of brain function that rely on ‘strong emergence’.

Two popular theories, the Free Energy Principle aka Bayesian Brain and the Integrated Information Theory model, are singled out as examples of strong emergence-based work.

I’m familiar with IIT (Integrated Information Theory), and as many of you know, I’m not a fan.  To be sure, integration is crucial, but in and of itself, it isn’t sufficient.  It matters what the integration is for.  IIT strikes me as a theory attempting to explain how the ghost in the machine arises.  Since I think the ghost is a mistaken concept, the theory seems fundamentally misguided.

I’m not really familiar with the Free Energy Principle, although it comes up in conversation from time to time.  The link discusses a Bayesian understanding of the brain, which seems plausible enough, although I’m not sure how strong emergence necessarily fits in.
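As I understand it, the core move in Bayesian-brain accounts is just iterated belief updating: a prior over hidden causes gets revised by how well each cause predicts the incoming signal.  A minimal sketch, with hypotheses and numbers entirely made up for illustration:

```python
# Minimal Bayesian update, the basic move behind "Bayesian brain" accounts.
# The hypotheses and probabilities here are invented for illustration only.
prior = {"predator": 0.1, "wind": 0.9}        # prior belief about the cause of a rustle
likelihood = {"predator": 0.8, "wind": 0.3}   # P(observed rustle | cause)

# Bayes' rule: posterior proportional to prior * likelihood, normalized.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # belief shifts toward "predator", though "wind" stays more likely
```

In these accounts the posterior then becomes the prior for the next moment's sensory input, so the system's model of the world is continually nudged toward whatever minimizes prediction error.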

But the reason for this post comes from a quote from the paper:

A system is said to exhibit strong emergence when its behaviour, or the consequence of its behaviour, exceeds the limits of its constituent parts. Thus the resulting behavioural properties of the system are caused by the interaction of the different layers of that system, but they cannot be derived simply by analysing the rules and individual parts that make up the system.

Weak emergence on the other hand, differs in the sense that whilst the emergent behaviour of the system is the product of interactions between its various layers, that behaviour is entirely encapsulated by the confines of the system itself, and as such, can be fully explained simply through an analysis of interactions between its elemental units.

I occasionally note that I consider emergent phenomena to be just as real as the underlying phenomena they emerge from.  Temperature is just as real as particle kinetics; that is, it remains a productive concept in our models.  There’s often a sentiment to regard emergent phenomena as illusions, but that doesn’t strike me as productive, since too much of what we deal with is emergent.  Such an attitude can leave you questioning whether anything other than quantum fields and spacetime exists.

Emergence, for me, is strictly an epistemic concept.  It’s more about what our minds can cope with and understand than anything ontological.  It’s simply a point in the hierarchy of phenomena where it becomes productive for us to switch to a new model, a new theory to describe what’s going on.  This understanding matches up with weak emergence.
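The temperature case makes the weak-emergence idea concrete: the macro-level property can be computed outright from the micro-level description, with nothing left over.  Here’s a minimal sketch in Python; the particle values are illustrative (roughly a helium-like gas), not a real simulation:

```python
import random

# Weak emergence in miniature: temperature is fully derivable from the
# micro-level facts (particle mass and velocities) via (3/2)kT = (1/2)m<v^2>.
K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 6.6e-27       # particle mass in kg, roughly a helium atom

def temperature(velocities):
    """Kinetic temperature from a list of (vx, vy, vz) velocities in m/s."""
    mean_sq = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / len(velocities)
    return MASS * mean_sq / (3 * K_B)

random.seed(0)
sigma = 792.0  # m/s per velocity component, chosen to land near 300 K
gas = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
       for _ in range(100_000)]
print(f"emergent temperature: {temperature(gas):.0f} K")
```

The point isn’t the simulation; it’s that nothing beyond the parts and their interactions was needed to recover the higher-level quantity, which is exactly what weak emergence asserts.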

On the other hand, strong emergence is an ontological assertion.  It’s a statement that something wholly new comes into existence from the lower level phenomena, something that can’t be reduced to its constituents and interactions, even in principle.  This type of emergence strikes me as far more problematic.

While I do think emergence is an important concept, I usually resist it as an explanation by itself for anything, particularly something like consciousness.  Certainly what we call consciousness is emergent from neural activity, but simply saying that doesn’t seem like an interesting or useful explanation.  It matters a great deal how it emerges.  When we understand that emergence, similar to the way we understand how temperature emerges, then we’ll have something useful.

What do you think?  Am I too dismissive of strong emergence?  Or of IIT?  Anyone familiar enough with the Free Energy Principle to succinctly describe it?

Posted in Zeitgeist | 42 Comments

Why faster than light travel is inevitably also time travel

I’ve always loved space opera, but when I was growing up, as I learned more about science, I discovered that a lot of the tropes in space opera are problematic.  Space operas, to tell adventure stories among the stars, often have to make compromises.  One of the earliest and most pervasive is FTL (faster than light) travel.

Interestingly, the earliest interstellar space opera stories in the late 1920s largely ignored relativity.  E.E. “Doc” Smith and Edmond Hamilton simply had their adventurers accelerate away at thousands of times the speed of light.  If relativity was mentioned, it was just as a superseded or wrong theory.

But by the early 1930s, authors found a way to seemingly avoid outright ignoring Einstein by simply hand waving technologies that bypassed the laws of physics.  One of the earliest and most enduring was hyperspace, a separate realm that a spaceship could enter to either travel faster than light, or where distances were compressed.  Over the decades, hyperspace came in a wide variety of fashions and with a lot of different names: subspace, u-space, slipstream, etc.

One variant, popularized by Isaac Asimov in his Robot and Foundation series, has hyperspace as a realm where ships jump through it to instantly move light years away.  (I’ll be using this version in an example below.)

There are a wide variety of other FTL technologies that often show up in science fiction.  An interesting example is the ansible, a device that allows instant communication across interstellar distances.  Often the ansible shows up in stories where actual FTL travel is impossible, but an interstellar community is enabled by the instant communications.

I’ve written before that there are lots of problems with all of these ideas.  Generally they’re not based on actual science.  They’re just plot gimmicks to enable the type of stories authors want to tell.  And the few that are somewhat based on science, such as wormholes or Alcubierre drives, involve speculative concepts that haven’t been observed in nature.

But FTL has another issue, one that I only started appreciating a few years ago.  FTL, no matter how you accomplish it, opens the door to time travel.  Most FTL concepts are conceptualized within a Newtonian understanding of the universe.  In that universe, there is an absolute now which exists throughout all of space.  If we imagine a two dimensional diagram with space as the horizontal axis and time as the vertical, then now, or the absolute plane of simultaneity, exists as a flat line throughout the universe.

But that’s not the universe we live in.  We live in a universe governed by special and general relativity (or at least one where those theories are much more predictive than Newton’s laws).  In our universe, there is no single plane of simultaneity, no universal version of now.  In this universe, talking about what is happening “right now” for cosmically distant locations is a meaningless exercise.

Most people are aware that, under special relativity, time flows slower for a traveler at speeds approaching the speed of light.  But not everyone is aware that, from the traveler’s perspective, it’s the rest of the universe that is traveling near the speed of light and experiencing slower time.  How can both see the other as having slower time than themselves?  Because simultaneity is relative.

Image credit: Acdx via Wikipedia

As this animation shows (which I grabbed from the Wikipedia article on the relativity of simultaneity), under relativity, whether certain events occur simultaneously is no longer an absolute matter, but a relative one.  If B is stationary, then events A, B, and C all happen simultaneously.  However, if B is moving toward C, B’s plane of simultaneity slopes upward, leaving C in its past.  On the other hand, if B is moving toward A, C is now in its future.  (Note: this never allows information to influence the past because, in normal physics, information can only travel at or below the speed of light.)

An important point here is that these effects do not only happen at speeds approaching the speed of light.  They happen with any motion.  However, in normal everyday life, the effect is too small to notice, which is why Newton’s laws work effectively for relatively slow speeds and short distances.

Crucially, the tilt of the plane of simultaneity is present even at slow speeds; the angle is just too small to notice.  And while a small angle of deviation may not be noticeable over everyday distances (say, between New York and Sydney), or even over distances within the solar system, once the distances stretch to thousands, millions, or billions of light years, even minute angles grow into significant differences in time.

So imagine we have a spaceship heading out of the solar system at 1% of c (the speed of light).  Using the Asimovian version of hyperspace, the spaceship jumps to a destination 1000 light years away.

Which plane of simultaneity, which version of now, does the ship’s instant jump happen in?  The plane associated with stationary observers back on Earth?  Or the plane associated with the ship traveling at 1% c?  If it’s the ship’s plane, then when the ship exits hyperspace 1000 light years away, it will do so 10 years in the future of the stationary Earth observers.  (The offset is vL/c²: 0.01 × 1000 years.)

That is true if the spaceship’s hyperspace jump is in the direction of its 1% c velocity.  But if the 1000 light year jump is in the direction opposite its velocity, it will arrive 10 years in the stationary observers’ past.
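The offset in this scenario follows from the relativity-of-simultaneity term Δt = vL/c².  A minimal sketch, with units chosen so that v is a fraction of c and L is in light years, which makes the answer come out in years directly:

```python
# Leading-order relativity-of-simultaneity offset: dt = v * L / c^2.
# With v as a fraction of c and L in light years, dt is in years.
# A toy sketch; the scenario numbers are from the text.

def simultaneity_offset_years(v_over_c, distance_ly):
    """Offset between the moving frame's 'now' and the rest frame's 'now'
    at a point distance_ly away along the direction of motion."""
    return v_over_c * distance_ly

# Ship at 1% of c jumping 1000 light years along its velocity:
print(simultaneity_offset_years(0.01, 1000))    # 10.0 (years into Earth's future)

# The same jump made against the direction of motion lands in Earth's past:
print(simultaneity_offset_years(0.01, -1000))   # -10.0
```

Out-and-back pairs of such jumps accumulate these offsets, which is the loophole that turns any FTL scheme into a time machine.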

It doesn’t take a whole lot of imagination to see how this technology could be used to travel to arbitrary points in the past or future.  All a ship would need to do is make repeated out-and-back jumps, aligned either with its velocity or against it, to move ever further forward or backward in time.

We encounter exactly the same issue with other versions of FTL, such as warp drives or versions of hyperspace that take time to travel through; the jump in time is just gradual rather than sharp.

In the case of ansibles, which version of simultaneity are the communications happening over?  The chances that the two correspondents happen to be traveling at the same velocity are nil.  The variances in the speeds of their stars’ movement around the galaxy, the orbits of their planets, etc., will all conspire to ensure that their various planes of simultaneity are out of sync with, and constantly changing in relation to, each other.  An ansible accelerated to relativistic speeds could be used to communicate with the past or future.

Even wormholes would be an issue.  The wormholes in fiction always connect distant points together in the same now, but wormholes are connections between two points in spacetime.  There’s no particular reason a wormhole would be limited to some arbitrary version of now.  Indeed, a natural wormhole, like the one in Star Trek: Deep Space Nine, would be more likely to open to some distant point in the future, long after the heat death of the universe, than to somewhere along the Bajoran plane of simultaneity.

We might imagine that if the FTL technology allowed us to choose which plane of simultaneity we moved under, maybe everyone would just agree on some standard, albeit an arbitrary one.  But that only makes the time travel capability more pronounced.  Orson Scott Card made the point years ago that if you’re going to introduce a technology into your fictional universe, you should account for all the ways that technology might be used, or abused.

It’s often said that the absence of tourists from the future probably indicates that time travel is impossible.  Even if future societies have strict taboos against interfering with the past, the idea that such taboos would hold for all societies until the end of time seems unsustainable.  Since FTL is also time travel, the same observation would seem to rule out most forms of it.  (Star gates or wormholes where a destination version has to be built might be the only ones that avoid this issue.)

Unless of course there’s something I’m missing about this?

Posted in Science Fiction, Space | 81 Comments

Being committed to truth means admitting the limitations of what we can know

Michela Massimi has a long article at Aeon defending scientific realism.

The time for a defence of truth in science has come. It begins with a commitment to get things right, which is at the heart of the realist programme, despite mounting Kuhnian challenges from the history of science, considerations about modelling, and values in contemporary scientific practice. In the simple-minded sense, getting things right means that things are as the relevant scientific theory says that they are.

…We should expect science to tell us the truth because, by realist lights, this is what science ought to do. Truth – understood as getting things right – is not the aim of science, because it is not what science (or, better, scientists) should aspire to (assuming one has realist leanings). Instead, it is what science ought to do by realist lights. Thus, to judge a scientific theory or model as true is to judge it as one that ‘commands our assent’. Truth, ultimately, is not an aspiration; a desirable (but maybe unachievable) goal; a figment in the mind of the working scientist; or, worse, an insupportable and dispensable burden in scientific research. Truth is a normative commitment inherent in scientific knowledge.

Scientific realism is the belief that scientific theories describe the real world.  For a realist, when general relativity talks about space being warped, there is actually a thing out there being warped.  When particle physics talks about an electron, it is referring to a real thing out there that definitely exists.

The main alternative to scientific realism is instrumentalism, which holds that scientific theories are frameworks, tools, for predicting observations, with no particular guarantee that they describe reality.  Specifically, the reality of any statements the theory makes beyond its testable predictions is suspect.  Often the testable predictions arise from the theory’s mathematics, whereas the non-testable ones arise from the accompanying language narrative, but there can be non-testable mathematical predictions and testable narrative ones.

I’ve written before that I actually find this distinction invalid, because our understanding of reality is itself just another mental model.  Our brains build models of the world, the accuracy of which we can only measure by how well they predict future experiences.  I can see no other measure of truth.  (If you can think of another one, please let me know in the comments.)
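One way to make that measure concrete is to score competing models by how well they predict observations.  A toy sketch, with invented data and models (the numbers approximate distance fallen under gravity, d = 4.9t²):

```python
# Instrumentalist criterion in miniature: between two models, prefer the
# one whose predictions better match observation. Data and models are
# invented for illustration.

def mean_squared_error(model, observations):
    """Average squared gap between a model's predictions and what we observe."""
    return sum((model(x) - y) ** 2 for x, y in observations) / len(observations)

# Observations of a falling object: (time in s, distance fallen in m).
observations = [(1, 4.9), (2, 19.6), (3, 44.1), (4, 78.4)]

linear_model = lambda t: 4.9 * t          # distance grows linearly with time
quadratic_model = lambda t: 4.9 * t * t   # Galileo: distance grows as t squared

# The quadratic model "gets it right" in the only testable sense available:
assert mean_squared_error(quadratic_model, observations) < \
       mean_squared_error(linear_model, observations)
```

Nothing in this comparison requires claims about what the models describe beyond their predictions, which is the instrumentalist point.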

Emotionally I am a scientific realist.  I try to maintain a commitment to truth and do see science as the pursuit of truth.  Indeed, I think it’s the best tool we have for learning about the truth.  But part of being faithful to truth means acknowledging the limitations of what we can know about it.  And my questioning of the distinction between realism and instrumentalism makes me, to a hard realist, a non-realist.

So, intellectually, I’m an instrumentalist.  All we ever get are conscious experiences.  A successful scientific theory predicts future experiences.  All statements about objective reality are theoretical.  The nature of that reality, beyond what can be tested about our theories, is unknown.

Which brings me to this statement in Massimi’s article:

Constructive empiricists, instrumentalists, Jamesian pragmatists, relativists and constructivists do not share the same commitment. They do not share with the realist a suitable notion of ‘rightness’.

As an instrumentalist, I say baloney!  Massimi mischaracterizes all alternatives to realism as accepting arbitrary notions.  While there are details about James’ pragmatic theory of truth I’m unsure about, I find the overall idea sound.  What is true is what works, what enhances our ability to make accurate predictions.

A commitment to this standard is just as much a commitment to “getting it right,” but with the added benefit that adherence to it can be tested.  Ultimately instrumentalism, and similar philosophies, are simply epistemic humility, which I think is crucial for actually getting it right.

Unless of course I’m missing something?

Posted in Zeitgeist | 50 Comments

Consciousness lies in the eye of the beholder

There are few things that everyone who ponders consciousness can agree on.  It’s a topic where debates on the very definition of the subject are common.  The only definitions that seem to command near universal assent are the ones oriented toward phenomenology, such as “subjective experience” or “something it is like.”  And even then, the question of whether these are real or illusory is hotly debated.

Moving beyond phenomenology, many people still hold to substance dualism, the idea that the mind cannot be explained with mere physics, chemistry, biology, etc., that something else is needed.  We appear to have a strong innate intuition for this view.  I think it comes from the fact that our mental model of a mind bears little relation to our model of the physical brain.  It leads to the “hard problem of consciousness.”

But the hard problem appears to actually just be a psychological one, a difficulty in accepting what over a century and a half of neuroscience is telling us, that there is no evidence for dualism.

Many people accept the above logic intellectually, but still retain latent dualistic intuitions.  Well, I guess we all retain those intuitions to some extent, but not everyone remembers to discount them, the same way we discount our intuitions that the earth is stationary, that humans aren’t animals, or that space and time are absolute.

In summary, there is no evidence for a spiritual ghost in the machine, nor is there any for an electromagnetic ghost, a quantum ghost, or even a physical one in the sense of a particular location in the brain holding the soul or psyche.  There is just the machine and what it does.

You could make the case that there is an overall informational ghost, but that would be true only to the extent that the “ghost” of Microsoft Windows is in the laptop I’m typing this post on.

This has implications for the concept of consciousness that I think many resist, even many stone cold materialists.  We have subjective experience that is generated by the capabilities of our nervous system.  Our own experience is the only one we ever get access to.  We can only infer the existence of similar experiences in other systems.  (In philosophy, this is known as the problem of other minds.)

Consciousness is a label we affix to a collection of capabilities that the information system we call our mind possesses.  (The exact composition of which is itself a matter of ongoing debate.) When we ask if something else is conscious, I think what we’re really asking is if it processes information similar to the way we do and has similar drives.

So, when Bob ponders whether Alice is conscious, he’s basically thinking about how much Bob-ness she has. When Alice ponders Bob’s consciousness, she’s thinking about how much Alice-ness he has. When humans ponder animal consciousness, we’re wondering how much human-ness they have.  And when we ponder machine consciousness, we’re wondering how much life-ness they might have.

This, incidentally, is very natural for us as social creatures.  Pondering how much another entity thinks like us likely goes back at least to the earliest social species.  Perhaps earlier animals even had an incipient theory of mind for prey and predators.  This mode of thinking, to widely varying degrees, may be very ancient.

But it’s always a matter of judgment because no two systems process information in exactly the same way. Even different members of the same species are going to vary. And the further from mentally complete humans we move, the less like us they process information, and the more in doubt their us-ness is.

This is just a special case of the fact that whether a particular system implements a particular function is always a matter of judgment.  To say that it isn’t is to invoke teleology, the idea that natural systems have some inherent purpose.  But teleology was abandoned in science centuries ago, because it could never be objectively demonstrated.  Function is an interpretation.

From the similarities, we decide how much moral consideration a particular system should have.  If we decide that it should have it, we tend to think of it as conscious.  Consider all the cases where someone argues that a creature is conscious or sentient, that it’s like us, in order to make the case that it should be treated better.  But if there is no objective morality, then it follows that there is no objective consciousness.

A commonly expressed objection to this is that it’s circular and subject to infinite regress.  But this can be said for any evolved trait.  How could the trait, particularly a social one, start if it’s required to first be in a parent or in partners?  The answer is generally that the trait evolved gradually.  The same can be said for consciousness.  There was never a first conscious creature, just increasing capabilities until a point was reached where we might be tempted to apply the label “conscious.”  But the first animal “worthy” of that label would not have been dramatically different from its parents.

All of which is to say, I think asking whether a system is conscious, as though consciousness is a quality it either possesses or doesn’t, is meaningless.  Such a question is really about whether it has a soul, an inherently dualistic notion.  Our judgment on this will come down to how much like us it is, how human it is.  When put that way, the answer seems somewhat obvious.  Some species, such as chimpanzees, are obviously a lot more like us than others, such as fish or snails, but all are currently much closer to us than any technological system.

This raises the question of whether we would ever consider a machine intelligence to be conscious unless it had very human-like, or at least life-like, qualities.  When Alan Turing proposed his famous test (now known as the Turing Test), he did so to move the debate on whether machines could think from philosophy to science.  But he may have identified the only true measure of other minds we can ever employ.  Some critics objected that Turing was really just testing for how human-like a system was, but that may have been the very point.

It seems that whether any given system is “conscious” is something that lies in the eye of the beholder.

Unless of course I’m missing something?

Posted in Mind and AI | 91 Comments