Panpsychism and definitions of “consciousness”

Disagreeable Me asked me to look at this interesting TED talk by Professor Mark Bishop.

The entire talk is well worth the time (20 minutes) for anyone interested in consciousness and the computational theory of mind, but here’s my very quick summation:

  1. The human mind, and hence consciousness, is a computational system.
  2. Since animal minds are computational, other computational systems that interact with their environment, such as the robots Dr. Bishop discusses in the video, should also be conscious.
  3. Everything in nature is a computational system.
  4. Given 3, everything in nature has at least some glimmers of consciousness.  Consciousness pervades the universe.

The conclusion in 4 is a philosophical position generally called panpsychism.  It’s a conclusion that many intelligent people reach.

First, let me say that I fully agree with 1.  Although it’s often a ferociously controversial conclusion, no other theory of mind holds as much explanatory power as the computational one.  Indeed, many of the other theories that people often prefer seem to be more about preserving and protecting the mystery and magic of consciousness, forestalling explanation as long as possible, rather than making an actual attempt at it.

I also cautiously agree with 3.  Indeed, I might say that I fully agree with it, because if we find some aspect of nature that we can’t mathematically model, we’ll expand mathematics as necessary to do it.  (See Newton’s invention (discovery?) of calculus in order to calculate gravitational interactions.)  We could argue about exactly what computation is and whether something like a rock does it in any meaningful sense, but with a broad and long enough view (geological time scales), I think we can conclude that it does.

When pondering 2, I think we have to consider our working definition of consciousness.  We could choose to define it as a computational system that interacts with the environment.  If we do, then everything else follows, including panpsychism.

But here’s where panpsychism fails for me.  The question we then need to ask is: what follows from it?  If everything is conscious, what does that mean for our understanding of the universe?  Does it tell us anything useful about human or animal consciousness?

Or have we just moved the goal line from trying to understand what separates conscious from non-conscious systems, to trying to understand what separates animal consciousness from the consciousness of protons, storm systems, or robots?  Panpsychists may assert that the insight is that there’s no sharp distinction, that it’s all only a matter of degree.  I’m not sure I’d agree, but even if we take it as given, those degrees remain important, and we’re still left trying to understand what triggers our intuitive sense of consciousness.

My own view is that consciousness is a computational system.  Indeed, all conscious systems are computational.  However, the reverse is not true.  Not all computational systems are necessarily conscious.  Of course, since no one can authoritatively say exactly what consciousness is, this currently comes down to a philosophical preference.

People have been trying to define consciousness for centuries, and I’m not a neuroscientist, psychologist, or professional philosopher, so I won’t attempt my own.  (At least not today. :-) )  But often when definitions are elusive, it can help to list what we perceive to be the necessary attributes.  So, here are aspects of consciousness I think would be important to trigger our intuitive sense that something is in fact conscious (a toy sketch follows the list):

  • Interaction with the environment.
  • An internal state that is influenced by past interactions and that influences future interactions, i.e. memory.
  • A functional feedback model of that internal state, i.e. awareness.
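
To make those attributes concrete, here’s a deliberately toy sketch in Python.  Everything in it is invented for illustration: a minimal structure with environmental interaction, memory, and a self-model, not a claim about how consciousness is actually implemented.

    class ToyAgent:
        """A toy agent with the three attributes listed above.

        Every name here is invented for illustration; this is not
        a real cognitive architecture.
        """

        def __init__(self):
            self.memory = []  # internal state shaped by past interactions

        def interact(self, stimulus, outcome):
            # 1. Interaction with the environment.
            response = "avoid" if self.recalls_harm(stimulus) else "approach"
            # 2. Memory: this interaction influences future behavior.
            self.memory.append((stimulus, outcome))
            return response

        def recalls_harm(self, stimulus):
            return any(s == stimulus and o == "harmful" for s, o in self.memory)

        def self_report(self):
            # 3. A functional feedback model of the internal state.
            return f"I remember {len(self.memory)} interactions."

    agent = ToyAgent()
    agent.interact("flame", "harmful")  # "approach" -- no memory yet
    agent.interact("flame", "harmful")  # "avoid" -- memory now shapes behavior

No one would call this toy conscious, of course; it’s just the bare skeleton the list describes.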

I think these factors can get us to a type of machine consciousness.  But biological systems contain a few primary motivating impulses.  Without these impulses, this evolutionary programming, I’m not sure our intuitive sense of consciousness would be triggered.

What are the impulses?  Survival and propagation of genes.  If you think carefully about what motivates all animals, it ultimately comes down to these directives.  (And technically, survival is a special case of the gene propagation impulse.)  In mammals and social species, it gets far more complex, with subsidiary impulses involving care of offspring and ensuring secure social positions for oneself and one’s kin (in other words, love), but ultimately the drive is the same.

It’s a drive we share with every living thing, and a system that is missing it may have a hard time triggering our intuitive sense of agency detection, at least in any sustained manner.  I think it’s why a fruit fly feels more conscious to us than a robot, even if the robot has more processing power than the fly’s brain.

Of course, a sophisticated enough system might cause us to project these qualities onto it, much as humans have done throughout history.  (Think worship of volcanoes, the sea, storms, or nature overall.)  But knowing we’re looking at an artifact created by humans seems like it would short-circuit that projection.  Maybe.

Anyway, those are my thoughts on this.  What do you think?  Am I maybe overlooking some epistemic virtues of panpsychism?  Or is my list of what would trigger our consciousness intuition too small?  Or is there another hole in my thinking somewhere?

Update: It appears I misinterpreted Professor Bishop’s views in the video.  He weighs in with a clarification in the comments.  I stand by what I said above about general panpsychism, but his view is a bit more complex, and he actually intended it as a presentation of an absurd consequence of the idea of machine consciousness.


97% of the observable universe is forever unreachable


Artist’s logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and Big Bang’s invisible plasma on the edge. By Pablo Carlos Budassi

The other day, I was reading a post by Ethan Siegel on his excellent blog, Starts With a Bang, about whether it makes sense to consider the universe to be a giant brain.  (The short answer is no, but read his post for the details.)  Something he mentioned in the post caught my attention.

But these individual large groups will accelerate away from one another thanks to dark energy, and so will never have the opportunity to encounter one another or communicate with one another for very long. For example, if we were to send out signals today, from our location, at the speed of light, we’d only be able to reach 3% of the galaxies in our observable Universe today; the rest are already forever beyond our reach.

My first reaction when reading this was, really?  3%.  That seems awfully small.

What Siegel is talking about is an effect of the expansion of the universe.  Just to be clear, “expansion of the universe” doesn’t mean that galaxies are expanding into space from some central point, but that space itself is expanding everywhere in the universe proportionally.  In other words, space is growing everywhere, causing distant galaxies to become more distant, and since more intervening space means more expansion, the more distant a galaxy is from us, the faster it is moving away from us.

This means that as we look further and further away, the movement of those galaxies relative to us gets closer and closer to the speed of light.  Beyond a certain distance, galaxies are moving away from us faster than the speed of light.  (This doesn’t violate relativity because those galaxies, relative to their local frame, aren’t moving anywhere near the speed of light.)  That means they are outside of our light cone, outside of our ability to have any causal influence on them, outside of what’s called our Hubble sphere (sometimes called the Hubble volume).  Note that we may still see galaxies outside of our Hubble volume if they were once within the Hubble sphere.

How big is the Hubble sphere?  We can calculate its radius by dividing the speed of light by the Hubble constant, H0, the rate at which space is expanding.  H0 is usually measured to be around 70 kilometers per second per megaparsec, or about 21 kilometers per second per million light years.  In other words, for every million light years a galaxy is from us, on average, the space between that galaxy and us will be increasing by 21 km/s (kilometers per second).  So, a galaxy 100 million light years away is receding at 2100 km/s (21 × 100), and a galaxy 200 million light years away at 4200 km/s (21 × 200), plus or minus any motion the galaxies might have relative to their local environment.  The speed of light is about 300,000 km/s.  If we take 300,000 and divide by 21, we get a bit over 14,000.  That would be 14,000 million light years, or a Hubble sphere radius of around 14 billion light years.
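
For anyone who wants to play with the numbers, here’s the same back-of-the-envelope calculation in Python, using the rough values above:

    # Back-of-the-envelope Hubble sphere radius, using the rough values above.
    C = 300_000  # speed of light in km/s (approximate)
    H0 = 21      # expansion rate in km/s per million light years (~70 km/s/Mpc)

    radius_mly = C / H0  # radius in millions of light years
    print(f"Hubble sphere radius: {radius_mly / 1000:.1f} billion light years")
    # prints: Hubble sphere radius: 14.3 billion light years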

(If you’re like me, you’ll immediately notice the similarity between the radius of the Hubble sphere and the age of the universe.  When I first noticed this a few years ago, it seemed like too much of a coincidence, but I haven’t been able to find any relationship described in the literature.  It appears to be a coincidence, although admittedly a freaky suspicious one.)

Okay, so the Hubble sphere is 14 billion light years in radius.  According to popular science news articles, the farthest galaxies we can see are about 13.2 billion light years away, and the cosmic microwave background is 13.8 billion light years away, so everything we can see is safely within the Hubble sphere, right?

Wrong.  Astronomy news articles almost universally report cosmological distances using light travel time, the amount of time the light with which we’re seeing an object took to travel from the object to us.  For a relatively nearby galaxy, say 20-30 million light years away, that’s fine.  In those cases, the light travel time is close enough to the comoving or “proper” distance, the distance between us and the remote galaxy “right now”, that it doesn’t make a real difference.  But for objects that are billions of light years away, there starts to be an increasingly significant difference between the proper distance and the light travel time.

Those farthest viewable galaxies that are 13.2 billion light years away in light travel time are over 30 billion light years away in proper distance.  The cosmic microwave background, the most distant thing we can see, is 46 billion light years away.  So, in “proper” distances, the radius of the observable universe is 46 billion light years.
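
That 46 billion light year figure can be roughly sanity checked with a short numerical integration.  This is only a sketch, assuming a flat universe with illustrative parameters (H0 of about 70 km/s per megaparsec, 30% matter, 70% dark energy) and ignoring radiation, so it only approximately matches the published number:

    from math import sqrt

    # Comoving ("proper") distance to the CMB at redshift z ~ 1090,
    # integrating dz / E(z) for a flat matter + dark energy universe.
    hubble_length = 300_000 / 21 / 1000  # c / H0 in billions of light years
    omega_m, omega_l = 0.3, 0.7          # assumed density parameters

    def E(z):
        return sqrt(omega_m * (1 + z) ** 3 + omega_l)

    z_max, steps = 1090, 100_000
    dz = z_max / steps
    integral = sum(dz / E((i + 0.5) * dz) for i in range(steps))  # midpoint rule

    print(f"Distance to the CMB: {hubble_length * integral:.0f} billion light years")
    # prints: Distance to the CMB: 46 billion light years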

Crucially, the Hubble sphere radius calculated above is also in proper distance units.  (The radius in light travel time would be around 9 billion light years per Ned Wright’s handy Cosmological Calculator.)

We can use the radius of each sphere to calculate its volume (the volume of a sphere is 4/3 π r³).  The Hubble sphere works out to about 1.15 × 10^31 cubic light years, and the observable universe to about 4.08 × 10^32 cubic light years.  Dividing the first by the second gives 0.0282, or around 3%.  Siegel knew exactly what he was talking about.  (Not that I had any doubt about it.)
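
Here’s the same arithmetic in Python, with the radii in billions of light years:

    from math import pi

    def sphere_volume(r):
        return 4 / 3 * pi * r ** 3

    r_hubble = 14.0      # billions of light years
    r_observable = 46.0  # billions of light years

    fraction = sphere_volume(r_hubble) / sphere_volume(r_observable)
    print(f"Reachable fraction of the observable universe: {fraction:.1%}")
    # prints: Reachable fraction of the observable universe: 2.8%

Since it’s a ratio of volumes, the constants cancel, and the answer is just (14/46) cubed.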

In other words, 97% of the observable universe is already forever out of our reach.  (At least unless someone invents a faster-than-light drive.)

It’s worth noting that, as the universe continues expanding, all galactic clusters will become isolated from each other.  In our case, in 100-150 billion years, the local group of galaxies will become isolated from the rest of the universe.  (By then, the local group will have collapsed into a single elliptical galaxy.)  We’ll still be able to see the rest of the universe, but it will increasingly, over the span of trillions of years, become more redshifted, and bizarrely, more time dilated, until it is no longer detectable.  By that time, there will only be red dwarfs and white dwarfs generating light, so the universe will already be a pretty strange place, at least by our current standards.

If our distant descendants manage to colonize galaxies in other galactic clusters, they will eventually become cut off from one another.  If any information of the surrounding universe survives into those distant ages, it may eventually come to be regarded as mythology, something unverifiable by those civilizations living trillions of years from now.


SMBC: Robot heaven

Click through for full sized version and red button caption.

Source: Saturday Morning Breakfast Cereal

Of course, the upshot is that if you view humans as organic machines, it opens the door to something like robot heaven eventually working for us.  We might someday build heaven.  Indeed, if it should turn out that there is a heaven waiting for us, it might well work similarly to robot heaven.


Michael Graziano: What hard problem?

Michael Graziano has an article at The Atlantic explaining why consciousness is not mysterious.  It’s a fairly short read (about 3 minutes).  I recommend anyone interested in this stuff read it in full.  (I tweeted a link to it last night, but then decided it warranted discussion here.)

The TL;DR is that the hard problem of consciousness is like the 17th century hard problem of white light.  Color, including white, doesn’t exist out in the world, only in our brains.  White light is a mishmash of light of different wavelengths, of every color, that our brains simply translate into what we perceive as white.  Our perception of consciousness is much the same:

This is why we can’t explain how the brain produces consciousness. It’s like explaining how white light gets purified of all colors. The answer is, it doesn’t. Let me be as clear as possible: Consciousness doesn’t happen. It’s a mistaken construct. The computer concludes that it has qualia because that serves as a useful, if simplified, self-model. What we can do as scientists is to explain how the brain constructs information, how it models the world in quirky ways, how it models itself, and how it uses those models to good advantage.

I pretty much agree with everything Graziano says in this article, although I’ve learned that dismissing the hard problem often leads to pointless debates about eliminative reductionism.  Instead, I admit that the hard problem is real for those who are troubled by it.  But like the hard problem of white light, it will never have a solution.

Graziano mentions that there is a strong sentiment that consciousness must be a thing, an energy field, or an exotic state of matter, something other than information.  This sentiment arises from the same place as subjective experience.  It’s a model our brains construct.  It’s that model that gives us that strong feeling.  (Of course, the strong feeling is itself a model.)  When some philosophers and scientists say that “consciousness is an illusion”, what they usually mean is that this idea of consciousness as a separate thing is illusory, not internal experience itself.

Why is this a valid conclusion?  Well, look at the neuroscience and you won’t find any observations that require energy fields or new states of matter.  What you’ll see are neurons signalling to each other across electrical and chemical synapses, supported by a superstructure of glial cells.  You’ll see nerve impulses coming in from the peripheral nervous system, a lot of processing in the neural networks of the brain, and output from this system in the form of nerve impulses going to the motor neurons connected to the muscles.  You’ll see a profoundly complex information processing network, a computational system.

You won’t find any evidence of something else, of an additional energy or separate state of matter, of anything like a ghost in the machine.  Could something like that exist and just not yet be detected?  Sure.  But that can be said of any concept we’d like to be true.  To rationally consider it plausible, we need some objective data that requires, or at least makes probable, its existence.  And there is none.  (At least none that passes scientific scrutiny.)

There’s only the feeling from our internal model.  We already know that model can be wrong about a lot of other things (like white light).  The idea that it can be wrong about its own substance and makeup isn’t a particularly large logical step.

Graziano finishes with a mention of machine consciousness.  I think machine consciousness is definitely possible, and I’m sure someone will eventually build one in a laboratory, but I wonder how useful it would be, at least other than as a proof of concept.  I see no particular requirement that my self-driving car, or just about any autonomous system, have anything like the idiosyncrasies of human consciousness.  It might be a benefit for human interface systems, although even there I tend to think it would add pointless complexity.

Unless I’m missing something?  Am I, or Graziano, missing objective evidence of consciousness being more than information processing?  Are there reasons I’m overlooking to consider our intuitions about consciousness more reliable than our intuitions about colors or other things?  Would there be benefits to conscious machines I’m not seeing?


Which political candidate do your views align with?

I took this political test, which provided the results below.  For the most part, they’re what I expected, although I’m mildly surprised I agreed with Jeb Bush that much.

(Image: my iSideWith results)

I couldn’t imagine what healthcare and science issues I had in common with Ted Cruz, but it turns out he supports the legalization of marijuana, which I find a bit surprising given his courting of evangelicals, and he supports government spending for space exploration, which I guess counts as a science issue we agree on.

Obviously I’d be good with any of the Democratic candidates, and would regard any of the major Republican contenders getting into the White House as a disaster.

How do your views line up?

h/t Political Wire


Gödel’s incompleteness theorems don’t rule out artificial intelligence

I’ve posted a number of times about artificial intelligence, mind uploading, and various related topics.  There are a number of things that can come up in the resulting discussions, one of them being Kurt Gödel’s incompleteness theorems.

The typical line of argument goes something like this: Gödel implies that there are problems no algorithmic system can solve but that humans can, therefore the computational theory of mind is wrong, artificial general intelligence is impossible, and animal, or at least human, minds require some as yet unknown physics, most likely having something to do with quantum wave function collapse (since that remains an intractable mystery in physics).

This idea was made popular by authors like Roger Penrose, a mathematician and theoretical physicist, and Stuart Hameroff, an anesthesiologist.  But it follows earlier speculations from philosopher J.R. Lucas, and from Gödel himself, although Gödel was far more cautious in his views than the later writers.

I’ve historically avoided looking into these arguments for a few reasons.  First, I stink at mathematics and assumed that would get in the way of understanding them.  Second, given all the times reality has stomped over the most careful logical and mathematical deductions of great thinkers, I have a tendency to dismiss any assertions about reality based solely on theorems (other than for mathematical realities).  Finally, these arguments are widely regarded as unsuccessful by most scientists and philosophers.

Still, it does seem to capture the imagination of a lot of people.  Fortunately, it turns out that there’s a lot of material describing the theorems that doesn’t get lost in the technicalities.  One excellent source is this two-part YouTube video by Mark Colyvan describing them, going right to the edge of, but not falling into, the mathematical details.  (These videos add up to about 45 minutes.  You don’t have to watch them to understand this post, which continues below.  I’m just including them for those who want more details.)

There are many concise English statements of the theorems.  Based on what I’ve been able to find out about them, these versions from the Stanford Encyclopedia of Philosophy seem relatively comprehensive.

First incompleteness theorem
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

Second incompleteness theorem
For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself.
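
For those who like their theorems in symbols, here’s a standard schematic rendering in LaTeX notation, where G_F is the Gödel sentence the proof constructs for F, Con(F) is the arithmetized statement that F is consistent, and \nvdash means “does not prove”:

    \[ \text{If } F \text{ is consistent, then } F \nvdash G_F \text{ and } F \nvdash \neg G_F. \]

    \[ \text{If } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F). \]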

All of the sources stress how profound these theorems are for mathematics and logic.  They show that any sufficiently powerful mathematical system is going to have blind spots: statements, called Gödelian sentences, that it cannot logically prove or disprove.  And if mathematics as a whole is taken as a system, they show that there may be mathematical realities which can never be proven or disproven.

A common English analogy of a Gödelian sentence is the Liar’s Paradox:

This sentence is false.

Is the sentence true or false?  If it’s true, then it’s false, but if it’s false, then it’s true.

Another analogy more closely relevant to the theorems is:

This statement cannot be proven.

If the statement is proven, it is thereby false.  If it is not proven…, well, hopefully you get the picture.  The point of this last example is that it’s a statement we can say we know to be true, even if we can’t logically prove it is true.

Okay, fair enough.  But what does this have to do with human minds and artificial intelligence?  Well, a computer program is an algorithm, a mathematical system.  It would seem to follow that such a system will have the same kind of Gödelian blind spots as any other mathematical system.  The idea is that a purely logical system cannot resolve these sentences, but a human mind can.  Therefore, the argument goes, a human mind must be non-algorithmic, or at least some portions of it must be.

I think this argument fails for two broad reasons.  To start, let’s look at the English versions of the theorems again, but with a couple points in each emphasized:

First incompleteness theorem
Any *consistent* formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved *in F*.

Second incompleteness theorem
For any *consistent* system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved *in F itself*.

The first point to consider is that the theorem addresses a system’s ability to prove Gödelian sentences within itself.  Proving such sentences using information from outside the system is reportedly not an issue.  For many statements that we, human minds, can look at and see the truth of, it is quite plausible that we are simply seeing that truth by comparing it with a wide range of patterns from our overall life experiences.

In other words, we are using information from outside the system to see the truth of a Gödelian sentence within that system.  There is nothing that prevents this from being a fully algorithmic process.  The human mind can be a computational system that sees truths in other systems.

Of course, this point doesn’t prevent there being Gödelian sentences within the system that is the human mind.  If the human mind is algorithmic, there may be aspects of it that the mind itself can’t logically prove.

It pays to remember that describing the mind as one system can be a bit misleading.  It’s really a collection of interacting systems.  Each system may well contain its own Gödelian sentences that the other systems can process without trouble.  Still, a collection of systems is itself an overall single system, albeit a profoundly complex one, and it seems likely to have its own Gödelian sentences.

And using outside systems doesn’t solve the issue of considering mathematics as a whole, where there would be no “outside” of the system.  If we regard all mathematics as an overall system, then wouldn’t there be truths that can’t be mathematically proven?  Gödel himself considered this possibility, that there may be mathematical problems which could not be solved, although he thought it was implausible, feeling that there must be an infinite aspect to the human mind to enable solving them.

But here’s where the second point comes in.  Gödel’s theorems apply to consistent systems.  Is the human mind consistent?  It may be consistent in the sense that, given the same sensory perceptions, beliefs, and natural tendencies, it would always arrive at the same answer.  Of course, the same combination of these factors will never repeat itself, making consistency difficult to demonstrate.

Perhaps a better question is, what if we allow for an algorithm that isn’t guaranteed to consistently derive a Gödelian sentence?  Inconsistencies in mathematical proofs make them useless.  Wouldn’t they also make algorithms useless?  Not necessarily.

If an algorithm to solve a problem with certainty is impossible, it doesn’t rule out algorithms that can arrive at probable solutions.  Indeed, when we remember that human intuition is often wrong, it seems quite plausible that a lot of what is happening when we say we “know” the truth of a Gödelian sentence is exactly this type of algorithmic probabilistic reasoning.
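
As a concrete illustration of trading certainty for effectiveness, consider probabilistic primality testing.  The sketch below, in Python, uses the standard Miller–Rabin test: it can occasionally be wrong, but its error rate can be driven arbitrarily low, and in practice it answers the question far faster than methods that demand certainty:

    import random

    def is_probably_prime(n, rounds=20):
        """Miller-Rabin test: may rarely answer 'prime' for a composite,
        but each round cuts that error chance by at least a factor of 4."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:       # write n - 1 as d * 2^r with d odd
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False    # definitely composite
        return True             # almost certainly prime

    print(is_probably_prime(2**127 - 1))  # True (a Mersenne prime)

Each extra round cuts the chance of a false “prime” verdict by at least a factor of four, so twenty rounds make an error astronomically unlikely, without the result ever constituting a proof.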

This type of reasoning can, of course, be wrong.  It frequently is in humans, including in mathematicians who intuitively feel they know a solution they haven’t yet proven.  But if we allow computers to function this way, then Gödel’s theorems seem circumvented.  Indeed, this is an insight that goes all the way back to Alan Turing in 1947:

…I would say that fair play must be given to the machine. Instead of it giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques… In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

So Gödel’s theorems don’t seem to rule out machine intelligence or the computational theory of mind, although they do imply interesting things about how intelligence works.  (As I understand it, artificial intelligence researchers have known about this pretty much from the beginning and incorporated it into their models.)

Unless of course I’m missing something?  Perhaps something that resides in my own Gödelian sentences?


Merry Christmas

To all my online friends, whatever today and tomorrow mean for you, whether it’s a religious observance, family event, or merely a couple days off work, I hope you have a great holiday!

If by any chance it’s not a holiday where you are, then I hope you have a great Thursday evening and Friday.
