Time to dump the concept of cognition?

An interesting paper came up in my Twitter feed.  Neuroscientist Paul Cisek notes that many of our current models of how the mind works come from dualistic traditions, as well as from psychological ones that were heavily influenced by dualism.  He sees the concept of cognition as having largely been created after dualism was abandoned, making up for the missing non-physical aspect of the mind, sitting between perception and action.

In the paper, he proposes remodeling our concepts by studying the evolutionary biology of the brain.  Along those lines, he provides an overview of the functional evolution of the vertebrate brain, from the earliest nervous systems up through the primate lineage.

Central to this view is the idea that living organisms are control systems designed to maintain their state within certain homeostatic parameters.  Behavior should be seen as an elaboration of that control into the environment, control that is in a tight sensorimotor feedback loop.  (There’s a lot of resonance here with Antonio Damasio’s biological value idea.)

There’s a lot in this paper, and it’s fairly technical.  But I also came across a couple of videos of talks where he presents his ideas, although without the evolutionary history analysis.  Here’s the short one.  It’s about 49 minutes.

Some of us were having a discussion about how different regions of the brain talk with each other and the loops that are involved.  This fits right in with Cisek’s ideas.  All behavior is an ongoing loop: a stimulus, a proliferation of possible actions competing until one or two win, followed by the next stimulus.

I’m intrigued by the possibility that the binding problem isn’t really a problem.  The dorsal visual stream, the “where” stream, feeds into action selection, and the ventral stream, the “what” one, is used later by that process.  Object identification becomes a detail of action selection.

All that said, I’m not entirely convinced that Cisek succeeds here in obviating the concept of cognition.  Most of what he’s talking about seems to involve minimal foresight.  Certainly, as he indicates, that is most of what animals do.  And, let’s be honest, it’s a lot of what humans do.

But humans also spend a good amount of time planning, both in the short and longer term.  A lot of the actions he discusses actually happen below the level of consciousness in humans.  Consciousness itself seems to be reserved for dealing with novel situations, where short-term planning is needed, or for longer-term contemplation, such as planning the chess moves he dismisses.

All of which is to say, I’m not sure where imagination or introspection fit in his framework.  It seems to be mostly concerned with in-the-moment decisions.  And within that scope, I don’t know that his view of how this works is all that different from the conventional versions.

Still, the idea of analyzing mental functions by looking at evolutionary history appeals to me.  The recent developments showing that the hippocampus, long understood to be crucial for memory, is actually a navigation system, put the evolution of memory into the context of why early animals needed it.  I suspect many of our “higher order” cognitive functions have similar grounded origins.

h/t How Does It Work @generuso


Platonism and the non-physical

On occasion, I’ve been accused of being closed-minded.  (Shocking, I know.)  Frequently the reason is that I don’t seriously consider non-physical propositions, a perception of rigid physicalism.  However, as I’ve noted before, I’m actually not entirely comfortable with the “physicalist” label (or “materialist”, or other synonyms or near synonyms).  While it’s fairly accurate as to my working assumptions, it doesn’t represent a fundamental commitment.

My actual commitment is empiricism.  By “empiricism” here, I don’t necessarily mean physical measurement, but conscious experience, specifically reproducible or verifiable experience, and inferred theories that can predict future experiences with an accuracy better than alternate theories, or at least better than random chance.  I do generally assume physicalism is true, mainly because many physical propositions seem able to meet this standard, while non-physical ones seem to struggle with it.

But that raises a question.  Are there any non-physical propositions that do meet the standard?  It depends on what we’re willing to consider non-physical.   In the Chalmers post a few weeks ago, I noted that we could interpret his views in a platonic or abstract fashion, in which case the differences between him and a functionalist might collapse into differences in terminology.  Although as I also noted, neither Chalmers nor Dennett would agree.

And this bridge between the views depends on your attitude toward platonism.  Note that “platonism” with a small ‘p’ doesn’t really refer to the philosophy of Plato, but to a modern outlook that regards abstract concepts as real.  This is sometimes described as real in a separate platonic realm, which many misinterpret as meaning a physical existence in a parallel universe or something.

But in modern platonism, abstract objects are held to have no spatio-temporal properties, and to be causally inert.  If they have an existence, it is one completely separate from time and space.  It’s not even right to say they’re “outside” of time and space, because that implies a physical location, something they don’t have.

What are examples of these abstract objects?  Numbers, mathematical relations, properties such as redness, structures, patterns, etc.  Under platonism, these things are held to have a non-physical existence.  For the Chalmers outlook, the property one is important since he often refers to his view as property dualism.

But is platonism true?  One of the strongest arguments for it appears to be the way we talk about abstract objects.  We refer to concepts like “7” as though they have an existence separate and apart from a pattern of seven objects.  We refer to structures and properties in much the same way.  The fact that we can discuss “redness” coherently seems to imply we accept that property as having an independent existence.

But this assumes that analyzing language is in any way meaningful for what’s real.  At best, it might just show our intuitions, intuitions we might not even believe.  For instance, we refer to things like the sun “rising” and “setting” all the time without seriously thinking that the sun is moving around us (at least since Copernicus and Galileo).  It might be that all this usage should be viewed as metaphorical, and abstract objects as “useful fictions”.

But the dividing line between a useful fiction and a real concept seems like a blurry one.  The more useful a concept is, particularly one useful in an epistemic fashion, the harder it seems to dismiss as a fiction.  We reach a point where we have to invest a lot of energy in explaining why it’s not real.

That said, a strong case against platonism is also an epistemic one.  If minds exist in this universe, and abstract objects exist without any spatio-temporal aspects, and are causally inert, how can we know about them?  We could say the mind is capable of accessing abstract objects, but this implies something super-physical about it.  The relevant physics appear to be causally closed, and this proposition wouldn’t meet the empiricism criteria above.

The more usual defense is that we infer the existence of abstract objects by what we observe in the physical world, by the patterns and relations we see there.  But if that’s how we come to know about abstract objects, why do we actually need the separate abstract objects themselves?  Why can’t we just get by with the models in our mind and the physical patterns they’re based on?

This last point has long been what makes me leery of platonism.  A ruthless application of Occam’s razor seems to make it disappear in a flash of parsimony.  It doesn’t seem necessary.  And given how far some people have tried to run with it, this seems important.

All that said, this is a case where I’m not confident in my conclusion, at least not yet.  I still wonder if its pragmatic value might not imply ontology.  Everything in physics above the level of fundamental forces and quantum fields seems to exist as structure and function, a pattern of lower level constituents.

In many cases, these structures and functions, such as wings or the shape of fish, seem convergent.  These convergences could be seen as implying that the converged structure has an independent reality.  Of course, these are optimal energy structures that emerge from the laws of physics, but then do the laws themselves have an independent reality aside from the physical patterns and regularities?  Are they themselves abstract entities?

And the fact that large portions of the mathematics profession are mathematical platonists gives me pause.  Mathematicians seem convinced that they’re discovering something, not developing tools in some nominalist sense, although the dividing line between invention and discovery itself seems pretty blurry.

If platonism is true, then we have a non-physical reality, and properties such as consciousness (the property of being conscious) could be said to exist non-physically in a platonic sense.  To be sure, this is a far more limited sense of non-physical than many advocates of dualism envision.

Interestingly, Chalmers himself does not appear to be a platonist; he seems to consider the question of the existence of abstract objects to have no fact-of-the-matter answer, espousing a view called ontological anti-realism.  Given my own instrumentalist leanings, I may have to investigate this view.  But it also implies my attempt at steel-manning his argument is probably fruitless.

What do you think?  Do you see other arguments for platonism?  Or against?  Or is the whole thing just hopeless navel-gazing?


Recommendation: The Warship

Back in January, I recommended Neal Asher’s The Soldier, the first book of a series called The Rise of the Jain.  The series takes place in Asher’s Polity universe, a future interstellar civilization run by AIs (artificial intelligence) and featuring androids, various degrees of posthuman citizens, and lots of aliens, both in AI and organic forms.

Throughout the Polity and surrounding regions of space, the remains of an alien civilization, named the Jain, are often found.  Jain technology is far in advance of anything the Polity has, but the technology is never what it seems.  It is always a trap, frequently destroying anyone who tries to make use of it.  As a result, most of it is sequestered for safekeeping.

But there is a concentration of Jain technology in an accretion disk, apparently a developing solar system, at a location between the Polity and its longtime enemy, the Prador Kingdom, although relations with the Prador remain tentatively peaceful.  This disk is guarded by a human-AI hybrid named Orlandine and her fleet of AI-controlled battle platforms.

The Warship is the second book in the series.  It’s difficult to describe much about it without getting into spoilers.  I’ll just note that the situation in the first book intensifies, with most of the action taking place around the accretion disk and the nearby planet of Jaskor, Orlandine’s base of operations.

We learn new things about many of the characters from the first book and meet new characters, including humans, AI, and Prador.  The action in this book begins early and moves at a good clip throughout the whole story.  There are battles, both on and under the surface of Jaskor, as well as epic space battles at the accretion disk.

Asher teases us a bit about how the event implied by the series title will come about.  We saw one possibility in the first book, and others are introduced.  But by the end of this book, we learn which one the title refers to.

As always, Asher excels at putting us in the viewpoint of utterly alien characters, exploring the workings of their minds, and mixing technological descriptions with battle tactics.  As I’ve noted before, Asher’s writing is a type of mind candy for people who enjoy futuristic science, technology, biology, and other concepts mixed with space opera ones.

That said, this isn’t the hardest science fiction around by a long stretch.  FTL (faster than light), anti-gravity, and many other magical technologies are liberally thrown around in the story.  But it’s also matched with excellent speculation about the way an alien species’ biology influences its philosophies.

So if epic space opera is your cup of tea, highly recommended, although only after reading The Soldier.


Apollo 11 and the lost space age

Buzz Aldrin on the moon.
Image credit: NASA via Wikipedia

I was very young when Neil Armstrong and Buzz Aldrin landed on the moon in 1969, so I have no memory of the landing, and only limited memory of the Apollo program in general.  I think I remember seeing some of Apollo 17, the final flight to the moon, on TV in 1972.  (At the time, my six-year-old self wondered why a huge rocket left and only a tiny capsule came back.)

For three years, we had men walking on another world.  Shortly after that, we had a space station in orbit (Skylab), and the space shuttle was rumored to be around the corner.  There was a sense that we were on the verge of a new age.  The movie 2001: A Space Odyssey conveyed a future of rotating space stations in orbit, with hotels and restaurants, and large-scale bases on the moon, with regularly scheduled flights between it all.  In the early 1970s, this vision of the future seemed inevitable.

History has obviously not been kind to that vision.  Skylab had a lot of problems, the space shuttle took several more years to become operational, and the shuttle was never the economical vehicle, with flights every two weeks, that it was sold to be.  Indeed, the space shuttle is now generally regarded in the space industry as having been a gigantic waste of time and money.

In the early 1970s, the sentiment was that we’d be on Mars by the mid to late 1980s.  Over the years, Mars has steadily been moved back.  In the 1990s, I remember reading that it would be in this decade.  Today we talk about Mars in the 2030s.  It always seems to be 20 years in the future.

From the late 70s until the early 2000s, I had the most common attitude of space enthusiasts, that NASA after Apollo was hopelessly incompetent and simply lacked vision.  As time went on, I came to recognize that their budget, adjusted for inflation, was nothing like it had been in the Apollo years.  I then became frustrated that space was not a priority of the government.

In retrospect, despite being an incredible technological, organizational, and heroic achievement, it’s now clear to me that the Apollo program was also a gigantic cold war public relations project.  We went to the moon to get there before the Russians, primarily because we were upset that they’d gotten to space first.  Everything was orchestrated to put a man on the moon with an American flag behind him.  Apollo 11 was the culmination of an effort that had lasted throughout the 1960s.

This is exemplified by the fact that we went to the moon, but did so with very little thought to building any kind of infrastructure to stay there.  Apollo accomplished its main goal, a demonstration of American technological supremacy, the superiority of capitalism over communism.  From that perspective, once the goal was accomplished, the collapse of funding in the 1970s seems inevitable.

I’ve often wondered what would be needed to spark the space age vision of 2001.  We see some of the answer in the movie itself, which presents companies like Pan Am, Hilton, Howard Johnson, and the original AT&T, titans of the 1960s, operating businesses in space.  The implication is that space is not only economical, but profitable.

However, space remains far from economical.  Getting material into Earth orbit is appallingly expensive.  Relatively new companies like SpaceX are attempting to reduce the costs, but even with those reductions, they remain staggering.  And there’s no solution in sight that would reduce them by the orders of magnitude necessary for hotels and restaurants in space, at least ones accessible to the middle class.

What space lacks is a strong economic incentive for governments and industry to make the huge investments necessary to operate in it.  There’s often a lot of talk about the spirit of exploration and comparison with the “Age of Discovery” (more like the age of conquest for non-Europeans).  But what’s often missing from those comparisons is what actually motivated rulers like Henry the Navigator and Isabella of Castile to fund exploration missions: economics, namely the promise of finding a route around the Ottoman Empire to the spice islands and other riches in Asia.

Men risked their lives and governments funded them because of the substantial economic benefits, the riches, that could be attained.  Yes, finding the fabled Prester John’s kingdom, spreading Christianity, and general exploration were also goals, but it’s doubtful anyone would have funded the missions on just those objectives.

Space exploration needs its own version of the spice trade.  Many see possibilities in asteroid mining, but it remains a speculative proposition, and the cost to get out there and know whether it would be profitable is a major obstacle.  Whatever the economic impetus might turn out to be, until it’s found, the large scale space age often envisioned in science fiction will continue to be only an aspiration.

It might be that technological advances, such as more efficient propulsion methods, will eventually make things cheap enough to at least put crewed scientific stations on Mars and other locations around the solar system, although if artificial intelligence continues to advance, the benefits of risking humans in these locations might remain a dubious proposition.

While humans, except for those three brief years, have generally remained in low Earth orbit, robots have explored the solar system, and there are now multiple craft on their way into interstellar space.  Space belongs first and foremost to the robots.  It seems clear they will always be the pioneers.  (Which isn’t decadence on our part.  15th-century explorers, rather than risk their lives, would have sent robots in their place if they’d had them.)  The only real question is to what degree humans will follow in their wake.

What do you think?  Are there economic incentives other than mining?  Or some other motivation that might drive humans out into the solar system?


The difficulty of subjective experience

As I indicated in the Chalmers post last week, phenomenal consciousness has been on my mind lately.  In the last few days, a couple of my fellow bloggers, Wyrd Smythe and James Cross, have joined in with their own posts.  We’ve had a lot of interesting discussions.  But it always comes back to the core issue.

Why or how do physical systems produce conscious experience, phenomenality, subjectivity?  Why is it “like something” to be certain types of systems?

On this blog, when writing about the mind, I tend to focus on the scientific investigation of the brain.  I continue to believe that the best insights into the questions above come from exploring how the brain processes sensory information and produces behavior.

For example, we got into a discussion on the Chalmers post about color perception and its relation to the different types of retinal cones, processing in the retinal layers, and processing in the brain.  These dynamics don’t yet provide a full accounting of the experience of color, but they do provide insights.

But while these kinds of explorations do narrow the explanatory gap, and I think they’ll continue to narrow it, they’ll probably always fail to close it completely.  The reason is that the subjective never completely reduces to the objective, nor the objective to the subjective.

In the case of the subjective, we can only ever reduce phenomenal properties so far, say down to the individual quale or quality, like redness.  These qualities are like a bit in computer software.  A bit, a binary digit, is pretty much the minimal concept within the software realm, being either 1 or 0, true or false.  No matter what the software does, it can’t reduce any further.

On the other hand, looking from the outside, a bit maps to a transistor (or equivalent hardware) which absolutely can be reduced further.  Software can have a model of how a transistor works, but it can’t access the details of the transistors it is currently running on, at least not with standard hardware.
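
To make the analogy concrete, here’s a toy Python sketch, entirely my own construction (the class names and the 0.7 volt threshold are made up for illustration).  From the inside, the software-level view exposes only the bit; from the outside, the same bit maps onto hardware state that can be reduced further:

```python
# Toy illustration of the reduction asymmetry (all names hypothetical).
# From the inside, a bit is just 0 or 1; from the outside, the same bit
# maps to hardware state that can be reduced further.

class Transistor:
    """Outside view: the physical state a bit maps onto."""
    def __init__(self, gate_voltage: float):
        self.gate_voltage = gate_voltage  # reducible further: electrons, fields...

    def as_bit(self) -> int:
        # Inside view: all the software-level description keeps is the
        # threshold result, nothing about the substrate.
        return 1 if self.gate_voltage > 0.7 else 0

t = Transistor(gate_voltage=1.1)
print(t.as_bit())        # the software-level fact: 1
print(t.gate_voltage)    # the outside view: substrate detail the bit omits
```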

Similarly, once we’ve reduced experience to fundamental phenomenal properties, we can’t reduce any further, at least not subjectively, no matter how hard we introspect.  But looking from the outside in, these phenomenal properties, these qualia, can be mapped to neural correlates, which can then be further reduced.

Of course, most phenomenal primitives are much more complicated than a bit, and will involve far more complex mechanisms.  Some philosophers argue that we’ll never know why those complex correlates map to that particular subjective quality.  But I think that gives up too quickly.  The fact is that different experiences will have many overlapping neural correlates.  The intersections and divergences can be analyzed for whatever functional commonalities they show or fail to show among phenomenal primitives, enabling us to get ever tighter correlations.
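
As a toy illustration of that kind of analysis (the regions and associations here are invented purely to show the shape of the method), the intersections and divergences are just set operations:

```python
# Hypothetical sketch: compare neural correlates across experiences by
# intersecting and differencing the regions active during each report.
# The region assignments below are made up for illustration.
correlates = {
    "red":    {"V1", "V4", "pulvinar"},
    "green":  {"V1", "V4", "LGN"},
    "motion": {"V1", "V5", "pulvinar"},
}

# What the color experiences share that the motion experience lacks:
color_common = correlates["red"] & correlates["green"]
print(color_common - correlates["motion"])      # {'V4'} -> candidate color correlate

# What distinguishes red from green within that common core:
print(correlates["red"] ^ correlates["green"])  # divergences to examine next
```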

Something like this has already been happening for a long time, with neurologists mapping changes in patient capabilities and reports of experience to injuries in the brain, injuries eventually uncovered in postmortem examinations.  Long before brain scans came along, neurologists were learning about the functions of brain regions in this manner.  Increasingly higher resolutions of brain scans will continue to narrow these correlations.

But as people have pointed out to me, all we’ll then have are correlations.  We won’t have causation.  Which is true.  Causation will always have to be inferred.  However, that is not unique to this situation.  In empirical observation, correlation is all we ever get.  As David Hume pointed out, we never observe causation.  Ever.  All we ever observe is correlation.  Causation is always a theory, always something we infer from observed correlations.

Nonetheless, the explanations we’re able to derive from these correlations will never feel viscerally right to many people.  The problem is that our intuitive model of the mental is very different from our model of the brain.  And unlike the rest of the body, we don’t have sensory neurons in the brain itself to help bridge the gap.  So the intuitive gap will remain.

Similarly, in the case of the objective, we can never look at a system from the outside and access its subjective experience.  The objective cannot be reduced to the subjective.  As Thomas Nagel pointed out, we can never know what it’s like to be a bat.  We can learn as much as we want about its nervous system, and infer what its experience might be like, but it will always be from the perspective of an outsider.

So the subjective / objective gap can’t be closed completely.  But it can be clarified and bridged enough to form scientific theories.

But, some will say, none of this answers the “why”.  Why does all this neural processing come with experience?  Why doesn’t it just happen “in the dark”?  Why aren’t we all philosophical zombies, beings physically identical to conscious ones, but with no experience?

I think the best way to answer this is to ask what it would actually mean to be such a zombie.  Obviously when we say “in the dark” we don’t mean that it would be blind.  But what do we mean exactly?

Such a system would still need to receive sensory information and integrate it into exteroceptive and interoceptive models.  That activity would still have to trigger basic primal reactions.  The models and reactions would have to be incorporated into action scenario simulations.  And to mimic discussions of conscious experience, such a system would need some level of access to its models, reactions, and simulations.

In other words, if it didn’t have experience, it would need something very similar to it.  We might call it pseudo-experience.  But the question will be, what is the difference between experience and pseudo-experience?

The contents of experience appear to have physical causes and seem crucial to many capabilities.  That makes experience functional and adaptive in evolutionary terms.  In the end, I think that’s the main why.  We have experience because it’s adaptive.

But the intuitive gap will remain.  Although like the intuitive gap between Earth and the planets, between humans and animals, or between life and chemistry, I think it will diminish as science makes steady progress in spite of it.

Unless of course, I’m missing something?


Consciousness science underdetermined

An interesting paper by Matthias Michel on the underdetermined nature of theories of consciousness.

Consciousness scientists have not reached consensus on two of the most central questions in their field: first, on whether consciousness overflows reportability; second, on the physical basis of consciousness. I review the scientific literature of the 19th century to provide evidence that disagreement on these questions has been a feature of the scientific study of consciousness for a long time. Based on this historical review, I hypothesize that a unifying explanation of disagreement on these questions, up to this day, is that scientific theories of consciousness are underdetermined by the evidence, namely, that they can be preserved “come what may” in front of (seemingly) disconfirming evidence. Consciousness scientists may have to find a way of solving the persistent underdetermination of theories of consciousness to make further progress.

Michel looks at scientific thought on consciousness in the 19th century.  Interestingly, many of the same debates we have today raged on back then, with people arguing about definitions of consciousness, eschewing metaphysical issues for cognitively accessible ones, and whether consciousness resides in the cortex, the thalamus, midbrain, or somewhere else.

Apparently some scientists in the 19th century thought consciousness might reside in the spinal cord.  Experiments on animals, such as surgically decapitating frogs while keeping the body alive, showed them still capable of complex motor responses, such as a frog’s body attempting to rub acid off its thigh with its foot, inviting many to speculate that they retained a “feeling consciousness”.  Glancing at the history of spinal cord injuries, it seems scientists back then didn’t have many, if any, live patients with severed spinal cords as a data point.

Another debate that goes back to that period is whether consciousness “overflows” reportability.  In other words, are there aspects of consciousness that can’t be self reported?  We saw an example of this idea recently with Ned Block’s contention that phenomenal consciousness holds detailed images that can’t be accessed for self report.

Michel’s main thesis is how difficult it is to assess many theories of consciousness.  Many live on after seemingly being falsified.  An example is IIT (Integrated Information Theory) after it was shown (by Scott Aaronson and others) to indicate consciousness in systems that give no indication of being conscious.  Giulio Tononi simply bit the bullet and asserted that those systems were in fact conscious.

The issue, Michel points out, is standards of detection, such as self report or appropriate behavior.  If someone has a theory of consciousness, and it indicates a particular system is conscious, but that system fails a detection, a proponent of that theory can often simply discount that method of detection.

Thus Block discounts self report as a valid detection standard when it fails to capture the contents he claims are in phenomenal consciousness, and Tononi discounts, well, apparently all methods of detection, when he asserts a set of inactive logic gates is conscious.

That’s not to say that some detection methods shouldn’t be challenged.  Some scientists assert that fish cannot be conscious because they lack a cortex.  But no invertebrates have a forebrain, much less a cortex, yet many display complex behaviors indicating some level of consciousness.  So using the absence of a particular structure, just because it’s implicated in mammalian consciousness, doesn’t seem justified.

It seems to me that, for healthy humans, self report should be the gold standard of detection.  Once we know that a particular activity (behavioral or brain scan activity) is associated with self report in humans, we can then test for similar activity in injured humans or non-human animals.  If someone can’t cite a chain of evidence back to self report, we should be skeptical.
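
Here’s a minimal sketch of what I mean, with an entirely hypothetical validation graph (none of these marker names come from the literature).  A proposed detection method earns credibility only if its chain of validation terminates at self report:

```python
# Hypothetical sketch: a detection method is credible if its validation
# chain traces back to self report in healthy humans.
validated_by = {
    "frontoparietal_activity": "self_report",   # established in healthy subjects
    "injured_patient_marker":  "frontoparietal_activity",
    "animal_marker":           "injured_patient_marker",
    "phi_from_logic_gates":    None,            # no chain of evidence offered
}

def traces_to_self_report(method: str) -> bool:
    seen = set()
    while method and method not in seen:   # guard against circular chains
        if method == "self_report":
            return True
        seen.add(method)
        method = validated_by.get(method)
    return False

print(traces_to_self_report("animal_marker"))         # True
print(traces_to_self_report("phi_from_logic_gates"))  # False -> be skeptical
```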

But the detection issues seem like a consequence of a deeper issue, the lack of consensus on what consciousness is.  This isn’t so much about a lack of understanding of what it is, as much as trouble even agreeing what we’re talking about, except in the most vague manners in which it can be expressed.

This is why I sometimes wonder if science isn’t better off focusing on specific capabilities, such as perception, memory, metacognition, etc, and leaving consciousness itself to the philosophers.  But maybe it’s enough as long as scientific theories are clear which type of consciousness, or which aspects of it they’re addressing.


Chalmers’ theory of consciousness

Ever since sharing Ned Block’s talk on it, phenomenal consciousness has been on my mind.  This week, I decided I needed to go back to the main spokesperson for the issue of subjective experience, David Chalmers, and his seminal paper Facing Up to the Problem of Consciousness.

I have to admit I’ve skimmed this paper numerous times, but always struggled after the main thesis.  This time I soldiered on in a more focused manner, and was surprised by how much I agreed with him on many points.

Chalmers starts off by acknowledging the scientifically approachable aspects of the problem.

The easy problems of consciousness include those of explaining the following phenomena:

  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

But his main thesis is this point.

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

My usual reaction to this is something like, “You’re holding up two puzzle pieces that fit together.  Everything you need is in what you call the ‘easy problems’!”  In Chalmers’ view, this puts me into a group he labels type-A materialists, a group including people like Daniel Dennett and Patricia and Paul Churchland.

The distinction between the two viewpoints is best exemplified by remarks Chalmers makes in his response paper to the many commentaries on the Facing paper.  Daniel Dennett in particular gets singled out a lot.

Dennett’s argument here, interestingly enough, is an appeal to phenomenology. He examines his own phenomenology, and tells us that he finds nothing other than functions that need explaining. The manifest phenomena that need explaining are his reactions and his abilities; nothing else even presents itself as needing to be explained.

This is daringly close to a simple denial – 

(Note: Dennett’s commentary on Chalmers’ paper is online.)

However, Chalmers later makes this admission:

Dennett might respond that I, equally, do not give arguments for the position that something more than functions needs to be explained. And there would be some justice here: while I do argue at length for my conclusions, all these arguments take the existence of consciousness for granted, where the relevant concept of consciousness is explicitly distinguished from functional concepts such as discrimination, integration, reaction, and report.

Here we have a divide between two camps, one represented by Chalmers, the other by Dennett, staring at each other across a gap of seemingly mutual incomprehension.  One camp sees something inescapably non-functional that needs to be explained, the other sees everything plausibly explainable in functional terms.  Both camps seem convinced that the other is missing something, or maybe even in denial.

Speaking from the functionalist camp, I will readily admit that I do feel the profound nature of subjectivity, of the fact we exist and experience reality with a viewpoint.  I don’t feel like an information processing system, a control center for an animal.  I feel like something more.  The sense that there has to be something in addition to mere functionality is very powerful.

The difference, I think, is that functionalists don’t trust this intuition.  It seems like something an intelligent social animal concerned with its survival and actualization might intuit for adaptive motivational (functional) reasons.  And it seems to resonate with many other intuitions that science has forced us to discard, like the sense that we’re the center of the universe, that we’re separate from and above nature, that time and space are absolute, or many others.

But are we right to dismiss the intuition?  Maybe the mind is different.  Maybe there is something here that normal scientific investigation won’t be able to resolve.  After all, we only ever have access to our own subjective experience.  Everything beyond that is theory.  Maybe we’re letting those theories cause us to deny the more primal reality.

Perhaps.  In the end, all we can do is build theories about reality and see which ones eventually turn out to be more predictive.

Anyway, as I mentioned above, I’ve always struggled with the paper after this point, generally shifting to skim mode.   This time, determined to grasp Chalmers’ viewpoint, I soldiered on, and got the surprises I mentioned.

First, Chalmers, while being the one to coin the hard problem of consciousness, does not see it as unsolvable.  He’s not one of those who simply say “hard problem”, fold their arms, and stop.  He spends time discussing what he thinks a successful theory might look like.

In his view, experience is unavoidably irreducible.  Therefore, any theory about it would likely look like a fundamental one, similar to fundamental scientific theories that involve spin, electric charge, or spacetime, while accepting these concepts as brute fact.  In other words, a theory of conscious experience might look more like a theory of physics than a biological, neurological, or computational one.

Such a theory would be built on what he calls psychophysical principles or laws.  This could be viewed as either expanding our ontology into a super-physical realm, or expanding physics to incorporate the principles.

But what most surprised me is that Chalmers took a shot at an outline of a theory, and it’s one that, at an instrumental level, is actually compatible with my own views.

His theory outline has three components (with increasing levels of controversy).

The principle of structural coherence.  This is a recognition that the contents of experience and functionality intimately “cohere” with each other.  In other words, the contents of experience have neural correlates, even if experience in and of itself isn’t entailed by them.  Neuroscience matters.

The principle of organizational invariance.  From the paper:

This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise.

This puts Chalmers on board with artificial intelligence and mind copying.  He’s not a biological exceptionalist.
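
To make the principle concrete, here’s a toy Python sketch of my own (not from Chalmers), in which the same fine-grained organization, a tiny threshold-unit network defined by its weights and update rule, is realized on two different “substrates” and produces identical behavior:

```python
# Toy sketch of organizational invariance: the same fine-grained causal
# organization (a tiny threshold network), realized on two different
# "substrates", produces identical input/output behavior.
def step(weights, state, inputs):
    # One update of the network; the organization is the weights and the
    # update rule, not what the numbers happen to be stored in.
    return tuple(
        1 if sum(w * s for w, s in zip(row, state + inputs)) > 0 else 0
        for row in weights
    )

# "Neural" realization: plain Python ints
print(step([(1, -1, 1), (-1, 1, 1)], (1, 0), (1,)))

# "Silicon" realization: same organization, different substrate (floats)
print(step([(1.0, -1.0, 1.0), (-1.0, 1.0, 1.0)], (1.0, 0.0), (1.0,)))
```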

The double-aspect theory of information.  This is the heart of it, and the part Chalmers feels the least confident about.  From the paper:

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing.

In other words, the contents of conscious experience are built from functional neural processing, functionality which is multiply realizable, and experience itself is rooted in the properties of information.

There are two other aspects of this theory that are worth mentioning.  First, note the “information (or at least some information)” phrase.  This shows Chalmers’ attraction to panpsychism.

Honestly, if I had the conviction that the existence of experience was inherently unexplainable in terms of normal physics, panpsychism would be appealing.  Seeing nascent experience everywhere, but concentrated by certain functionality, frees someone with this view from having to find any special physics or magic in the brain, providing a reconciliation with mainstream neuroscience.  Indeed, under Chalmers’ principles, it’s actually instrumentally equivalent to functionalism.

The other aspect worth mentioning is the danger of epiphenomenalism, the implication that experience is something with no causal power, which would be strange since we’re discussing it.  Chalmers acknowledges this in the response paper.  If the physics are causally closed, where does experience get a chance to make a difference?

Chalmers notes that physics explores things in an extrinsic fashion, in terms of the relations between things, not intrinsically, in terms of the things in and of themselves.  In other words, we don’t know fundamentally what matter and energy are.  Maybe their intrinsic essence includes an incipient experiential aspect that contributes to their causal effects.  If so, it might allow his theory to avoid the epiphenomenalism trap.  (Philip Goff more recently discussed this concept.)

To be clear, Chalmers’ theory outline carries metaphysical commitments a functionalist doesn’t need.  However, that aside, I’m surprised by how close it is to my own views.  I have no problem with his first two principles (at least other than the limitation he puts on the first one).

The main difference is in the third component.  I see phenomenal properties as physical information, and phenomenal experience overall as physical information processes, without any need to explicitly invoke a fundamental experiential aspect of information.  In my mind, experience is delivered by the processing, but again that’s the functionalist perspective.  The thing is, the practical results from both views end up being the same.

So in strictly instrumental terms, my views and Chalmers’ are actually in alignment.  We both turn to neuroscience for the contents of consciousness, and both of us accept the possibility of machine intelligence and mind copying.  And information is central to both views.  The result is that we’re going to make very similar, if not identical, predictions, at least in terms of observations.

Overall then, my impression is that while Chalmers is convinced there is something in addition to the physics going on, at least known physics, he reconciles that view with science.  Indeed, if we interpret the non-physical aspects of his theory in a platonic or abstract manner, the differences between his views and functionalism could be said to collapse into language preferences.  Not that I expect Chalmers or Dennett to see it this way.

What do you think?  Am I being too easy on Chalmers?  Or too skeptical?  Or still not understanding the basic problem of experience?  What should we think about Chalmers’ naturalistic and law driven dualism?


To perceive is to predict

Daniel Yon has an interesting piece at Aeon on how our brains predict the outcomes of our actions, shaping reality into what we expect, and why we see what we believe, rather than the other way around.

This idea is part of a growing sentiment in the cognitive science community that prediction is at the heart of what brains bring to the picture.  The thing to understand is that this isn’t about conscious prediction (although that’s part of it) but about pre-conscious predictions that enter into our awareness as perceptions.  Put another way, perceptions are predictions.

This can be a little clearer if we step back and take an evolutionary perspective, considering the nervous system of an early chordate worm.  Such a creature has a nerve cord, a central nerve running along its length, where its sensory data is integrated and which serves as a central pattern generator, its main source of motor action.

This creature endogenously generates rhythmic movement and responds to stimuli with fixed action patterns, meaning that its behavior is largely set by its genetics.  Although it’s capable of some classical conditioning, its genetic programming doesn’t provide much flexibility.

In time, the creature’s descendants will develop sensory apparatus to do things like detect light, vibrations, and low concentrations of chemicals.  What will these capabilities provide to those descendants that the early creature lacks?  We can’t reference their eventual evolution into sight, hearing, and smell, because natural selection doesn’t act with foresight.  Every mutation has to be adaptive if it’s going to propagate and be enhanced.

The answer is prediction.  Initially these predictions were very simple.  Sensory data simply provided earlier triggering of reflexes that might previously not have been triggered until the organism came into direct contact with the stimulus.  The early reactions provided a survival advantage.  But over time, as the sensory data increased in resolution, the predictions became more detailed, until we get a fish that can see and flee from a predator before the predator has a chance to bite into it.

As we follow evolutionary history, the predictions become progressively more sophisticated, until we arrive at us predicting not only our spatial and temporal environment, but social situations, as well as our own actions.

As Yon describes in his article, perception being prediction means that sometimes we mis-perceive, that is, we predict wrong.  Being that the prediction is pre-conscious, it often isn’t something we can guard against.  We can only be sensitive to the error signal when it arrives.  Put another way, people’s worldview has a powerful effect on what they perceive.
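
A minimal sketch of the dynamic (the numbers and the error weighting are mine, purely illustrative, not from Yon’s article): the percept is the prior prediction nudged by the error signal, so expectation shapes what is perceived:

```python
# Minimal sketch of perception-as-prediction: the percept is the prior
# prediction, corrected by a weighted error signal when input arrives.
def perceive(prediction: float, sensory_input: float,
             error_weight: float = 0.3) -> float:
    error = sensory_input - prediction        # the error signal
    return prediction + error_weight * error  # percept leans toward the prior

percept = 10.0  # prior expectation (worldview)
for observation in [12.0, 12.0, 12.0]:
    percept = perceive(percept, observation)
    print(round(percept, 2))  # 10.6, 11.02, 11.31: drifts toward 12,
                              # but the prior shapes every step
```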

This is one of the reasons why all observation in science is theory laden.  We can attempt to make pre-theory observations, as some philosophers have urged, but ultimately our perceptions come to us embedded in our existing beliefs.  All we can do is see how accurate, or not, our predictions are, and be open to revising our beliefs when those predictions fail.

I think the prediction paradigm has a lot of power, although I do sometimes worry that maybe it’s getting overused to describe everything about the brain.  The question is, are there counter-examples out there?  Or any other data that can’t fit into it?  Is the brain basically reflexes plus predictions?


Rule out plant consciousness for the right reasons

In recent years, there’s been a resurgence in the old romantic sentiment that maybe plants are conscious.  I hadn’t realized that an entire sub-field had formed called Plant Neurobiology, the name itself incorporating a dubious claim that plants have neurons.  Although the field was later renamed to the more cautious Plant Signalling and Behavior, it’s reportedly still popularly known by the original provocative title.

Apparently a number of biologists have had enough and published a paper in the journal Trends in Plant Science making the case that plants neither possess nor require consciousness.  From the abstract:

  • Although ‘plant neurobiologists’ have claimed that plants possess many of the same mental features as animals, such as consciousness, cognition, intentionality, emotions, and the ability to feel pain, the evidence for these abilities in plants is highly problematical.
  • Proponents of plant consciousness have consistently glossed over the unique and remarkable degree of structural, organizational, and functional complexity that the animal brain had to evolve before consciousness could emerge.
  • Recent results of neuroscientist Todd E. Feinberg and evolutionary biologist Jon M. Mallatt on the minimum brain structures and functions required for consciousness in animals have implications for plants.
  • Their findings make it extremely unlikely that plants, lacking any anatomical structures remotely comparable to the complexity of the threshold brain, possess consciousness.

The paper is not technical and is fairly easy and interesting reading, although there are numerous summaries in the news.  It makes some good points about the metabolic expenses of consciousness, and that plants are simply not in an ecological position where paying that energy price is adaptive for them.

Those of you who’ve followed me for a while might recognize the names Todd Feinberg and Jon Mallatt, as I’ve highlighted their work several times.  Their books have been a major influence on my views, so it makes a lot of sense to me that their work would be discussed in this context.

But while they are a major influence, I don’t buy all of their propositions.  In particular, I’m not comfortable with their neurobiological essentialism, or this paper citing it as the major driving force for rejecting plant consciousness.  Feinberg and Mallatt understandably hold this position because the only place consciousness has been conclusively observed is in such systems.

But evolution has repeatedly shown itself capable of finding alternate solutions to problems.  In the case of nervous systems, their low level functionality is actually a re-implementation of functionality that already existed in unicellular organisms.  So it seems to me that we should be open to alternate implementations of the high level functionality.

And it’s worth noting that Feinberg and Mallatt’s “structural, organizational, and functional complexity” criteria were developed by looking at vertebrate nervous systems (fish, reptiles, mammals, birds).  The invertebrates they admit into club-consciousness, arthropods (insects, crabs) and cephalopods (octopuses, squids), make it in based on their sense organs and behavior, that is, their observed capabilities.

I think that’s how we should assess the proposition of plant consciousness, cognition, intelligence, etc, by their observed capabilities.  Doing so leaves us open to alternative possibilities, while also avoiding animal chauvinism.

Here I’ll pull out my own mental crutch: the hierarchy of consciousness.  Each layer builds on the previous one.

  1. Reflexes: automatic reactions to stimuli, fixed action patterns determined by genetics (or programming), although modifiable by local classical conditioning.  Some will insist that these action patterns be biologically adaptive; if so, then this and the layers above are inherently biological.
  2. Perception: predictive models of the environment built on information from distance senses (sight, hearing, smell), expanding the scope of what the reflexes can react to, enabling reaction prior to a direct somatic or chemoreceptive encounter.
  3. Attention: prioritization of which perceptions the reflexes are reacting to.  Attention can be bottom-up, essentially reflexes about reflexes (meta-reflexes), or top-down, driven by layers 4 and 5.
  4. Imagination & Sentience: simulations and assessment of possible action scenarios.  Based on the results of the simulations, individual reflexes are either allowed or inhibited, decoupling the reflexes, turning them into motivational states (affects) rather than automatic reactions.
  5. Metacognition: A feedback mechanism for assessing the system’s own cognitive states, particularly the reliability of beliefs.  An advanced recursive form of this enables symbolic thought including language, art, mathematics, etc.

Layers 1-4 make up what is commonly referred to as primary consciousness.  Based on all the evidence I’ve read, plants are mostly in layer 1, with perhaps some limited layer 2 abilities.  I haven’t seen anything implying layers 3 or higher.
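
As a rough sketch of that capability-based assessment (toy data, reflecting my reading of the evidence above rather than any particular study), we can place an organism by the highest contiguous layer it demonstrates:

```python
# Rough sketch: place an organism in the hierarchy by its observed
# capabilities.  The capability sets below are illustrative only.
LAYERS = ["reflexes", "perception", "attention", "imagination", "metacognition"]

def highest_layer(capabilities: set) -> int:
    level = 0
    for i, layer in enumerate(LAYERS, start=1):
        if layer in capabilities:
            level = i
        else:
            break  # each layer builds on the previous one
    return level

print(highest_layer({"reflexes", "perception"}))          # plant: 2 at best
print(highest_layer({"reflexes", "perception",
                     "attention", "imagination"}))        # fish-like: 4
```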

Of course, many in the plant neurobiology community might insist that layers 1 and 2 are sufficient for the label “conscious”.  But if so, then we’d have to make careful distinctions between animal consciousness and plant consciousness, and be clear that we don’t mean plants have imagination or self reflection.

Consciousness is in the eye of the beholder, but plant consciousness doesn’t strike me as a productive proposition.  If we accept it, what then?  Are we required to take their sensibilities into account?  Are we being cruel when we mow the yard or trim the hedges?  Are vegetarians as much killers as carnivores?

All in all, it would be a lot of trouble for a sketchy proposition, an extraordinary proposition that we should require extraordinary evidence for before accepting.

Unless of course I’m missing something?


The ASSC 23 debate on whether artificial intelligence can be conscious

The ASSC (Association for the Scientific Study of Consciousness) had its annual conference on consciousness this week, which culminated in a debate on whether AI can be conscious.

Note: the event doesn’t actually start until the 28:30 minute mark.  The remaining part is about 99 minutes long.

I was delighted to see the discussion immediately become focused on the importance of definitions, since I think the question is otherwise meaningless.  In my humble and totally unbiased opinion, the first speaker, Blake Richards, hit it out of the park with his answer that it depends on which definition of consciousness we’re using, and in noting the issues with the folk definitions, such as subjective experience, phenomenality, etc.

In fact, I would go on to say that just about all of Richards’ positions in this discussion struck me as right.  The only place I think his faith might be misplaced is in our ability to come together on one definition of consciousness that is scientifically measurable.  (And to be fair, it was more an aspiration than a faith.)  I strongly suspect that we’ll always have to qualify which specific version we’re talking about (e.g. access consciousness, exteroceptive consciousness, etc).  But overall I found his hard core functionalism refreshing.

It’s inevitable that this type of conversation turns toward ethics.  Indeed, I think when it comes to folk conceptions of consciousness, the questions are inextricably linked.  Arguably what is conscious is what is a subject of moral worth, and what is a subject of moral worth is conscious.

I got a real kick out of Hakwan Lau’s personality.  As a reminder, he was one of the authors of the paper I shared last week on empirical vs fundamental IIT.

I was also happy to see all the participants reject the zombie concept in the later part of the discussion.

Generally speaking, this was an intelligent, nuanced, and fairly well grounded discussion on the possibilities.

As I noted above, my own view is similar to Richards’.  If we can design a system that reproduces the functional capabilities of an animal, human or otherwise, that we consider conscious, then by whatever standard we’re using, that system will be conscious.  The interesting question to me is what is required to do that.

What do you think?  Is AI consciousness possible?  Why or why not?  And if it is, what would be required to make you conclude there is a consciousness there?
