Is consciousness really a problem?

The Journal of Consciousness Studies has an issue out on the meta-problem of consciousness.  (Unfortunately, it’s paywalled, so you’ll need a subscription, or access to a school network that has one.)

As a reminder, there’s the hard problem of consciousness, a term coined by David Chalmers in 1995 for the question of why or how we have conscious experience, or, as others describe it, how conscious experience “arises” from physical systems.

Then there’s the meta-problem, also more recently coined by Chalmers, on why we think there is a hard problem.  The meta-problem is an issue long identified by people in the illusionist camp, those who see phenomenal consciousness as an illusion, a mistaken concept.

The JCS issue has papers from a number of illusionists, which include the usual suspects, like Daniel Dennett and Keith Frankish discussing the virtues of the illusionist outlook.  It also has an entry by Michael Graziano discussing his attention-schema theory of consciousness.  Graziano gave a description of the meta-problem in his book Consciousness and the Social Brain back in 2013, although he didn’t call it that.  (Graziano has a new book out, which appears to be an updated look at his theory, which I might have to read at some point.)

Picking through the entries, two in particular caught my attention.  One, by Hakwan Lau and Matthias Michel (whose work I’ve highlighted a lot lately), looks at the meta-problem from a socio-historical perspective.  The main gist is that on some subjects, such as consciousness, we psych ourselves out, collectively convincing ourselves that the problem is unsolvable.

This has the perverse effect of making working scientists reluctant to tackle it, while attracting senior scientists, often from other fields, who at the end of their careers are interested in making a revolutionary breakthrough, which often leads to outlandish ideas.  This in turn sets up a feedback loop leading to a breakdown of peer review, credibility, and funding.

The solution, Lau and Michel contend, is to work on incremental gains, building up empirical evidence.  Gradually this will diminish many of the mysteries and demonstrate that the problems are not as insoluble as they might appear.

Of course, many will never accept the theories produced by such an approach.  But as Lau and Michel point out, this is often true in science: Isaac Newton’s contemporaries were uneasy about the action-at-a-distance implied by Newtonian gravity, and Einstein was reluctant to accept the results of quantum mechanics, but the old guard was eventually replaced by a newer generation of scientists who didn’t find the new theories objectionable.

I think there’s a lot to this view.  But I also think the incremental approach has been happening in psychology and neuroscience for a long time.  I’ve read plenty of neuroscience material consisting of studies of aspects of consciousness that studiously avoided the word “consciousness”, focusing instead on specific capabilities and the neural wiring underpinning them.  Much of this material is what personally convinced me that the consciousness problem is overstated.

The other paper that caught my eye has a similar theme.  Justin Sytsma and Eyuphan Ozdemir challenge Chalmers’ contention that perception of the hard problem is widespread among the lay public, that it’s part of our folk psychology.  (There’s a free preprint available.)

They provide evidence showing that people are only slightly less likely to attribute phenomenal experiences like seeing red to a robot as opposed to a human.  The data do show that they’re less likely to attribute pain to the robot, but not as much as might be implied by a widespread feeling that phenomenal consciousness is uniquely human or biological.

In other words, according to the authors, most of the lay public don’t appear to hold a concept of the philosophical version of phenomenal consciousness, and so most of them don’t have the intuitive concern about the hard problem.

If true, I can’t say I find it particularly surprising.  Keith Frankish recently asked on Twitter whether people recalled thinking about consciousness as a child.  I didn’t respond, mostly because I wasn’t sure what I had thought about consciousness as a child.  I certainly wouldn’t have used the word “consciousness”, but I tried to recall whether I might have pondered it in some pre-terminological fashion.

But the truth is, prior to about ten years ago, I didn’t really give consciousness much thought.  The word “conscious” to me meant little more than being awake.  (Even back in my younger days, when I was a dualist.)  I suspect for a lot of people, that’s about the limit of what they consider about it.  Most of them have no problem conceiving of a robot as conscious.

What do you think?  Did you ponder consciousness as a child?  Or before you were interested in philosophy?  In other words, before you read it was a problem, did you actually perceive the problem?

David Chalmers on the meta-problem of consciousness

David Chalmers is famous as the philosopher who coined the hard problem of consciousness, the idea that how and why consciousness is produced from a physical system, how phenomenal experience arises from such a system, is an intractably difficult issue.  He contrasts the hard problem with what he calls “easy problems” such as discriminating between environmental stimuli, integrating information, and reporting on mental states.

Recently, Chalmers has been discussing another problem, the meta-problem of consciousness.  In essence, it’s the problem of why so many people think there is a hard problem.  I give him credit for addressing this, but it’s an issue that has been raised a lot over the years, mostly by people in the illusionist camp, who have questioned whether the hard problem is really a problem.  Crucially, Chalmers admits that the meta-problem, at least in principle, falls into his easy problem category.

This talk is about an hour and 10 minutes.  I recommend sticking around for the Q&A.  The quality of the questions from the Google staff makes it pretty interesting.

One of the things I found interesting in the talk were the multiple references to the idea of consciousness being irreducible.  I’ve pushed back against that idea multiple times on this blog.  I find it strange that anyone familiar with neurological case studies can argue that consciousness can’t be present in lesser or greater quantities, or that aspects of it can’t be missing.

However, what I found interesting is the idea that panpsychism involves an irreducible notion of consciousness.  When you push panpsychists on whether things like a single neuron, a protein, a molecule, an atom, or an electron are conscious, what you usually get back is an assertion that the consciousness in these things isn’t anything like the consciousness we’re familiar with.  It’s a building block of sorts.  Which seems to me like a reduction of our manifest image of consciousness to these more primitive building blocks.

One prominent panpsychist recently equated quantum spin with those building blocks.  This just brings me back to the observation that the more naturalistic versions of panpsychism seem ontologically equivalent to the starkest forms of illusionism, with the differences between them simply coming down to preferred language.

Anyway, those of you who’ve known me for a while will know that my sympathies in this discussion are largely with the illusionists.  I think their explanations about what is going on are the most productive.

Except for one big caveat.  I don’t care for the word “illusion” in this context.  I do have sympathy with the assertion that if phenomenal experience is an illusion, then the illusion is the experience.  It seems more productive to describe experience as something that is constructed.  We have introspective access to the final constructed show, but not to the backstage mechanisms.  That lack of access makes the show look miraculous, when in reality it’s just us not seeing how the magician does the trick.

Chalmers’ main point in discussing the meta-problem seems to be an effort not to cede this discussion to the illusionists.  He points out that there may be solutions to the meta-problem that leave the hard problem intact.

Perhaps, but it seems to me that the most plausible solutions leave the hard problem more as a psychological one, a difficulty accepting that the data provide no support for substance dualism, for any ghost in the machine.  To reconcile with that data, we have to override our intuitions, but that is often true in science.

Unless of course I’m missing something?

The prospects for a scientific understanding of consciousness

Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.

On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position.  However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article.  Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.


It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…

The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Broadly speaking, I agree with Shermer on the hard problem, although with an important caveat.  In my view, it isn’t so much that the hard problem is hopelessly unsolvable; it’s that there is no scientific explanation which will be accepted by those who are troubled by it.  In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it.  The issue is that none of them are accepted by the people who talk about it.  In other words, for me, this seems more of a sociological problem than a metaphysical one.

What are these plausible solutions?  I’ve written about some of them, such as that experience is the brain constructing models of its environment and itself, that it is communication between the perceiving and reflexive centers of the brain and its movement planning centers, or that it’s a model of aspects of its processing as a feedback mechanism.

Usually when I’ve put these forward, I’m told that I’m missing the point.  One person told me I was talking about explanations of intelligence or cognition rather than consciousness.  But when I ask for elaboration, I generally get a repeat of language similar to Chalmers’ or that of other philosophers such as Thomas Nagel, Frank Jackson, or others with similar views.

The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.  This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe.  It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it.  (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)

It’s also a rejection of the lessons of Copernicus and Darwin: that we are part of nature, not something fundamentally above or separate from it.  Just as our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, are not to be trusted, our intuitions formed from introspection, from self-reflection, a source of information proven unreliable in many psychology studies, should not necessarily be taken as data that need to be explained.

Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”.  This is the question of why we think there is a hard problem.  I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.

It implies that neither the hard problem, nor the version of consciousness it is concerned about, the one that remains once all the “easy” problems have been answered, exist.  They are apparitions arising from a data model we build in our brains, an internal model of how our minds work.  But the model, albeit adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.

Incidentally, this isn’t because of any defect in the model.  It serves its purpose.  But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.  This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness.  To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions.  (See quantum mechanics or general relativity.)  We shouldn’t require it for theories of consciousness.

This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have, is a somewhat meaningless question.  It’s really a question of how similar their information processing and primal drives are to ours.  In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.

Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious.  A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish.  And any organism with a brain currently has far more than any technological system, although that may change in time.

But none have the full package, because they’re not human.  We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.

So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain.  The version which is supposed to exist outside of that, the version where “consciousness” is essentially a code word for an immaterial soul, we will never have an understanding of, in the same manner we can’t have a scientific understanding of centaurs or unicorns, because they don’t exist.  The best we can do is study our perceptions of these things.

Unless of course, I’m missing something.  Am I being too hasty in dismissing the hard-problem version of consciousness?  If so, why?  What about subjective experience implies anything non-physical?

A possible answer to the hard problem of consciousness: subjective experience is communication

In 1995, David Chalmers coined the “hard problem of consciousness”:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Chalmers was giving a label to an issue that has existed for a long time, but his labeling of it has given many people clarity on why they find many scientific explanations of how the brain and mind work to be unsatisfactory.  Bring up the scientific understanding of consciousness, and someone is going to ask why all the data processing is accompanied by experience.

My last post discussed a movement in the philosophy of mind to label this distinction as an illusion.  And to be clear, I do think the idea that experience is something separate and apart from the information processing of the brain is an illusion.  That’s not to say that I think subjective experience itself doesn’t exist in some form.

But that still leaves the question of why we have subjective experience.  In earlier posts, I’ve noted that to have any hope of answering that question, we have to be willing to ask what experience actually is, and gave my answer that it is a system building and using internal models of the environment and of itself, in essence an inner world, as a guide to action.

I still think this answer is true, but in conversation with fellow blogger Michael, I realized that there may be a better way of articulating it, one that may resonate a little better with those troubled by the hard problem, at least with those open to explanations other than substance dualism or speculative exotic physics.

I think subjective experience is communication.

Consider that every aspect of experience seems to have a communicative value.  Using the examples Chalmers gave in the quote above, the deep blue communicates information about the object being modeled, such as maybe a deep lake, and the sensation of middle C communicates something about the environment (or given the nature of music, or art in general, the illusion of something).

Other examples are the vividness of red communicating ripe fruit, pain communicating damage to the body, severe pain communicating damage serious enough to perhaps warrant hiding and healing, or the delight of a child’s laughter communicating that the next generation of our genetic legacy is happy and healthy.

Now, the question you’re probably asking is, communication from what to what?  I think the answer is communication from various functional areas of the brain to the movement planning areas.  The source areas include the sensory processing regions, which model the information coming in from the outside world, and the limbic system, which adds valences to the models, that is judgments of preference or aversion.

Image credit: Anatomist90 via Wikipedia
The Limbic System: Image credit: OpenStax College via Wikipedia

(For those familiar with neuroanatomy, by movement planning areas I mean the prefrontal cortex, by sensory processing areas I mean the middle parietal lobe, posterior cingulate cortex, and all the regions that feed into them, and by limbic system I mean the structures commonly identified with that term, such as the amygdala, anterior cingulate cortex, hypothalamus, etc.)

Of course, as I noted in the post on consciousness being predictive simulations, this is actually two-way communication, because those movement planning regions instigate simulations that require the participation of all the source regions.  But what we call experience itself may be the preparation and transmission of that information to the executive centers of the brain, centers whose job it is to fulfill the main evolutionary purpose of brains: to plan and initiate movement.

One quick clarification.  This isn’t an argument for a homunculus existing in those executive centers.  This isn’t the Cartesian theater.  It’s communication from the sensing and emotion subsystems of you to the action oriented subsystem of you, and consciousness involves all the interactions between them.

Now, it’s almost certainly possible to find aspects of experience that science doesn’t have a communicative explanation for yet.  But is it possible to find any for which there is no conceivable explanation?  Even if there is, even if we can find the odd part here or there that has no imaginable explanation, if we can find ones for the vast majority of qualia, doesn’t that mean that most of them are fulfilling a communicative function?  And that experience overall is generally about communication?

Indeed, if you think about it, isn’t this type of communication crucial for a brain’s evolutionary purpose?  How else can the movement planning portions of the brain fulfill their function other than by receiving information from the input processing and evaluative portions?

If so, for a philosophical zombie to exist, it would need to have an alternate mechanism for this communication.  But why wouldn’t that alternate mechanism simply be an alternate implementation of subjective experience?  If this view is correct, zombies are impossible, at least zombies sophisticated enough to reproduce the behavior of a conscious being in a sustainable manner.

So, this seems to be a plausible explanation for what experience is, and why it exists.  It seems like a possible answer to Chalmers’ hard problem.

Unless of course, I’m missing something?  In particular, what aspects of subjective experience might not fit this conception of it?  Are they significant enough to discount the answer?  Or are there other problems I’m not seeing?

Michael Graziano: What hard problem?

Michael Graziano has an article at The Atlantic explaining why consciousness is not mysterious.  It’s a fairly short read (about 3 minutes).  I recommend anyone interested in this stuff read it in full.  (I tweeted a link to it last night, but then decided it warranted discussion here.)

The TL;DR is that the hard problem of consciousness is like the 17th century hard problem of white light.  No color, particularly white, exists except in our brains.  White light is a mishmash of light with different wavelengths, of every color, that our brains simply translate into what we perceive of as white. Our perception of consciousness is much the same:

This is why we can’t explain how the brain produces consciousness. It’s like explaining how white light gets purified of all colors. The answer is, it doesn’t. Let me be as clear as possible: Consciousness doesn’t happen. It’s a mistaken construct. The computer concludes that it has qualia because that serves as a useful, if simplified, self-model. What we can do as scientists is to explain how the brain constructs information, how it models the world in quirky ways, how it models itself, and how it uses those models to good advantage.

I pretty much agree with everything Graziano says in this article, although I’ve learned that dismissing the hard problem often leads to pointless debates about eliminative reductionism.  Instead, I admit that the hard problem is real for those who are troubled by it.  But like the hard problem of white light, it will never have a solution.

Graziano mentions that there is a strong sentiment that consciousness must be a thing, an energy field, or exotic state of matter, something other than information.  This sentiment arises from the same place as subjective experience.  It’s a model our brains construct.  It’s that model that gives us that strong feeling.  (Of course, the strong feeling is itself a model.)  When some philosophers and scientists say that “consciousness is an illusion”, what they usually mean is that this idea of consciousness as a separate thing is illusory, not internal experience itself.

Why is this a valid conclusion?  Well, look at the neuroscience and you won’t find any observations that require energy fields or new states of matter.  What you’ll see are neurons signalling to each other across electrical and chemical synapses, supported by a superstructure of glial cells.  You’ll see nerve impulses coming in from the peripheral nervous system, a lot of processing in the neural networks of the brain, and output from this system in the form of nerve impulses going to the motor neurons connected to the muscles.  You’ll see a profoundly complex information processing network, a computational system.

You won’t find any evidence of something else, of an additional energy or separate state of matter, of anything like a ghost in the machine.  Could something like that exist and just not yet be detected?  Sure.  But that can be said of any concept we’d like to be true.  To rationally consider it plausible, we need some objective data that requires, or at least makes probable, its existence.  And there is none.  (At least none that passes scientific scrutiny.)

There’s only the feeling from our internal model.  We already know that model can be wrong about a lot of other things (like white light).  The idea that it can be wrong about its own substance and makeup isn’t a particularly large logical step.

Graziano finishes with a mention of machine consciousness.  I think machine consciousness is definitely possible, and I’m sure someone will eventually build one in a laboratory, but I wonder how useful it would be, at least other than as a proof of concept.  I see no particular requirement that my self driving car, or just about any autonomous system, have anything like the idiosyncrasies of human consciousness.  It might be a benefit for human interface systems, although even there I tend to think it would add pointless complexity.

Unless I’m missing something?  Am I, or Graziano, missing objective evidence of consciousness being more than information processing?  Are there reasons I’m overlooking to consider our intuitions about consciousness to be more reliable than our intuitions about colors or other things?  Would there be benefits to conscious machines I’m not seeing?

Why I think we will eventually have a scientific understanding of consciousness

It’s a common sentiment, even among many staunch materialists, that we will never understand consciousness.  It’s one I held to some degree until a few years ago.  But the more I’ve read about neuroscience, the more convinced I’ve become that we will eventually understand it, at least at an objective level.

That’s actually an important distinction to make here.  Many discussions of consciousness inevitably include pondering of the hard problem, the problem of understanding how subjective experience, what it’s like to be a conscious being, arises from physical systems.  I suspect we’ll never solve the hard problem, at least not to the satisfaction of those troubled by it.  It will remain a conundrum for philosophers, no matter what kind of progress is eventually made in neuroscience or artificial intelligence.

But I don’t think it’s reasonable to require that science solve it.  Science gave up looking for ultimate understandings centuries ago in favor of settling for pragmatic ones.  It was one of the first steps in the evolution from natural philosophy to modern science.  The approach, that it is better to settle for what can be understood rather than hold out for a perfect and perhaps unattainable understanding, has been an amazingly fruitful one.

Consider also that the entire history of science has been a demonstration that reality doesn’t match our subjective experience.  Subjectively, the earth is stationary and the center of the universe, but we’ve known for centuries that, objectively, it very much isn’t.  Subjectively, humanity is very different from animals, but we’ve known since Darwin that humanity is just another animal species, albeit the alpha of alpha predators.

The objective facts in these areas have taken us farther from our subjective experience.  We have nothing to indicate that the mind will be different.  Any expectation that a scientific understanding of the mind will explain our subjective experience, why red is red, etc, is doomed, I fear, to be a frustrated one.

Often along with that comes an expectation that an understanding of the mind will somehow show our subjective experience is more real than objective reality.  It’s an expectation that there is still something different about us, something that makes us special, that separates us from the rest of nature, that vitalism in some form or another is still true.  It’s a sentiment that ignores the lessons of Copernicus and Darwin.

I fear that this is what motivates a lot of very intelligent people to speculate that consciousness operates using some form of unknown and unknowable physics.  One of the most common is to posit exotic quantum mechanics.  Of course, the mind depends on quantum mechanics, just as every other physical system in the universe.  But proponents of quantum consciousness often make an assertion that it uses an exotic and unknown aspect of quantum physics.

The problem is that there is zero scientific evidence for anything like this exotic physics.  Speculation in this area continues because we don’t yet understand consciousness, and some people conclude that this means there must be some new aspect of reality that we’re not seeing yet.  While this lack of understanding remains true, there’s nothing in mainstream neuroscience to indicate that we will need a new physics to understand the mind.

This isn’t to say that the mind may not use certain quantum phenomena such as entanglement.  After all, plants appear to use it in photosynthesis, and birds in detecting the earth’s magnetic field.  But while biology uses these phenomena, the phenomena themselves still operate according to the scientific understanding of how they work.  The mind using them would be, at most, complications to understanding, not an insurmountable barrier.

But the data most strongly indicate that consciousness arises from neural circuitry.  This circuitry is profoundly complicated, but it operates according to well known physical laws involving chemistry and electricity.  Understanding how the mind arises from it is largely understanding how information is processed in the brain’s neural network.

Because of this, while there are lots of theories of consciousness, I think it’s the ones by neuroscientists, the people actually studying the brain, which are likely to be closest to the truth.  (A quick note here: neurosurgeons, such as Ben Carson or Eben Alexander, are generally not neuroscientists.)  These theories seem to agree that information integration is crucial.  Some stop at integration and declare any integrated information system to have some level of consciousness, leading to a form of philosophical panpsychism.  But I think the better theories see integration as necessary but not sufficient.

My longtime readers know that I’m a fan of Michael Graziano’s Attention Schema Theory, but there are other similar theories out there.  Many of them posit that consciousness is basically a feedback system, allowing the brain to perceive some aspects of its own internal state.  These theories give a data processing explanation for our sense of internal experience, one that doesn’t require anything mystical.  These explanations are far less extraordinary than those requiring exotic physics.  We shouldn’t accept extraordinary claims without extraordinary evidence, particularly when far less extraordinary theories explain the facts.

Of course, there are still huge gaps in our knowledge.  Until those gaps are closed, we can’t completely rule out exotic physics or magic, just as we can’t completely rule out that UFOs are extraterrestrials, that bigfoot is roaming the forests of North America, or that ghosts are haunting old decrepit houses.  But we can note that there is zero actual evidence for any of these things.

None of this is to say that there aren’t aspects of reality that we may never understand.  It’s possible that we’ll never figure out a way to understand singularities at the center of black holes, whether there are other universes, or what actually happens during quantum decoherence.  But unlike these problems, which exist in realms we may never be able to observe, the brain shows no sign of being fundamentally beyond careful observation.

Yes, understanding the brain will be hard, very hard.  But even though there are many people who don’t want the mind to be understood, neuroscientists will continue making progress, year by year, decade by decade.  The gaps will shrink, eventually closing off the notions that depend on them.

I suspect that even when science does achieve an understanding of how consciousness and the mind arise from the brain, there will be many people who refuse to accept it.  It will be the same fights that heliocentrism and evolution once endured.  Many will look at the explanations and insist that their consciousness, their inner experience, simply can’t come from that.  But as I said above, we shouldn’t judge a scientific theory on whether it solves this hard problem.

David Chalmers: How do you explain consciousness?

In this TED talk, David Chalmers gives a summary of the problem whose name he coined, the hard problem of consciousness.

via David Chalmers: How do you explain consciousness? – YouTube.

It seems like people who’ve contemplated consciousness fall into two groups: those who are bothered by the hard problem, and those who are not.  In my mind, one of these camps is seeing something the other is missing.

Naturally, since I fall into the second group, I tend to think it’s those of us who are not bothered by the hard problem who are more aware that our intuitions are not to be trusted in this area.  No matter how much we learn about how the brain works, it will never intuitively feel like we’ve explained the experience of being us.  So, in my mind, the people bothered by the hard problem will never be satisfied, but that will not prevent us from moving forward.

Chalmers talks about three responses to the hard problem.  The first is Daniel Dennett’s view that the hard problem doesn’t really exist, that we will gradually learn more about how the brain works, solving each of the so-called “easy problems”, until we’ve achieved a global understanding of the mind.  I have to say that my view is close to Dennett’s on this.

The second response is panpsychism, the idea that everything is conscious.  From what I’ve read about panpsychism, it’s a view that comes about by defining consciousness as any system that interacts with the environment, or something similar.  By that measure, even subatomic particles have some glimmer of consciousness.

But this definition doesn’t fit the common meaning of the word “consciousness”.  Using such an uncommon definition of a common word allows someone to say something that sounds profound, that everything is conscious, but that when unpacked using their specific definition, is actually a rather mundane statement, that everything interacts with its environment.  My reaction to such verbal jujitsu is to tune out, and that’s what I generally do when talk of panpsychism comes up.

Finally, Chalmers talks about the view that consciousness is something fundamental to reality, perhaps a fundamental force like gravity or electromagnetism.  The idea is that consciousness arises through complex integration (which itself sounds more emergent than fundamental to me) and if we can just measure the degree of complex integration, we have a measure of consciousness.  This is a view that I’ve seen some physicists take.  It’s attractive because it might boil consciousness down to an equation, or a brief set of equations.

Personally, I think the idea of consciousness as fundamental is wishful thinking.  It’s an attempt to boil something complicated and messy down to a simple measurement.  And it still leaves the borderline between conscious and non-conscious entities as some magical dividing line that we can’t understand.

My own view is that consciousness, whatever else it is, is information processing.  The most compelling theories I’ve seen come from neuroscientists such as Michael Gazzaniga and Michael Graziano, who see it as something of a feedback mechanism.  (Just for the record, my sympathy for these guys’ theories has nothing to do with me sharing a first name with them 🙂 )

The brain is not a centrally managed system.  It doesn’t have a central executive command center making decisions.  Rather, it processes information and makes decisions in a decentralized and parallel fashion.  What allows the brain to function somewhat in a unified fashion is a feedback mechanism that we call awareness.

Awareness is the brain assembling information about its current and past states.  It is an information schema that allows the rest of the brain to be aware of what the whole brain is contemplating.  It doesn’t really control what the brain does, but it can affect what the brain will decide to do.

If true, our internal experience is simply this feedback mechanism.  Is this the whole picture?  Almost certainly not.  But it is built on scientific evidence from neuroscience studies.  It will almost certainly have to be revised and expanded as more evidence becomes available.  But I think it is far more promising than talk of fundamental forces and the like.

Of course, even if it is true, it won’t satisfy those who are troubled by the hard problem.  Consciousness as a feedback mechanism and information model still doesn’t get us to the intuitive feeling of being us.  I’m not sure that anything ever will.