The magic step and the crucial fork

Those of you who’ve known me for a while may remember the long fascination I’ve had with Michael Graziano’s attention schema theory of consciousness.  I covered it early in this blog’s history and have returned to it multiple times over the years.  I still think the theory has a lot going for it, particularly as part of an overall framework of higher order theories.  But as I’ve learned more over the years, it’s more Graziano’s approach I’ve come to value than his specific theory.

Back in 2013, in his book, Consciousness and the Social Brain, he pointed out that it’s pretty common for theories of consciousness to explain things up to a certain point, then have a magic step.  For example, integrated information theory posits that structural integration is consciousness, the various recurrent theories posit that the recurrence itself is consciousness, and quantum theories often assert that consciousness is in the wave function collapse.  Why are these things in particular conscious?  It’s usually left unsaid, something that’s supposed to simply be accepted.

Christof Koch, in his book, Consciousness: Confessions of a Romantic Reductionist, relates that once, when presenting a theory that rhythmic firing of layer 5 neurons in the visual cortex might be related to consciousness, he was asked by the neurologist Volker Henn how his theory was really any different from Descartes’ locating the soul in the pineal gland.  Koch’s language and concepts were more modern, Henn argued, but exactly how consciousness arose from that activity was still just as mysterious as how it was supposed to have arisen from the pineal gland.

Koch said he responded to Henn with a promissory note, an IOU, that eventually science would get to the full causal explanation.  However, Koch goes on to describe that he eventually concluded it was hopeless, that subjectivity was too radically different to actually emerge from physical systems.  It led him to panpsychism and integrated information theory (IIT).  (Although in his more recent book, he seems to have backed off of panpsychism, now seeing IIT as an alternative to, rather than elaboration of, panpsychism.)

Koch’s conclusion was in many ways similar to David Chalmers’ conclusion, that consciousness is irreducible and fundamental, making property dualism inevitable, and leading Chalmers to coin the famous “hard problem” of consciousness.  These conclusions also caused Chalmers to flirt with panpsychism.

Graziano, in acknowledging the magic step that exists in most consciousness theories, argued that such theories were incomplete.  A successful theory, he argued, needed to avoid such a step.  But is this possible?  Arguably every theory of consciousness has these promissory notes, these IOUs.  The question might be how small we can make them.

Graziano’s approach was to ask, what exactly are we trying to explain?  How do we know that’s what needs to be explained?  We can say “consciousness”, but what does that mean?  How do we know we’re conscious?  Someone could reply that the only way we could even ask that question is as a conscious entity, but that’s begging the question.  What exactly are we talking about here?

It’s commonly understood that our senses can be fooled.  We’ve all seen the visual illusions that, as hard as we try, we can’t see through.  Our lower level visual circuitry simply won’t allow it.  And the possibility that we might be a brain in a vat somewhere, or be living in a simulation, is often taken seriously by a lot of people.

What people have a much harder time accepting is the idea that our inner senses might have the same limitations.  Our sense of what happens in our own mind feels direct and privileged in a manner that our outer senses don’t.  In many ways, what these inner senses are telling us seems like the most primal thing we can ever know.  But if these senses aren’t accurate, then much like the visual illusions, their reports are not things we can see through, no matter how hard we try.

In his new book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, Graziano discusses an interesting example.  Lord Horatio Nelson, the great British admiral, lost an arm in combat.  Like many amputees, he suffered from phantom limb syndrome, painful sensations from the nonexistent limb.  He famously claimed that he had proved the existence of an afterlife, since if his arm could have a ghost, then so could the rest of him.

Phantom limb syndrome appears to arise from a contradiction between the brain’s body schema, its model of the body, and its actual body.  Strangely enough, as V. S. Ramachandran discussed in his book, The Tell-Tale Brain, the reverse can also happen after a stroke or other brain injury.  A patient’s body schema can become damaged so that it no longer includes a limb that’s physically still there.  They no longer feel that the limb is really theirs.  For some, the feeling is so strong that they seek to have the limb amputated.

Importantly, in both cases, the person is unable to see past the issue.  The body schema is simply too powerful, too primal, and operates at a pre-conscious level.  It can be doubted intellectually, but not intuitively, not at a primal level.

If the body schema exerts that kind of power, imagine what power a schema that tells us about our own mental life must exert.

So for Graziano, the question isn’t how to explain what our intuitive understanding of consciousness tells us about.  Instead, what needs to be explained is why we have that intuitive understanding.  In many ways, Graziano described what Chalmers would later call the “meta-problem of consciousness”, not the hard problem, but the problem of why we think there is a hard problem.  (If Graziano had Chalmers’ talent for naming philosophical concepts, we might have started talking about the meta-problem in 2013.)

Of course, Graziano’s answer is that we have a model of the messy and emergent process of attention, a schema, a higher order representation of it at the highest global workspace level, which we use to control it in top down fashion.  But while the model is effective in providing that feedback and control, it doesn’t provide accurate information for actually understanding the mind.  Indeed, its simplified portrayal of attention, as an ethereal fluid or energy that can be concentrated in or around the head without necessarily being part of it, is actively misleading.  There’s a reason why we are all intuitive dualists.
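To make that point concrete, here’s a minimal toy sketch (every name and number in it is my own invention for illustration, not anything from Graziano’s work): a control loop that regulates a messy process using a crude internal model of it.

```python
import random

def messy_attention(signals, focus_bias):
    """The 'real' process: a noisy competition among signals, nudged by bias."""
    scores = {name: strength + focus_bias.get(name, 0) + random.gauss(0, 0.3)
              for name, strength in signals.items()}
    return max(scores, key=scores.get)  # winner of this moment's contest

def schema(winner):
    """The 'attention schema': a one-item caricature of the messy process.
    It only says attention is pointed at X, nothing about how or why."""
    return {"focus": winner}

signals = {"spider": 0.9, "blog_post": 0.6, "background_hum": 0.2}
focus_bias = {}
for step in range(5):
    model = schema(messy_attention(signals, focus_bias))
    # Top down control driven entirely by the simplified model: if the
    # model says focus drifted off-task, boost the task signal.
    if model["focus"] != "blog_post":
        focus_bias["blog_post"] = focus_bias.get("blog_post", 0) + 0.3
    print(step, model)
```

The schema here carries nothing about the scores, the noise, or the competition, yet the control loop still works.  That’s the sense in which a model can be effective for feedback and control while being actively misleading as a description of the underlying process.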

At this point we reach a crucial juncture, a fork in the road.  You will either conclude that Graziano’s contention (and similar ones from other cognitive scientists) is an attempt to pull a fast one, a cheat, a dodge from confronting the real problem, or that it’s plausible.  If you can’t accept it, then consciousness likely remains an intractable mystery for you, and concepts like IIT, panpsychism, quantum consciousness, and a host of other exotic solutions may appear necessary.

But if you can accept that introspection is unreliable, then a host of grounded neuroscience theories, such as global workspace and higher order thought, including the attention schema, become plausible.  Consciousness looks scientifically tractable, in a manner that could someday result in conscious machines, and maybe even mind uploading.

I long ago took the fork that accepts the limits of introspection, and the views I’ve expressed on this blog reflect it.  But I’ve been reminded in recent conversations that this is a fork many of you haven’t taken.  It leads to very different underlying assumptions, something we should be cognizant of in our discussions.

So which fork have you taken?  And why do you think it’s the correct choice?  Or do you think there even is a real choice here?

Michael Graziano on mind uploading

Michael Graziano has an article at The Guardian, which feels like an excerpt from his new book, exploring what might happen if we can upload minds:

Imagine that a person’s brain could be scanned in great detail and recreated in a computer simulation. The person’s mind and memories, emotions and personality would be duplicated. In effect, a new and equally valid version of that person would now exist, in a potentially immortal, digital form. This futuristic possibility is called mind uploading. The science of the brain and of consciousness increasingly suggests that mind uploading is possible – there are no laws of physics to prevent it. The technology is likely to be far in our future; it may be centuries before the details are fully worked out – and yet given how much interest and effort is already directed towards that goal, mind uploading seems inevitable. Of course we can’t be certain how it might affect our culture but as the technology of simulation and artificial neural networks shapes up, we can guess what that mind uploading future might be like.

Graziano goes on to discuss how this capability might affect society.  He explores an awkward conversation between the original version of a person and their uploaded version, and posits a society that sees those living in the physical world as being in a sort of larval stage that they would all eventually graduate from into the virtual world of their uploaded elders.

Mind uploading is one of those concepts that a lot of people tend to dismiss out of hand.  Responses seem to vary between it being too hopelessly complicated for us to ever accomplish, to it being impossible, even in principle.  People who have no problem accepting the possibility of faster than light travel, time travel, or many other scientifically dubious propositions, draw the line at mind uploading, even though the physics of mind uploading is far more feasible than the physics of those other options.

That’s not to say that mind uploading should be taken as a given.  Something may eventually turn out to make it impossible.

For example, I’m currently reading Christof Koch’s new book, The Feeling of Life Itself, in which Koch explores the integrated information theory (IIT) of consciousness.  A big part of IIT is positing that the physical causal structure of the system is crucial.  As far as IIT is concerned, mind uploading is pointless because, even if the information processing is reproduced, if the physical causal structure isn’t, the resulting system won’t be conscious.

I think Koch too quickly dismisses the idea of it being sufficient to reproduce the causal structure at a particular level of organization.  But if he’s right, mind uploading becomes far more difficult.  Although even in that scenario, the possibility of neuromorphic hardware, computer hardware engineered to be physically similar to a nervous system, including physical neurons, synapses, and so on, may still eventually make it possible.

Even if neuromorphic hardware isn’t required in principle, it might turn out to be required in practice.  With Moore’s Law sputtering, the computing power to simulate a human brain may never be practical with the traditional von Neumann computer architecture.  A whole brain emulation might be conscious using the standard serialized architecture, but unable to run at anything like the speed of an organic brain.  It might take a neuromorphic architecture, or at least a similarly massively parallel one, to make running a mind in realtime feasible.
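For a rough sense of the scale involved, here’s a back-of-envelope sketch.  All the figures are commonly cited round numbers, and each is uncertain by at least an order of magnitude:

```python
# Back-of-envelope estimate only; every constant here is a rough,
# commonly cited round number, not a measured fact.

synapses = 1e15          # rough synapse count for a human brain
avg_rate_hz = 1.0        # rough average signaling rate per synapse
flops_per_event = 10     # optimistic cost to simulate one synaptic event

required = synapses * avg_rate_hz * flops_per_event  # ~1e16 FLOPS

serial_core = 1e10       # very roughly, one fast conventional CPU core

print(f"required: {required:.0e} FLOPS")
print(f"slowdown on one serial core: {required / serial_core:.0e}x")
```

On those numbers, a realtime emulation sits about a million times beyond a single serial core, which is why some form of massive parallelism, neuromorphic or otherwise, looks necessary in practice.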

However, all of these considerations strike me as engineering difficulties that can eventually be overcome.  Brains exist in nature, and unless anyone finds something magical about them, there’s no reason in principle their operation won’t eventually be reproducible technologically.

Although this may be several centuries in the future.  I do think there are good reasons to be skeptical of singularity enthusiast / alarmist predictions that it will happen in a few years.  Our knowledge of the brain and mind still has a long way to go before we’ll be able to produce a system with human level intelligence, much less reproduce a particular one.

On the awkward conversation that Graziano envisions between the original and uploaded person, with the original in despair about being the obsolete version, I think the solution would be to simply have mind backups made periodically, but not run until the original person dies.  That should avoid a lot of the existential angst of that conversation.

That’s assuming that there isn’t an ability to share memories between the copies, with maybe the original receiving them through a brain implant of some type.  I think being able to remember being the virtual you would make being the mortal physical version a lot easier to bear.  The architecture of the brain may prevent such sharing from ever being feasible; if so, then the non-executing backups seem the way to go.

I don’t know whether mind uploading will ever be possible, but in a universe ruled by general relativity, not to mention the conservation of energy, it seems like the only plausible way humans may ever be able to go to the stars in person.  If it does turn out for some reason to be impossible, then humanity might be confined to this solar system, with the universe belonging to our AI progeny.

What do you think?  Is mind uploading impossible?  If so, why?  Or is it possible and I’m too pessimistic about it happening in our lifetimes?  Are there reasons to think the singularity is near?

Michael Graziano’s attention schema theory

It’s been a while since I’ve had a chance to highlight Graziano’s attention schema theory.  This brief video is the very barest of sketches, but I think it gets the main idea across.

Those of you who’ve known me for a while might remember that I was once quite taken with this theory of consciousness.  I still think it has substantial value in understanding metacognition and top down control of attention, but I no longer see it as the whole story, seeing it as part of a capability hierarchy.

Still, the attention schema theory makes a crucial point.  What we know of our own consciousness is based on an internal model of it that our brain constructs.  Like all models, it’s simplified in a way that optimizes it for adaptive feedback, not for purposes of understanding the mind.

The problem is that this model feels privileged, to the extent that the proposition that what it shows us isn’t accurate is simply dismissed out of hand by many people.  That our external senses aren’t necessarily accurate is relatively easy to accept, but the idea that our inner senses might have the same limitations is often fiercely resisted.

But there is a wealth of scientific research showing that introspection is unreliable.  It actually functions quite well in day to day life.  It’s only when we attempt to use it as evidence for how the mind works that we run into trouble.  Introspective data that is corroborated by other empirical data is fine, but when it’s our only source of information, caution is called for.

Graziano’s contention that conscious awareness is essentially a data model puts him in the illusionist camp.  As I’ve often said, I think the illusionists are right, although I don’t like calling phenomenal consciousness an illusion, since that implies it doesn’t exist.  I currently prefer the slightly less contentious assertion that it only exists subjectively, as a loose and amorphous construction from various cognitive processes.

Michael Graziano: What hard problem?

Michael Graziano has an article at The Atlantic explaining why consciousness is not mysterious.  It’s a fairly short read (about 3 minutes).  I recommend anyone interested in this stuff read it in full.  (I tweeted a link to it last night, but then decided it warranted discussion here.)

The TL;DR is that the hard problem of consciousness is like the 17th century hard problem of white light.  No color, particularly white, exists except in our brains.  White light is a mishmash of light of different wavelengths, of every color, that our brains simply translate into what we perceive as white. Our perception of consciousness is much the same:

This is why we can’t explain how the brain produces consciousness. It’s like explaining how white light gets purified of all colors. The answer is, it doesn’t. Let me be as clear as possible: Consciousness doesn’t happen. It’s a mistaken construct. The computer concludes that it has qualia because that serves as a useful, if simplified, self-model. What we can do as scientists is to explain how the brain constructs information, how it models the world in quirky ways, how it models itself, and how it uses those models to good advantage.
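The white light half of the analogy is easy to make concrete.  The sketch below uses made-up Gaussian stand-ins for the eye’s three cone sensitivities (not real colorimetry data), but it shows the key phenomenon: two physically different lights can collapse to nearly the same three-number internal code, which is all the brain ever receives.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 301)  # visible range, in nm

def gaussian(peak, width):
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

# Stand-in cone sensitivities: simple Gaussians, NOT real CIE data.
S, M, L = gaussian(445, 40), gaussian(540, 40), gaussian(565, 40)

def cone_code(spectrum):
    """Everything the visual system keeps about a spectrum: three numbers."""
    r = np.array([(spectrum * c).sum() for c in (S, M, L)])
    return r / r.sum()

flat_white = np.ones_like(wavelengths)  # light containing every wavelength
rgb_display = (gaussian(450, 5) + gaussian(540, 5)
               + gaussian(610, 5))      # just three narrow spectral lines

print(cone_code(flat_white))   # roughly [0.30, 0.35, 0.35]
print(cone_code(rgb_display))  # roughly [0.28, 0.35, 0.37], nearly the same
```

Two very different physical inputs, one internal code: the “white” is in the encoding, not in the light.  That’s the structure Graziano is claiming for consciousness as well.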

I pretty much agree with everything Graziano says in this article, although I’ve learned that dismissing the hard problem often leads to pointless debates about eliminative reductionism.  Instead, I admit that the hard problem is real for those who are troubled by it.  But like the hard problem of white light, it will never have a solution.

Graziano mentions that there is a strong sentiment that consciousness must be a thing, an energy field, or exotic state of matter, something other than information.  This sentiment arises from the same place as subjective experience.  It’s a model our brains construct.  It’s that model that gives us that strong feeling.  (Of course, the strong feeling is itself a model.)  When some philosophers and scientists say that “consciousness is an illusion”, what they usually mean is that this idea of consciousness as separate thing is illusory, not internal experience itself.

Why is this a valid conclusion?  Well, look at the neuroscience and you won’t find any observations that require energy fields or new states of matter.  What you’ll see are neurons signalling to each other across electrical and chemical synapses, supported by a superstructure of glial cells.  You’ll see nerve impulses coming in from the peripheral nervous system, a lot of processing in the neural networks of the brain, and output from this system in the form of nerve impulses going to the motor neurons connected to the muscles.  You’ll see a profoundly complex information processing network, a computational system.

You won’t find any evidence of something else, of an additional energy or separate state of matter, of anything like a ghost in the machine.  Could something like that exist and just not yet be detected?  Sure.  But that can be said of any concept we’d like to be true.  To rationally consider it plausible, we need some objective data that requires, or at least makes probable, its existence.  And there is none.  (At least none that passes scientific scrutiny.)

There’s only the feeling from our internal model.  We already know that model can be wrong about a lot of other things (like white light).  The idea that it can be wrong about its own substance and makeup isn’t a particularly large logical step.

Graziano finishes with a mention of machine consciousness.  I think machine consciousness is definitely possible, and I’m sure someone will eventually build one in a laboratory, but I wonder how useful it would be, at least other than as a proof of concept.  I see no particular requirement that my self driving car, or just about any autonomous system, have anything like the idiosyncrasies of human consciousness.  It might be a benefit for human interface systems, although even there I tend to think it would add pointless complexity.

Unless I’m missing something?  Am I, or Graziano, missing objective evidence of consciousness being more than information processing?  Are there reasons I’m overlooking to consider our intuitions about consciousness more reliable than our intuitions about colors or other things?  Would there be benefits to conscious machines I’m not seeing?

Michael Graziano on building a brain

I’ve written a few times on the attention schema theory of consciousness.  It’s a theory I like because it’s scientific, eschewing any mystical steps, such as assuming that consciousness just magically arises at a certain level of complexity.  It’s almost certainly not perfect, but I think it’s a major step in the right direction.

Michael Graziano, the author of the theory, has a new article up at Aeon, describing, under his theory, the essential steps in giving a computer consciousness.  Of course, the devil is in the details, as they always will be.  But it’s a fascinating new way to describe the theory.  If you’ve read my previous posts on this and still didn’t feel clear about it, I recommend checking out his article.

Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.

…In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow.

Read the rest at Aeon.


The attention schema theory of consciousness deserves your…attention

Neural Correlates Of Consciousness (Photo credit: Wikipedia)

Michael Graziano published a brief article in the New York Times on his attention schema theory of consciousness, which a number of my fellow bloggers have linked to and discussed.  I’m not sure this article was the clearest description of it that he’s given, and I suspect the title biased readers to think his theory is another consciousness-is-an-illusion one, which affected some of the discussion.

I’ve written about this theory before when I reviewed his book, ‘Consciousness and the Social Brain’, and alluded to it in several other posts.  I’m doing another post on it, partially to take another shot at describing it, partially to reaffirm my understanding of it, and partially to do my small part to call attention to a scientific theory of consciousness that I think deserves your attention.

Before starting on the theory, I think it’s important to understand that the scientific evidence doesn’t point to the brain operating under any central control.  There’s no homunculus, no little person inside controlling the brain.  The brain is more of a distributed set of modules that operate somewhat independently.

The first thing to understand with the theory is the distinction between attention and awareness.  Attention is the process of your brain deciding which sensory inputs to give priority processing to.  It’s a messy emergent process with, again, no central control.  It can be top down, such as your attention to reading this blog entry, or bottom up, such as the attention you’d give to a spider crawling up your arm.

These sensory signals are constantly streaming into your brain, each one striving for attention.  There is an ongoing contest in your brain, with signals effectively forming coalitions, coming to prominence, and then receding in favor of the next ascendant coalition of signals.

Some philosophers of mind stop here and say that this is consciousness, and that the feeling that there is anything else, that there is an inner experience of some kind, is an illusion.  But if this is an illusion, then what is experiencing the illusion?  And how is the illusion arising?  And how are the top down attentional states referenced above developed?

The answer may be awareness.  Awareness is not attention.  Your attention can be drawn to something without you being aware of it.  This is something every magician and illusionist knows.  They often misdirect your attention, without you being aware of it, which allows them to perform seeming feats of magic.

But if awareness isn’t attention, then what is it?  According to this theory, it is information.  Awareness is a model, an executive summary in your brain of the messy and emergent process of attention.  Like any executive summary, it lacks a lot of detailed information, it isn’t always accurate, and is by nature incomplete.

Compare this to what we know about the relationship between consciousness and the subconscious.  We are conscious of many things, but a lot more things go on within our subconscious that we have only incomplete or hazy information about, and much goes on that we simply have no information on.

In other posts, I’ve used the metaphor of a city newspaper.  The city is the brain, and the newspaper is awareness.  The newspaper gathers information, summarizes and simplifies it, and then makes it available to the rest of the city.  It is a feedback mechanism that allows the components of the city to know a summary of what is happening with all the other components of the city.

Awareness serves the same function in your brain.  It’s a feedback mechanism that allows the brain to monitor its attentional state.  According to the theory, it’s this feedback mechanism, this schema, that gives us our feeling of inner experience, of essentially experiencing our experience.
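Here’s a toy rendering of that picture (all names and numbers invented for illustration): signals compete with no central controller, awareness publishes a lossy summary, and independent modules read that summary and react.  Influence, not control.

```python
import random

signals = {"spider_on_arm": 0.9, "blog_text": 0.7, "traffic_noise": 0.4}

def attention():
    """The messy contest: noisy competition, no central controller."""
    scores = {s: v + random.gauss(0, 0.25) for s, v in signals.items()}
    return max(scores, key=scores.get)

def publish(winner):
    """Awareness as executive summary: one headline, details discarded."""
    return {"headline": winner}

# Independent readers of the 'newspaper'; each decides for itself.
def motor_module(report):
    return "flinch" if report["headline"] == "spider_on_arm" else "keep still"

def speech_module(report):
    return f"I am attending to {report['headline']}"

for moment in range(4):
    report = publish(attention())
    print(motor_module(report), "|", speech_module(report))
```

Nothing in the published summary records the scores or the losing coalitions, which matches the hazy, incomplete view we have of our own subconscious processing.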

Another aspect of this theory is the idea that, just as we have an attention schema for our own attention state, we also have attention schemata for other minds.  The idea is that the same brain circuitry that processes awareness for our inner experience also processes our perceptions of what others are thinking.  For example, when we watch another person look at an apple, we model their attentional state and understand that their attention is on the apple.

In other words, consciousness is our theory of mind pointed back at ourselves, and our theory of mind is our awareness feedback mechanism pointed at other perceived minds.  (I’m tempted to go off on a tangent here about the importance of understanding yourself in order to understand others, but I think I’ll save that for some other time.)

Graziano feels that consciousness has at least some control over our actions, that asserting that it doesn’t, as many epiphenomenalist theories of consciousness do, ignores the main thing we can know for sure about consciousness, that we can describe it.  I think that’s why his preferred metaphor for describing the attention schema is of a general plotting strategies with a map and toy soldiers serving as a model of the real battlefield.

I’m sure Graziano has his expert reasons for believing this, but based on all I’ve read, I’m less sure about consciousness being in control, thinking that maybe a better description might be to say that consciousness has causal influence.  I think this is one reason why I prefer the newspaper metaphor.  Unlike a general, a newspaper doesn’t have control over what happens in the city (at least not directly), but it has substantial causal influence through the information that it makes available.  The city, or more accurately the various factions within the city, may or may not use the information provided by the newspaper in their decisions.

This conception also melds well with Michael Gazzaniga’s description of the interpreter functionality which seems to be revealed by split-brain patient experiments.  These experiments are among the indications that the brain isn’t controlled from any one central point.  The mechanism producing the attention schema is the interpreter, or at least a crucial part of it.

So, why am I enthusiastic about this theory?  Well, first, it seems solidly rooted in neuroscience and psychology.  In his book, Graziano discusses the empirical support for the theory.  He admits that the support is still incomplete, and that the theory may have to be modified as more data becomes available.  This is normal for a scientific theory.

Second, the theory doesn’t invoke an unknown magical step.  For example, the integrated information theory posits that consciousness arises from the integration of information without being able to describe exactly how much integration is necessary, or why integrated entities like the internet or the tax code aren’t conscious (at least not without making counter-intuitive assertions that they are conscious but with no ability to communicate with us).  The attention schema theory sees integration as necessary for consciousness, but not sufficient by itself.

Third, the theory doesn’t dismiss inner experience as an illusion.  Its description of a feedback mechanism actually gives an explanation for the intuitive feeling of the homunculus that we all have.

And fourth, it gives insight into the type of architecture that might eventually be necessary for an artificial intelligence to be conscious, while showing how unlikely it is that such an architecture will come about by accident.

Is this theory the natural selection of consciousness, the kind of foundational explanation Graziano admits he is looking for?  I don’t know, but it feels like at least an important step toward that theory.  This theory will rise or fall on whether or not the data support it, but its grounding in the data already available makes me think it’s a closer approximation of that final theory than most of the other theories that often get tossed around.

David Chalmers: How do you explain consciousness?

In this TED talk, David Chalmers gives a summary of the problem whose name he coined, the hard problem of consciousness.

via David Chalmers: How do you explain consciousness? – YouTube.

It seems like people who’ve contemplated consciousness fall into two groups, those who are bothered by the hard problem, and those who are not.  In my mind, one of these camps is seeing something the other is missing.

Naturally, since I fall into the second one, I tend to think it’s those of us who are not bothered by the hard problem who are more aware of the fact that our intuitions are not to be trusted in this area.  No matter how much we learn about how the brain works, it will never intuitively feel like we’ve explained the experience of being us.  So, in my mind, the people bothered by the hard problem will never be satisfied, but that will not prevent us from moving forward.

Chalmers talks about three responses to the hard problem.  The first is Daniel Dennett’s view that the hard problem doesn’t really exist, that we will gradually learn more about how the brain works, solving each of the so called “easy problems”, until we’ve achieved a global understanding of the mind.  I have to say that my view is close to Dennett’s on this.

The second response is panpsychism, the idea that everything is conscious.  From what I’ve read about panpsychism, it’s a view that comes about by defining consciousness as any system that interacts with the environment, or something similar.  By that measure, even subatomic particles have some glimmer of consciousness.

But this is a definition of consciousness that doesn’t fit the common meaning of the word “consciousness”.  Using such an uncommon definition of a common word allows someone to say something that sounds profound, that everything is conscious, but that when unpacked using their specific definition, is actually a rather mundane statement, that everything interacts with its environment.  My reaction to such verbal jujitsu is to tune out, and that’s what I generally do when talk of panpsychism comes up.

Finally, Chalmers talks about a view of consciousness as something fundamental to reality, perhaps like a fundamental force such as gravity or electromagnetism.  The idea is that consciousness arises through complex integration (which itself sounds more emergent than fundamental to me) and if we can just measure the degree of complex integration, we have a measure of consciousness.  This is a view that I’ve seen some physicists take.  It’s attractive because it might boil consciousness down to an equation, or a brief set of equations.

Personally, I think consciousness as fundamental or whatever is wishful thinking.  It’s an attempt to boil something complicated and messy down to a simple measurement.  And it still leaves the borderline between conscious and non-conscious entities as some magical dividing line that we can’t understand.

My own view is that consciousness, whatever else it is, is information processing.  The most compelling theories I’ve seen come from neuroscientists such as Michael Gazzaniga and Michael Graziano, who see it as something of a feedback mechanism.  (Just for the record, my sympathy for these guys’ theories have nothing to do with me sharing a first name with them 🙂 )

The brain is not a centrally managed system.  It doesn’t have a central executive command center making decisions.  Rather, it processes information and makes decisions in a decentralized and parallel fashion.  What allows the brain to function somewhat in a unified fashion is a feedback mechanism that we call awareness.

Awareness is the brain assembling information about its current and past states.  It is an information schema that allows the rest of the brain to be aware of what the whole brain is contemplating.  It doesn’t really control what the brain does, but it can affect what the brain will decide to do.

If true, our internal experience is simply this feedback mechanism.  Is this the whole picture?  Almost certainly not.  But it is built on scientific evidence from neuroscience studies.  It will almost certainly have to be revised and expanded as more evidence becomes available.  But I think it is far more promising than talk of fundamental forces and the like.

Of course, even if it is true, it won’t satisfy those who are troubled by the hard problem.  Consciousness as a feedback mechanism and information model still doesn’t get us to the intuitive feeling of being us.  I’m not sure that anything ever will.

My philosophy, so far — part II | Scientia Salon

Massimo Pigliucci is doing an interesting series of posts on his philosophical positions.

In the first part [19] of this ambitious (and inevitably, insufficient) essay I sought to write down and briefly defend a number of fundamental positions that characterize my “philosophy,” i.e., my take on important questions concerning philosophy, science and the nature of reality. I have covered the nature of philosophy itself (as distinct, to a point, from science), metaphysics, epistemology, logic, math and the very nature of the universe. Time now to come a bit closer to home and talk about ethics, free will, the nature of the self, and consciousness. That ought to provide readers with a few more tidbits to chew on, and myself with a record of what I’m thinking at this moment in my life, for future reference, you know.

via My philosophy, so far — part II | Scientia Salon.

I find myself agreeing with most of Massimo’s positions.  I agree with his quasi-realist stance on morality (see my morality posts for details), and his position on free will compatibilism.

Until a few months ago, I have to admit that I would not have agreed with him on consciousness, that is I thought there was a good chance that it was an illusion, or at least our common intuitions about it were.  After reading Michael Graziano’s ‘Consciousness and the Social Brain’, I’ve changed my views.  I now think consciousness is a real thing and that it is a model of the attentional state of the brain.

However, I do think his skepticism of mind uploading is unwarranted. If the mind indeed arises from the physical operation of the brain, I can’t see any reason why it shouldn’t eventually be possible to analyze that physical operation and recreate it, either physically or in a virtual environment. Even if consciousness ends up requiring wet chemical reactions, it still seems like something we’d eventually be able to recreate, although at that point you might refer to it as engineered life rather than uploading.

Now, I do think there is plenty of room for skepticism that it’s going to happen in 20 years and lead to a transcendent “rapture of the nerds” singularity, but I see that as a separate issue from us eventually being able to record, store, and re-instantiate our minds. It might be centuries before it’s possible, but short of substance dualism or some other ghost in the machine mechanism being true, I think humans will eventually do it. (Assuming we don’t drive ourselves extinct first.)

How apraxia got my son suspended from school – Michael Graziano – Aeon

I’ve written before about Michael Graziano and his attention schema theory of consciousness, which seems to me to be the best candidate right now for a scientific theory that actually explains consciousness without resorting to magic steps or simply asserting that it doesn’t exist.

But this article isn’t about that.  It’s a sobering tale of what happened to his son, who exhibited some indications of being a special needs child, but instead was ostracized and bullied by the school staff.  The most sobering part of this tale is the realization that anyone not as educated as he was would likely have been completely crushed by the system, along with their child.

A few months ago, my son, who is in second grade, went on a field trip. As the class assembled in the parking lot, a new child joined in. He had metal leg braces and difficulty walking. Nobody quite knew how to talk to him and so he was left by himself at the edge of the crowd. But my son seemed drawn to him. As the little boy in braces began to struggle up the steps of the bus, my son went over to help and then sat beside him. Throughout the bus ride, they talked together. According to the teachers, that new little boy soon seemed like the happiest child in the group. One of the most sociable children in the class had made friends with him, and that goes a long way towards building self-esteem when you feel isolated and anxious.

I’m very proud of what my son did. He showed compassion. He was still a new pupil himself, and he had suffered bullying related to a disability of his own. The way he was treated at his previous school was so horrible that he might easily have decided to pay it back rather than forward. But kids can be amazingly smart about how to treat one another. After all, it wasn’t the children who bullied him at his old school. It was the adults.

read more at How apraxia got my son suspended from school – Michael Graziano – Aeon.

Is the United States conscious?

AVHRR satellite image of the 48 contiguous states (Photo credit: Wikipedia)

I saw this interesting post by Eric Schwitzgebel on whether or not regarding the US as a conscious entity is compatible with materialism.  In the post, he examines an objection by David Chalmers, which is interesting, but not something that particularly resonates with me, seeming like a just-so rationale for a pre-intuited conclusion.  Eric also links to a paper he wrote on this, but that I haven’t read yet.

But his post reminded me that, ever since I read Michael Graziano’s book, ‘Consciousness and the Social Brain‘, where Graziano discusses his attention schema theory of consciousness, it’s occurred to me that a nation like the United States might be conscious.

A quick reminder.  The attention schema theory is that consciousness, awareness, is a data model, in the brain, of attention.  And attention is the messy emergent process of various coalitions of signals in the brain competing for resources, with some coalitions winning temporarily, until the next coalition unseats them.  In other words, consciousness is a model of some aspects of the information processes going on in the brain.  It’s a feedback mechanism of the brain to some of its own processing.

Does the US have this?  I think the answer is yes.  We have pollsters constantly gauging American opinions on various topics.  We have sociologists, historians, journalists, and many other types of information aggregators constantly researching trends in American thought, and publishing their findings, making them available to all of us, to the whole system.

In other words, we have a model of what’s going on in the minds of the nation, a model of its attentional state.  This is often referred to, with metaphorical intention, as our “collective consciousness”.  But this line of reasoning makes me wonder how metaphorical it really is.

Like the attention schema idea of awareness, this model doesn’t have direct control over what happens in the country, but its information is available to those who do.  And it affects and modifies what collective decisions we make in markets, elections, and other decision mechanisms.

But does this count as consciousness?  Well, as I’ve written before, ultimately consciousness is in the eye of the beholder.  And I could see an objection being made that we can’t communicate with the US as a whole, only with its constituents.  But this feels a bit like a cluster of neurons complaining that it can’t communicate with the whole brain.

I don’t know whether the US counts as a conscious entity, but I think it has a much better claim to it than the internet, or other things people sometimes contemplate being conscious.