The prospects for a scientific understanding of consciousness

Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.

On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position.  However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article.  Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.


It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Broadly speaking, I agree with Shermer on the hard problem, but with an important caveat.  In my view, it isn’t so much that the hard problem is hopelessly unsolvable; it’s that no scientific explanation will be accepted by those who are troubled by it.  In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it.  The issue is that none of them are accepted by the people who talk about it.  In other words, for me, this seems more of a sociological problem than a metaphysical one.

What are these plausible solutions?  I’ve written about some of them, such as that experience is the brain constructing models of its environment and itself, that it is communication between the perceiving and reflexive centers of the brain and its movement planning centers, or that it’s a model of aspects of its processing as a feedback mechanism.

Usually when I’ve put these forward, I’m told that I’m missing the point.  One person told me I was talking about explanations of intelligence or cognition rather than consciousness.  But when I ask for elaboration, I generally get a repeat of language similar to that of Chalmers or of other philosophers with similar views, such as Thomas Nagel or Frank Jackson.

The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.  This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe.  It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it.  (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)

It’s also a rejection of the lessons from Copernicus and Darwin: that we are part of nature, not something fundamentally above or separate from it.  Just as our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, are not to be trusted, our intuitions formed from introspection, from self-reflection, a source of information shown to be unreliable in many psychology studies, should not necessarily be taken as data that need to be explained.

Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”.  This is the question of why we think there is a hard problem.  I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.

It implies that neither the hard problem, nor the version of consciousness it is concerned with, the one that remains once all the “easy” problems have been answered, exists.  They are apparitions arising from a data model we build in our brains, an internal model of how our minds work.  But the model, albeit adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.

Incidentally, this isn’t because of any defect in the model.  It serves its purpose.  But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.  This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness.  To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions.  (See quantum mechanics or general relativity.)  We shouldn’t require it for theories of consciousness.

This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have, is a somewhat meaningless question.  It’s really a question of how similar their information processing and primal drives are to ours.  In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.

Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious.  A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish.  And any organism with a brain currently has far more than any technological system, although that may change in time.

But none have the full package, because they’re not human.  We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.

So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain.  The version which is supposed to exist outside of that, the version where “consciousness” is essentially a code word for an immaterial soul, we will never have an understanding of, in the same manner we can’t have a scientific understanding of centaurs or unicorns, because they don’t exist.  The best we can do is study our perceptions of these things.

Unless of course, I’m missing something.  Am I being too hasty in dismissing the hard-problem version of consciousness?  If so, why?  What about subjective experience implies anything non-physical?

This entry was posted in Mind and AI. Bookmark the permalink.

92 Responses to The prospects for a scientific understanding of consciousness

  1. Steve Ruis says:

    I read your former posts and you are way ahead of me on this topic, but I think we will finally solve even the hard problem. As with most mental properties I think consciousness is something that co-evolved along the way (there’s a term for it, but I am no biologist). I think what distinguishes us from other animals (in degree, not necessarily exclusively) is the mental construct we call imagination. With it we create constructs of the world around us and then can do experiments on that construct rather than having to do them in real life. E.g. “is that rustling of the grass due to the wind or a leopard?” In our imagination we can see ourselves stepping up to that area of grass and seeing … grass … or the fangs of a leopard. Since the consequences of making a mistake could be lethal we take Door #2 and avoid the site of the rustling. This tool which helps us survive is amplified the more we can detail the experiments, which gives a form of inner life to our interior debates over our options.

    Possibly this property we call imagination was due to something other than evolutionary pressure; maybe it was a mistake, but it is a damned useful mistake and evolution selected for it. Certainly a certain number of functioning neurons are required, etc., etc.


    • I think you’re right about imagination being an important capability. But I don’t think it was a mistake. It enhances our survivability. The earliest nervous systems simply responded to direct stimuli: touches, the presence of certain chemicals, etc. Perception from distance senses dramatically widened the scope in space of what our nervous system could respond to. Memory and imagination further expanded the scope but now in time. Together, perception and imagination dramatically increased the repertoire of available actions an organism could take, with an ability to respond to stimuli in an increasing envelope of time and space.

      Fish imagination is very limited, usually only able to look ahead a few seconds into the future. Land animals can generally imagine much further ahead, for minutes, or in some cases hours. But only humans can imagine a future days, weeks, months, or years ahead of time. I think what gives us this ability is symbolic thought. Imagine attempting to think about what will happen in 2020 if you didn’t have the symbolic framework of the calendar.


  2. jayarava says:

    “The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.”

    Which would be the polar opposite of what John Searle says about consciousness, so not sure why he gets mentioned in that sentence.

    Otherwise, yes. The example I use is standing on a hill watching a sunset. Various senses – kinesthetic, proprioceptive, inner-ear, viscera – all tell us we are at rest. This much is certain! Visual sense confirms we are at rest with respect to the hill and local surroundings. There is apparent movement on the horizon, therefore the sun is “setting”. This picture makes sense, feels right, and until 1610 most right-thinking people would have agreed. It’s just that it’s completely wrong, because our senses are tuned in such a way that we don’t experience Coriolis forces even though we are subject to them. Not because we are stupid or anything like that. “The sun moving” is the logical conclusion until you invent the telescope to prove it isn’t happening.

    Consciousness is like this. We observe it to be one thing, through the senses that we have available to us, and draw the wrong conclusions. The “self” is a complex of experiences, or even perspectives on experience, that we call self and everyone knows what we mean because they experience it also. But there is no such thing outside of the experience of it.

    I think part of the legacy for a lot of these philosophical problems is that they are phrased in the abstract (the supposed realm of pure intellect and reason). What is the nature of consciousness? Well, that’s a question about an abstraction and the answer is always going to be that an abstraction has the nature of being abstract. Deduction only leads to restatements of your axioms. There probably is no shared quality that all experiences have that would make up the category of “consciousness”, especially since self-reflexivity is also an experience.


    • Good catch on Searle. I stand corrected. Not sure why I wrote his name there. I replaced it with Frank Jackson’s. (I do disagree with Searle on other things, but that’s a different topic.)

      Excellent points on the rest. I particularly like the abstraction one. Yes, consciousness is an abstraction, a category, a word we use to refer to a suite of phenomena. Treating it like a thing in and of itself is like treating humanity as a thing, rather than a category.


      • jayarava says:

        One can only disagree with Searle on naive realism. I was gutted when I realised he was serious about that, because I found his books on the mind and society brilliant.

        Yes, and the thing that the suite of phenomena have in common is that “I” experience them. And we already know that the self is a virtual model maintained by the brain. So the question becomes, “Is this a coherent category?” I’m not sure, but I’m starting to have doubts.


        • I think it’s a coherent category as long as we remember what it is. My issue with calling the self an illusion is that we then need to come up with another term for that internal model we maintain. But I could see the point of view that shedding the old vocabulary clears out a lot of misconceptions.


  3. Paul Torek says:

    Related: John Heil’s explanation of why the “hard problem” need not be as hard as it’s often made out to be.

    I also love your point about the communication between perceiving centers and planning centers of the brain: this hypothesis predicts that we should find an epistemic gap between scientific descriptions of brain activity, and the qualia they correspond to. And lo and behold, we do find such a gap! In other words, (any intelligent version of) physicalism predicts the very fact that is most often used against it.

    It’s not often that a theory is rejected because it makes a successful prediction. But there you have it.


    • Thanks Paul, both for the link to Heil’s article and your comments about my other post.

      You actually focus on an aspect of the communication hypothesis that I failed to note, that there won’t necessarily be communication from some regions of the brain, notably the early sensory processing regions. For example, in visual perception, we only consciously get the more prepared versions from later processing regions, not the raw sensory data coming in from the optic nerve. I suspect the circuits from these early regions to the movement planning regions don’t exist, and so no matter how hard we introspect, that raw sensory input is always denied to us.


      • Paul Torek says:

        This comment disappeared, then got called a duplicate; I’ll try again.

        Good point about early raw sensory input. I had a different gap in mind: I mean the epistemic gap between the whole brain process description of perceiving red (for example), versus why that feels this way (subjective-red). Contrast the heliocentric explanation of our perceptions of sundown – it explains everything we observe. We can think through the explanation and see, oh yeah, the angle from us to the sun will dip below the horizon, just as we see. Whereas the neural description doesn’t explain why subjective red feels like this.

        What would it take to explain “feels like this”? The language of the explanation would have to call up a memory in the hearer, a memory of subjective-red. So, something about our perception of neurons and brains would have to rely on the very same sensory-processing region that communicates redness. But that would be bizarre and wildly improbable. It would be like looking under the hood of a car and finding that the very same machinery that constitutes the engine was also the radio receiver. And we have done some looking under the hood (i.e. skull) and that’s not what we’re finding.

        Our scientific hypothesis can’t provide a non-circular explanation of why a particular quale feels like this. But it does provide an explanation of why there are no non-circular explanations of subjective feelings. And that is what is needed.


        • Sorry Paul. Looks like the spam folder ate the original. If that ever happens again, and you don’t feel like re-typing a comment, just post a short followup comment letting me know, or drop me an email, and I’ll fish it out.

          Thanks for the clarification. Well said!


  4. Mark Titus says:

    Take a look at the essay “McDonald’s Child” in the left column of this website. It identifies some of the biological/neural mechanisms by which experience is created and becomes conscious.


    • Thanks Mark. Your essay reminds me of a conversation I had with someone a few years ago, when they asked how we have the sensation of red. After I described the concept of red-sensitive retina cones and the process where signals propagate up the optic nerve, are processed in the occipital lobe, and generate neural patterns in the rear temporal lobe and parietal lobe that interact with regions in the prefrontal cortex, they responded with, “Yes, but why do we experience red as redness?” I asked how else they thought it should be received, and pointed out that it has to be communicated some way. They responded by asking why that way in particular.

      In other words, some people simply aren’t prepared to receive the explanation, or at least an explanation other than what they’re hoping for.


  5. john zande says:

    This is the question of why we think there is a hard problem.

    That got my head nodding. Like you, provided I understand your position correctly, I don’t tend to think it’s really complicated at all.


  6. Callan says:

    But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.

    Okay, here’s a devil’s advocate argument that might pose some challenge.

    Just as much as you yourself do not have any kind of access to yourself to prove this statement you’ve made. You are working on faith that this is correct – being unable to access the construction of experience means you can’t confirm that any such construction is taking place. You have a theory that inwardly you cannot confirm and have placed faith in it (stated as ‘having faith’ for controversy – arguably we have faith that we won’t be run over by a bus or stabbed by our loved ones, and it’s not controversial that we invest in that kind of faith. But I said I’d be a devil’s advocate, so there’s some provocative wording for you).

    So how much of this lack of inner access can you actually know for sure…given your lack of inner access to determine you have a lack of inner access? Or are you running off faith that it is the case?

    And when the other guy runs off another faith, can you be so ready to jump on their unconfirmable intuition about themselves?

    “But the science! The brain scans!”

    All internally unconfirmable. The screen can show you what’s in your brain, but if you can’t feel it, how do you know it’s true?

    My main point is a pain in the ass humility point where a lack of internal access means you or I are no better off than the guy who surmises a supernatural soul is involved. We’re equally as oblivious as to what is involved as that guy.


    • I think we always have to be cognizant that the current explanations may not be the best ones, and that more accurate ones, ones that make better predictions, might replace them. That kind of epistemic humility always needs to be there.

      That said, I don’t think the faith (I prefer the word “trust”) is equal on both sides. First, we have a lot of neuroscience data that tells us that lower level processing takes place all over the place. I’m sure you’ve heard the example that we have a hole in our field of vision which we never perceive, because the brain edits it out. Or that we perceive the feel of our hand touching something and the sight of that touch as happening concurrently, even though the visual signal reaches the brain much faster than the touch one. And of course, we know that autonomic processes, such as heartbeat regulation, breathing, hormonal regulation, etc., all happen well below the level of consciousness. So, low level processing happens.

      But many dualists would probably accept that. The real bone of contention might be where the upper level processing happens. Here, brain scans are only the latest iteration of something that’s been going on for a long time. We have a century and a half of medical data correlating capability loss with specific brain injuries. If dualism of some kind were true, we’d expect some aspect of the mind, of consciousness, to be immune, but the data doesn’t show that. V. S. Ramachandran wrote a whole book describing all the weird aspects of the mind that could be knocked out with such injuries.

      And the idea of higher level processing not having access to lower level mechanisms will resonate with anyone who works in software development. Software applications don’t have access to the lower level processing of the operating system, or the hardware. In other words, the application’s models don’t include what happens at those lower layers. It’s not hard to imagine a similar dynamic happening in the brain, where conscious cognitive functions don’t have access to lower level ones.
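      As an illustrative sketch only (the class names, the “perceive/introspect” interface, and the wavelength cutoffs are all invented for the example), the same dynamic can be mocked up in a few lines of Python: the upper layer can report on itself, but the machinery that produces its results never shows up in that report.

```python
class _LowLevelVision:
    """Lower layer: processing the upper layer never has access to."""

    def process(self, wavelength_nm):
        # Crude stand-in for retinal/occipital processing; the 620-750 nm
        # range for "red" is just a rough textbook figure.
        if 620 <= wavelength_nm <= 750:
            return "red"
        return "not-red"


class ConsciousModel:
    """Upper layer: exposes only finished percepts and a self-report."""

    def __init__(self):
        # Name-mangled attribute: effectively hidden from the public interface.
        self.__vision = _LowLevelVision()

    def perceive(self, wavelength_nm):
        # Callers get the final category, never the mechanism behind it.
        return self.__vision.process(wavelength_nm)

    def introspect(self):
        # The self-model: only the public interface shows up.
        return [name for name in dir(self) if not name.startswith("_")]


model = ConsciousModel()
print(model.perceive(680))  # the finished percept: "red"
print(model.introspect())   # no trace of the lower-level machinery
```

      The point of the sketch is that `introspect()` is doing real reporting, it just reports on the wrong level of description, much like introspection on experience.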


      • Callan says:

        The thing is, with the blind spot in the eye, you can actually test that. Can you test any of this science stuff internally and tell that your brain works this way or that, as suggested (with mounting evidence) by scientific investigation? The answer is no. We’re stuck in the same place as those claiming a supernatural soul or a dozen other interpretations of events.

        I mean, with more and more science out there you could arrange to have a needle put into your brain and fire a pulse of electricity to have certain response effects. But could you tell that can be done from the inside? From self-reflection? No.

        Without such access, does it count as any better understanding of the professed state of ‘consciousness’ than the dualist?

        The explanations require so much trust as to be to the point of faith. That’s most of your hard problem there – that leap of faith.


  7. Steve Morris says:

    I may perhaps be too literal in my thinking, or insufficiently subtle and agile of mind, but I have never once been able to grasp why the problem of consciousness is a problem at all, hard or otherwise. I suspect that you have nailed it in your reply, but I can’t even get to the point where I understand what the fuss is all about.


    • Steve Morris says:

      I have the exact same problem with spirituality. People try to explain to me what it is, but I never understand. It always sounds like religion or mysticism. I doubt that it exists at all. Same thing here.


    • john zande says:

      In full agreement


    • For a long time, I totally didn’t understand what the hard problem was about. Eventually I made a strong effort to get that understanding. So many people were troubled by it that it felt like I must be missing something.

      I think I do understand it now (although the language has never been particularly concise or concrete). The answer has always seemed obvious to me. Introspection is not to be trusted. Our first strategy with any mysteries or paradoxes it seems to produce should be to rule out that they are mere apparitions.

      Neuropsychologist Elkhonon Goldberg, in one of his books, made the quip that the only reason we still talk about consciousness is, “Old gods die hard.” I think consciousness remains a subject because a lot of people want it to be something more than it is.


  8. Michael says:

    Hi Mike,

    I think you’ve said this exceedingly well, and that you describe a coherent view of the world. It is also the case, I believe, that your conclusion is your premise. Unfortunately I think that is the very nature of this topic of discussion. Those who see this differently simply have a different premise.

    On the question of the hard problem, I would just say that when we first watched balls bouncing down the street, knowing nothing about physics, as nascent scientists we could formulate a dozen hypotheses for why bouncing balls behave as they do. Then we could test the predictions of our ideas on another ensemble of bouncing balls, and what’s more, we could invite everyone in the world to observe the same data we have in our possession.

    I think it is rational to admit that the problem of consciousness is another category of problem. The reason is that no one else can observe another person’s consciousness. I simply cannot test with 100% confidence any rules I may formulate about the state of your experience, and the state of your physical body, without hypothesizing a one-to-one coupling of subjective experience to the physical system. It is, I think, at least for the time being, quite simply the case that the possibility of a decoupling between the expressions we make on our faces and the experiences we are having at that particular moment in time does exist.

    The hypothesis that underpins your article is twofold I think. The first aspect is that a necessary and precise coupling exists between subjective experience and the state configuration of a body, and second that this coupling has a hierarchy to it, which you describe as a real and an illusory component. Taken together these hypotheses produce a coherent view of the world.

    The strength of argument that science offers us is evidence of the first premise: e.g. evidence of the coupling, insofar as we have been able to map various correlations. What science cannot yet ascertain is whether or not the hierarchical disposition is accurately posed. For instance, if I put a glove on my hand, I could confirm that everywhere the glove goes, my hand goes with it. There is perfect coupling, or correlation, and if our view was sufficiently limited, we could suspect the glove was driving the hand. It’s hard to envision making that mistake, but it’s merely a simple analogy.

    We have a preponderance of evidence at our disposal to suggest that balls do not bounce as they do because that is how they want to bounce. But when it comes to our subjective experience I think it remains the case that we cannot objectively study the matter. I don’t believe we have the capability to concurrently examine our subjective experience and the state configuration of our body without dragging into the problem the unreliable nature of our perceiving. How do we do that? So the only way to approach the problem scientifically is to make the two claims above: to test for a perfect coupling of material states and subjective experience, and to assert the primacy of material states.

    It is a beautiful and powerful set of claims with considerable predictive power, and yields a scientific explanation of consciousness. But it is obvious why people who disagree won’t accept a scientific answer, and that is because it assumes the outcome through the second premise. What you’re left with is Occam’s Razor. At the present time, I respect opinions on both sides of the line, and look forward to what comes next.



    • Mark Titus says:

      Suppose the afferent neurons (the ones going from receptor cells to the brain) on the backs of two people’s hands were somehow grafted together. Isn’t it likely that a stimulus to a hair follicle on one of the person’s hands would produce a tickle in both of them? If it wasn’t precisely the same tickle (due to higher order neural processing), they could at least have an interesting discussion of “what it is like to experience that tickle.”


      • This reminds me of the conjoined twins who seem to share sensory perceptions.

        Of course, we from the outside can’t know what it’s like to be them in their shared state, although who knows what future technology might provide.


        • Mark Titus says:


          An utterly riveting article. Thanks so much for linking it—and in such a creative format!

          The “tickle” example in my comment was suggested to me by the neurobiologist Gordon Shepherd at Yale when I emailed him asking if he knew of any attempts to graft neurons. I’m surprised—in fact a little shocked—that he seemed unaware of cases of craniopagus twins. Sharing a “tickle” is small potatoes.

          I think the takeaway from the article (and your post) is that there really is no specific philosophical “problem of consciousness,” at least not in terms of its origins and general ontogeny. The problem is how to use it properly, especially in view of the fact that it perishes with our bodies.



    • Hi Michael,
      As always, I very much appreciate your kind words and thoughtful comments.

      If I understand your remarks correctly, you agree that science can corroborate correlations between subjective experiences and body/brain states, but has yet to validate the hierarchy of processing.

      On the first, I actually have to admit here that, while I think science is making progress in this direction, I actually don’t think it’s there yet. For example, it’s not clear to me that we yet know what the brain state is for the experience of redness.

      Ironically, I think we have better information on the lower level processing. For example, we know there are cones on the retina that get excited by certain wavelengths of light. A portion of these cones get excited by the range of wavelengths we generally define as red. This causes signalling to go up the optic nerve to the thalamus and then to the occipital lobe.

      We have a good idea of where various combinations of shapes, textures, and colors get resolved into particular categories of objects. We also have a good idea of where any motion of these objects may be identified and processed.

      This all happens below the level of, or outside of, consciousness. I’m generally not conscious of my mind resolving a fish shape into the fish category, just the final perception, unless there are conditions that make this resolution difficult, such as poor lighting or other confounding factors. But we know from brain injury studies that this processing happens, since injury to particular areas deprives people of the ability to make those resolutions.

      But where does the brain determine that “I am seeing red” or “I am seeing a fish”? Where does it become conscious knowledge? Some people argue that it happens in the parietal lobe, others the prefrontal cortex. Some insist it doesn’t happen until the introspective mechanisms in the prefrontal cortex kick in. Myself, I find compelling the view that it happens collaboratively between several regions, but orchestrated by the prefrontal cortex. But at this stage it’s mostly (informed) speculation.

      It seems to me that these gaps in knowledge do give some comfort to those who see subjective experience as non-physical, but the gaps seem to be narrowing every year.

  9. jayarava says:

    Some further thoughts on this. I’ve recently been reminded that the problem with materialism is that it is usually combined with reductionism. In reductionist materialism, the “real” is only the lowest layer of the universe (what happens at the Planck scale) and everything else is some kind of illusion. And because the lowest level is deterministic, reductionist materialists argue that the entire universe of 100 orders of magnitude of mass, length, and energy is also deterministic. And then they try to explain consciousness within this straitjacket.

    Consider a protein catalyst: a macromolecule made from hundreds of amino acids chained together. However, the catalytic activity has nothing to do with the chemistry of amino-acid monomers or polymers; it arises because the protein folds into a particular shape. It is the structure of the protein as a whole that gives it the catalytic property. The atomic theory of matter could never predict this. It’s not even in the remit of the atomic theory of matter (let alone quantum field theory!).

    Once we move along the scale to a living organism, structure is playing the decisive role and substance is almost irrelevant. Our atoms and molecules are constantly being swapped out for identical components without our noticing or needing to notice. This is pretty much true at the level of cells. Structures have the property of persisting over time even when components are replaced (Theseus’s ship is still a ship no matter how many planks are replaced).

    Yes, we can see the brain as a collection of atoms or cells, but this tells us little. What makes the brain interesting is the structure, the topology of it. A reductionist approach will certainly gain us knowledge, but not the kind of knowledge that allows us to replicate such a system.

    A biologist may learn something about the organism they are interested in by reducing it to its component atoms. But clearly they will learn more by looking at its molecules. And more still by looking at its organs. And still more by looking at a whole living organism. And most by seeing the organism alive and interacting with its environment. It’s not that reductionism tells us nothing about complex systems. We do gain some knowledge from destroying structures. The problem is that it is the structures themselves that make the higher levels interesting and if we don’t pay attention to that, then we are missing at least half of the universe.

    The phenomena that interest us are clearly emergent properties of an embodied brain (and are in the middle of the layers of scale in the universe). The concern some of us have over reductive materialism is that it’s simply the wrong approach for understanding such phenomena. We need anti-reductive (or emergentist) approaches, whether “materialist” or some other view. Personally I think mono-substance ontologies make the most sense. What most of us who get pejoratively called “materialists” really think is that there is no ontological distinction between mind and matter/energy, even though there might be a useful epistemological distinction.

    Part of the reason ontological dualists are pessimistic about understanding what they call “consciousness” is precisely that dualism makes the mind axiomatically unknowable. A year after David Chalmers published that article on the Hard Problem, he published another advocating dualism. In 1995 he was mildly optimistic about solving the HP, but by 1996 he had completely abandoned that optimism in favour of a pessimistic dualism which said we would never solve it.

    As ontological monists we tend to be more optimistic, because the successes in understanding what stuff is made of and made into may be extrapolated to other manifestations of the one kind of stuff. Our methods are applicable across the board and thus we can expect progress.

    The problem is that substance monism is counter-intuitive: there is a clear epistemic mind-body difference, and people assume that this is because of an ontic difference (I argue that it isn’t). Most people throughout history have been some kind of mind-body dualists. Most people I know still are. But it is difficult to have a conversation about it because neither side really understands their own views very well, i.e. many don’t adequately distinguish epistemology and ontology, and within that don’t see the reductionist/antireductionist distinction clearly. Add to that my previous comment about consciousness qua abstraction and one begins to see why there is no consensus. The commitment to legacy concepts in philosophy and science seems to me to be the greatest stumbling block to progress in this area.

    • Well said.

      I do think we have to make a distinction between reductionism and eliminative reductionism. I count myself in the first camp, but not the second. While I think everything eventually reduces to elementary particles, I don’t find it productive to dismiss higher level phenomena. The higher level phenomena aren’t just jumbles of the lower level ones. The structure of these phenomena, the organization, the patterns, matter.

      Epistemically, Sean Carroll pointed out that we can’t use quantum theory to predict the periodic table of elements, much less stars, planets, galaxies, and the rest of the universe. The fact is that our minds have limited capabilities. We construct models to understand reality. Some of those models feel primal to us, because they’re innate or ones we develop very early on, such as models of our bodies, homes, and social environment. Others require substantial metaphor, such as our understanding of the very big or very small.

      The main thing is that we have to switch models as we scale up or down in size or complexity. When thinking about particle physics, we employ particular models for that. We employ different models for chemistry, biology, or social gatherings.

      Even in computing, I hold certain models when I’m thinking about hardware, different models when thinking about device drivers or low level operating system functionality, and yet different models when thinking about automated business processes. In this case, there’s no doubt for anyone that the higher level processes are completely built on the lower level ones, but dealing with the higher level ones productively requires employing the right model.
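
      That layering can be sketched as a toy program. Everything here (`Hardware`, `Driver`, `BusinessProcess`) is a hypothetical name invented purely for illustration, not any real system; the point is only that each layer is built entirely on the one beneath it while being reasoned about in its own vocabulary.

      ```python
      # Toy illustration of abstraction layers: each level is entirely
      # built on the one below, yet is best reasoned about in its own terms.
      # All names are hypothetical, chosen for this example only.

      class Hardware:
          """Lowest layer: thinks in registers and values."""
          def __init__(self):
              self.register = 0

          def write(self, value):
              self.register = value

      class Driver:
          """Middle layer: thinks in device commands, not registers."""
          def __init__(self, hw):
              self.hw = hw

          def send_command(self, command):
              # Translate a symbolic command into a register write.
              self.hw.write(hash(command) % 256)
              return f"sent {command}"

      class BusinessProcess:
          """Top layer: thinks in workflows; it never mentions registers."""
          def __init__(self, driver):
              self.driver = driver

          def archive_record(self, record_id):
              return self.driver.send_command(f"store:{record_id}")

      proc = BusinessProcess(Driver(Hardware()))
      print(proc.archive_record(42))  # prints "sent store:42"
      ```

      The top-level call is completely built on the lower layers, but working productively with `archive_record` never requires thinking about registers.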

      So I think you’re right. Substance dualism is intuitive precisely because we employ a different model for mind than we do for brain. It’s completely rational for us to do so. And the exact connections between the two were, for a long time, completely obscure. Even today, as I admitted to Michael above, the mapping between them is far from complete. Even if it ever is, the connection between the two won’t feel like one arises from the other.

      And the model for our own consciousness is innate, an evolved adaptation. But there’s nothing to map that model, at least intuitively, to the others. There never will be. We’ll always be faced with having to look beyond our intuitions, as we have to do with other aspects of science.

      • jayarava says:

        Hmm. Of course all structures can be reduced, but all the knowledge one gains in the process is knowledge of substance. Knowledge of structure, and of phenomena that depend on structures, is ipso facto eliminated by reductionism. Any reductionist ontology leads to methodological reductionism, and this is always eliminativist. If you break a brain into pieces you won’t find a conscious mind anyway.

        However, since we are interested in high levels of organisation and structure, reductionism can only shed a limited amount of light on the subject.

        In the classic example of the Ship of Theseus, the focus is usually on the identity of the ship, which is eliminated by reductionist methodologies. However, no matter how many planks we replace, the ship is still a ship. The structure has integrity over time and has properties that are more than simple aggregations of planks etc. A pile of wood may float, but a raft is never going to give you the advantages of a boat in terms of carrying capacity and hydrodynamic efficiency.

        If one takes a reductionist ontology as axiomatic and goes around dismantling everything, then of course one gains knowledge of parts and substances. And this is taken as an endorsement of reductionist methodologies. But even the most hardcore reductionist has to turn this on its head to understand systems. And they all do, even if only tacitly.

        Any complex phenomenon must be seen through two lenses: substance reductionism and structure antireductionism. The brain is made of neurons, but it is not a mush of disordered neurons; it is a network of neurons with a very specific topology. Without that topology the neurons are not a brain. For example, the 100 million neurons that line our gut do not constitute a brain, even though that is about how many neurons a cat brain has.

        It’s not enough to accept weak emergentism and switch reductive models as we change scale. Of course we must do this. And we all do it, though few people these days talk about this, and even Sean Carroll, who does, maintains a commitment to a reductionist ontology, which we can see from his frequent references to “fundamental reality”. But in itself this is not enough.

        Because of paradigmatic reductionism, particle physicists tend to dominate the discussion about metaphysics. The axiom is: “smaller is more real”. But this is just an ideology. It’s just something that people with a metaphysical commitment to reductionism assert. It’s not true. By most standards of “real”, structures that persist over time and can act as causal agents are every bit as real as simple objects. From a human point of view, macro objects seem more real because we can interact with them directly. Biologists can also tell us about reality. Sociologists can tell us about social reality. I make no claims for psychology which I am currently very dubious about.

        To declare oneself a reductionist is to announce the poverty of one’s approach to any given problem. It states that one’s ontology limits the methods one is allowed to use. This is more religion than science. But the fact is that you almost certainly use antireductive methods, but cannot talk about it because reductionism has impoverished the philosophical vocabulary and made systems talk taboo (the very term antireductive says it all really). Systems-oriented sciences are still seen as on the fringe of mainstream science precisely because of this hegemony of reductionism.

        Methodologically, substance reductionism gains us knowledge of parts and substances. Structure antireductionism gains us knowledge of systems. Structure reductionism eliminates potential knowledge. Substance antireductionism leads to false positives like mind-body dualism or panpsychism.

        Doing philosophy should not be a situation in which one takes sides, defends a goal, and attacks the opposition. It should be a situation in which one uses the best tools for the job of building something of value.

        The reason I think that substance dualism is intuitive is not to do with models, since it emerged at a time when the world was all on one basic scale as far as anyone knew. It was long before the telescope and microscope opened up new vistas for us to ponder. It’s more to do with how the brain evolved and erring on the side of caution: it’s better to avoid 100 false tigers than to fail to avoid one real tiger. So we over-determine objects as having agency. This allows us to imagine disembodied agency. It combines with certain types of weird experience – the paradigm being the out-of-body experience – and with the fear of death (plus one or two minor factors) to make disembodied minds minimally counterintuitive (to use Justin L. Barrett’s phrase). The discovery of different scales has not affected this in the vast majority of people. Even as someone with a background in science, I have to admit I find very small and very large things hard to imagine – and I’ve spent many hours peering down microscopes and proving Newton’s Laws (in a non-relativistic frame).

        I do agree that our eventual understanding of minds will be counter-intuitive for the majority of people. For example, my standing assumption these days is that anyone still talking about “consciousness” simply hasn’t understood the problem they are trying to think about. So most philosophers are going to be left scratching their heads when we figure out what makes a mind and mental states.

        • I disagree that reductionism is necessarily eliminativist. Just because I learn about the underlying components and the way they interact does not mean that I cease to see the whole system as existing. Having read him at length, I’m sure Carroll, and many other self-described reductionists, would agree.

          But this is a matter of definition and I’m not particularly keen to debate definitions. I do agree with most of your remarks against the eliminativist version of reduction.

          I think to understand a system, you must be willing to reduce it, to pierce its veil, to dissect it. Until you can do so, I don’t think you can truly say you understand it. Sure, you can gain a surface level understanding without that reduction, but in general I find the anti-reductionist stance unproductive. It prematurely ends investigation and ensures that the phenomena in question will not be understood, except perhaps in high level terms.


        • Paul Torek says:

          There’s a third thesis that often gets labeled “reductionism”, besides your two (substance reductionism, structure reductionism). Let’s relabel it “nomological micro-compatibility”. This thesis says that there are no exceptions to low-level laws (how electrons behave for example) due to interference from high-level laws like laws of biology, economics, etc. I like this idea. I call it “compatibility” to distinguish it from the idea that high level laws are logically implied by low level laws, which seems dubious, and which I’m pretty sure I’ve seen labeled “nomological reductionism”. (Un?)fortunately there are no label police, so people can label ideas however they want, with no consistency.


    • Hariod Brawn says:

      ‘ . . . there is no ontological distinction between mind and matter/energy.’

      Yes; so too dichotomous apprehending i.e. subject(ivity)/object(ivity). Call it a mind-construct, call it all a brain construct — makes no difference; it’s all a construct, an abstraction from (or rather, ‘along with’) what is actual, and which so-called consciousness can never ‘touch’ as if it (consciousness) were ‘here’ and the actual ‘there’.

      • jayarava says:

        You missed out the portion where I said that there is epistemic distinction. We perceive mind via different senses than we perceive sights, sounds, etc.

        I’m quite familiar with this kind of cant (being an ordained Buddhist and published scholar in the field of the history of Buddhist ideas) but I no longer find it very interesting or informative.

        The fact is that however enlightened people experience their perceptions, it doesn’t change the basic ontology. The light entering their eyes still only enters their eyes and is still only processed by their brain, producing an experience that only they experience. Whether or not that person accepts it, the fact is that the experience created by their brain is subjective. I have no more access to the Buddha’s mind, than I have to the nutrients of the food he eats. So the distinction is real and it continues to impose real limitations.

        You find this when you talk to people who report having no sense of self. There is no problem with them knowing whose turn it is to talk, or what things are being referred to, or even what objects are being pointed out. Ontology hasn’t changed; the mountain is still a mountain. What has changed is the meditator’s epistemology, their understanding of experience. And the value they place on certain aspects of experience.

        The fact that so many people try to turn this epistemic insight into an ontological doctrine is indicative of the poverty of religious thinking and nothing else. Instead what appears to happen with enlightened people is that certain types of epistemic distinctions are not made and that certain types of conscious processing of information become unconscious (though still accessible to introspection). They all report that this is a more satisfying state of affairs, which I do not doubt. But none of them becomes confused about how to walk down to the shops and buy a loaf of bread. They all continue to live, to use language and numbers, to socialise, and very often to work. None of which would be possible if the traditional ontologies were taken seriously.


        • Hariod Brawn says:

          I find your tone rather insulting, but never mind, as that appears to be your intent.

          What do you mean by ‘subjective’ — that it only occurs as a phenomenon inside the cranium?


  10. I’d like to challenge Michael Shermer’s position. He says that science will never solve consciousness, freewill, and God, not because of dualism (mysterian), not because of cognitive insufficiency (new mysterian), but rather because these are inherently unanswerable questions (his own “final mysterian” classification). Well let’s take a look.

    First, what is meant by “solve”? Ideally this would be to understand reality, though there’s only one answer that he can ever attain in this regard, namely that he personally exists in some manner (from Descartes). So what he must truly be after are functional principles from which to work regarding God, freewill, and consciousness, not the ultimate solutions which he’s fundamentally deprived of. Furthermore I’ll suggest that my single principle of metaphysics would be wonderful for him to learn in this capacity. It goes like this:

    To the extent that causality fails, there’s nothing to figure out anyway.

    This is to say that because “figuring out” demands causal dynamics rather than the converse, figuring out that things don’t function causally inherently violates its own premise. So if Michael Shermer wants to explore reality through reason rather than faith, then he can only metaphysically presume that there is no God — and may ultimately be wrong about that. (Some theists propose that their God provides a realm that’s otherwise causal, which would be fine for us as functional naturalists, though the full premise could only exist through faith rather than reason, or metaphysics rather than physics.)

    Next is freewill, which causality mandates does not exist. But this is from a perfect perspective rather than that of a human. From a mere human perspective in a vast causal realm, yes we can say “freedom exists”, even though greater perspectives should naturally tend to dispel such illusions. (It disheartens me that academia hasn’t quite grasped this yet.)

    Then finally there is consciousness. How are phenomenal experiences produced in a causal realm? Such a practical understanding could conceivably be figured out however. If we were to produce a machine which outputs phenomenal experience for an associated second form of computer to experience (and thus these experiences motivate the second computer to try to feel good rather than bad), then we should also effectively solve this “hard problem”. I have no confidence that humanity ever will produce such a machine, though contra Michael Shermer, it’s certainly not off the conceptual table given the metaphysical premise of causality, or reason over faith.

    • Hey Eric,
      Shermer is a skeptic and an atheist, so I think he’d agree with your conclusions about God.

      I’m not sure, but on free will, I think he’s a compatibilist. But compatibilists see libertarian free will as false, so again, I think he’d be on the same page. (The questions then become what the scope is of freedom we’re talking about, whose or what’s freedom, and whether social responsibility remains a useful concept.)

      “How are phenomenal experiences produced in a causal realm?”

      I don’t know how Shermer would respond to this. My response is that I think in considering phenomenal experience something that is “produced”, you may be introducing an implied assumption of dualism. In my mind, phenomenal experience, whatever else it may be, is information. I think it’s just part of the information processing of the brain going about its movement planning functions.

      In any case, I agree that it’s scientifically approachable. But I’m more optimistic that we will eventually produce a machine whose capabilities and inclinations trigger our intuition of consciousness / humanness, although there will always be those who, clinging to the version of consciousness I dismiss in the post, insist that the machine doesn’t have the real version.

    • But Mike, the point isn’t that he and I see eye to eye on some things, which I suppose we do. Shermer claims to be a “final mysterian” in only these three regards. I’m saying that he should be a final mysterian in all regards but one, or something which would effectively rob his position of novelty and potential usefulness. Then when we step down to effective rather than ultimate understandings, I’m proposing that my single principle of metaphysics should help quite a bit regarding his three supposed unanswerable questions.

      On God, to the extent that our realm functions supernaturally, reason does not apply. Thus a person who seeks to explore reality through reason rather than faith, needs to effectively presume that there is no God in order for their “reason metaphysics” to be maintained. One down.

      On freewill, I’m surprised that people still consider this question puzzling. Causality mandates that it cannot exist ultimately, though humans don’t have anything near an ultimate perspective. Thus from such a crippled position freedom can usefully be said to exist. What is the scope of this freedom? Freedom here is a function of ignorance — the less that a given situation is understood, the more freedom that should be apparent. What can be considered free in this regard? Something that is conscious, and from a non-omniscient perspective. Social responsibility is definitely a useful concept from this position, given how far the human happens to be from “all knowing”. Two down.

      Given my naturalism I simply can’t be “introducing an implied assumption of dualism” by means of the “produce” term. We use this term in all sorts of natural ways, such as saying that hydrogen and oxygen “produce” water. I don’t know what physical dynamics create phenomenal experience specifically, but apparently a natural non-conscious computer can output the stuff and so drive the function of a conscious form of computer. Three down.

      I certainly agree with you that phenomenal experience provides information. Toe pain not only tells me that I have a problem there (sense information), but motivates me to protect it from getting worse, or provides valence from which to drive my conscious processor. But how does a computer which is not conscious output such a conscious input? We may never get the engineering worked out there, though I do appreciate your optimism. (Apparently Garth is optimistic about that as well, or at least while high!)

      • Eric,
        Sorry, didn’t mean to imply by my remark about “produces” that I thought you were abandoning naturalism, only that the way you phrased it made it sound like an unintentional dualistic notion. But I’ve pushed back often enough on your concerns about my specific language that I’ll concede the point. Sorry for being pedantic.

        “But how does a computer which is not conscious output such a conscious input?”
        Okay, bear with me, since, despite what I just said, I’m going to make a language point here. Let’s take your language just prior to this question…

        “Toe pain not only tells me that I have a problem there (sense information), but motivates me to protect it from getting worse, or provides valence from which to drive my conscious processor.”

        …and replace it with this language:

        “Toe damage signalling not only tells the central system that there is a problem in that location, but activates an action plan for evaluation by the movement planning subsystem to protect it from getting worse, or provides weighting from which to drive the movement planning function.”

        Does the second one seem more approachable from a technological perspective? Is there meaning in the first that is lost in the second? If so, what?
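
        As a rough sketch, the reworded description maps onto code fairly directly. All names here (`movement_planner`, the signal fields, the weightings) are hypothetical, invented purely for illustration:

        ```python
        # A minimal sketch of the "toe damage signalling" rewording:
        # a signal carries a location and an intensity; the movement
        # planning function weighs candidate action plans by that
        # intensity. All names are hypothetical, for illustration only.

        def movement_planner(signal):
            """Turn a damage signal into the highest-weighted action plan."""
            location, intensity = signal["location"], signal["intensity"]
            candidate_plans = {
                "ignore": 0.0,
                f"protect {location}": intensity,        # weighting scales with damage
                f"inspect {location}": intensity * 0.5,
            }
            # Select the plan with the highest weighting.
            return max(candidate_plans, key=candidate_plans.get)

        signal = {"location": "left toe", "intensity": 0.8}
        print(movement_planner(signal))  # prints "protect left toe"
        ```

        Nothing in this sketch “hurts”, which is exactly the question at issue: whether the subjective version carries something beyond this kind of signalling and weighting.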

    • Mike,
      You said, “In my mind, phenomenal experience, whatever else it may be, is information. I think it’s just part of the information processing of the brain going about its movement planning functions.”

      From there I agreed, but did so by putting things explicitly in terms of my own “dual computers” model. Here all elements of the conscious computer exist through output of the non-conscious computer. Thus the phenomenal experience of toe pain is somehow created by a vast non-conscious computer, for a distinct conscious computer to experience as input. This provides both location, as well as personal punishment (or anti-value). Thus from my depiction, toe pain certainly provides conscious information, though I doubt that you meant it quite like that.

      You now ask me to consider the following potential depiction of my position,

      “Toe damage signalling not only tells the central system that there is a problem in that location, but activates an action plan for evaluation by the movement planning subsystem to protect it from getting worse, or provides weighting from which to drive the movement planning function.”

      Your first question here asked if this seems more technological, and to me yes it definitely does. This seems like how one of our non-conscious robots could be programmed. If something hurts me then I wouldn’t say that I’m “told” that there’s a problem. Instead I’d say that it just f—ing hurts! I interpret this as the non-conscious computer in my head punishing the conscious one (or me). Furthermore the “weighting” term seems extremely robotic. Presumably a damage weighting of “5”, for example, would incite associated algorithmic function. I do believe that the non-conscious computer in my head constantly does this sort of thing, but in addition I believe that it outputs a second form of computer that has true teleology, or is motivated to feel good and not bad. So it’s this second computer that I’m not getting a sense of from your iteration.

      I’ll also say that this conscious form of computer is actually the third of the four that I theorize. The first concerns genetic material and functions on the basis of chemical properties. I believe that this computer emerged with “life”. The second is a whole organism processor and functions on the basis of neurons. The third is outputted by the second, but functions on the basis of value rather than neurons. Then the final is a technological variety which is teleologically fabricated. Ours generally function by means of — not chemical properties, not neuron systems, and not value — but rather electricity.

      • Eric,
        On mapping back to your model, no worries. We always have these discussions with each of us holding our own model of what we’re discussing. As we’ve noted before, I think there’s enough overlap that we can have many productive discussions. And the differences can be what make the discussions interesting.

        My questions about the description versions are designed to get at what in particular about consciousness you think makes it so difficult for a technological system. Yes, my revision does sound pretty robotic. But what is the missing difference? We agree that substance dualism is false, so what is it in the subjective version that is missing from the robotic version?

        (Note: I do think something implicit in the subjective version is missing from the robotic one, and have my own answer, but I’m curious what yours might be, particularly since my missing ingredient is technologically achievable.)

        “If something hurts me then I wouldn’t say that I’m “told” that there’s a problem. Instead I’d say that it just f—ing hurts! I interpret this as the non-conscious computer in my head punishing the conscious one (or me). ”

        But isn’t something “f—ing hurting” information? Isn’t the receipt of that sensation information being transmitted about something? Isn’t pain communication from one part of us to another, signalling a situation? Even if it is punishment as you characterize it, isn’t punishment itself a form of communication?

        Certainly pain is a far more primal and visceral form of communication than being told something with language. Language has to be encoded by the sender from its imagination and decoded by the recipient into its imagination. That does make it qualitatively different, more abstract than something received as pure sensation. But isn’t the sensation, fundamentally, still communication? If not, why not?

    • Yes Mike, when something hurts it inherently provides input and is thus informative. That was actually my original agreement with you. Then from there I packaged things up with my model and so we’ve been talking about that as well.

      What the robot is missing from my model is conceptually simple. It’s missing the conscious second computer which is outputted by the neuron based computer. This conscious computer is not motivated through (1) chemical dynamics, (2) neuron systems, or (4) electricity. This second computer functions on the basis of a punishment/reward dynamic, or value. I consider value to be the strangest stuff in the universe and have no idea how a non-conscious computer outputs it to drive the function of a computer that experiences it. As before, I remain an architect rather than an engineer (not that any modern engineers can answer this question either).

      If that’s clear enough, what would you say that the robot is missing? And do you consider your answer to conflict with mine?

      Your main question asks why I’m pessimistic about humanity building conscious forms of computer. Well first let me say that once we discard vast swaths of “futurism” nonsense, I should actually be one of the most optimistic people around. Beyond Shermer’s “final mysterian” position (where God, freewill, and consciousness are off the table), note that mental and behavioral scientists in general seem to consider what they study to be “naturally soft”, or somewhat beyond their potential to model. Then there are modern philosophers with both two and a half millennia of western culture to oversee, as well as their own domain of reality to address. Apparently they’ve decided that humanity needn’t develop a respected community that has its own agreed upon principles of metaphysics, epistemology, and axiology from which to work. Conversely I’m optimistic about all of these fields, and believe that progress in philosophy will help our soft sciences harden.

      What I can’t quite get over however, is the magnitude of complexity found in nature’s machines versus ours. A standard human cell might hold the complexity of the function of a vast human metropolis. How might we design and build something comparable? And note that we’d have to do so while carrying an enormous extra condition. Evolution needn’t understand anything in order for its machines to be developed, unlike us. I’m not saying that it would be impossible for us to build a conscious computer. I’m saying that even after all of the advancement in human understandings which I foresee (and modern professionals do not), we’re playing under far more stringent rules. Right?

      • Eric,
        “I consider value to be the strangest stuff in the universe and have no idea how a non-conscious computer outputs it to drive the function of a computer that experiences it.”

        Given your anti-mysterian stance, I’m surprised you’re willing to just leave it at that. Personally, I don’t find value that mysterious. In my view, it’s the survival circuits, the reflexes, the programming that exist at the base of the brain and determine what our dispositions will be toward various perceptual patterns, evolved because they increase our homeostasis and gene preservation. But maybe I’m missing something?

        “If that’s clear enough, what would you say that the robot is missing? And do you consider your answer to conflict with mine?”

        My answer is metacognition, introspection, the system building predictive models of aspects of its own processing. But that’s an addition to what I called the movement planning centers in the rewording. Together, the movement planner and metacognitive framework (all in the prefrontal cortex) could be considered your second computer.

        So, there’s conceptual overlap, but they’re not quite the same. The introspector and the movement planner (often lumped together as the “executive center” in neuroscience literature) are far less discrete, far more entangled with the rest of the brain, and far vaster than the second computer you envisage.

        “I’m saying that even after all of the advancement in human understandings which I foresee (and modern professionals do not), we’re playing under far more stringent rules. Right?”

        Perhaps. There are problems where there’s room for doubt that science will ever solve them, such as wave-particle duality, black hole singularities, multiverses, etc. But based on all the neuroscience I’ve read, while I see an enormously complex system, I don’t see anything beyond science’s ability to observe and learn it. It may take decades, perhaps even centuries, but I think we’ll get there. If it’s observable and coherent, history hasn’t been kind to those who said it was forever unknowable.

    • Mike,
      It’s not mysterian to say that I have no idea about something that I have no idea about. This is just being honest. Surely if I were to claim that I understood things that I didn’t understand, then my models would suffer for it?

      As I define “value”, and though you’re perfectly free to define it in separate ways, metacognition is irrelevant to it. So here I’m not contradicting you, but rather explaining a separate definition which I’ve found useful.

      Note that we call “gravity” a property of nature by which matter attracts matter. Well, I theorize an element of reality by which a computer can cause something that it creates to suffer or feel good. I define this stuff as “value”. Why do I theorize such a strange dynamic to exist? Because I am such a thing, and my evidence suggests that I was created by means of computation. As a conscious computer I naturally try to do what will make me feel good and avoid what will make me feel bad. Still, as a naturalist I don’t believe that value can only exist when it’s “functional”. I believe that the properties of nature mandate that it’s possible for an organic computer to produce +/- value beyond evolutionary effectiveness. If not, how would evolution have this tool at its disposal?

      I realize that you’re defining “value” such that it depends upon metacognition. From my own separate definition however, do you have reason to doubt the existence of a property of nature by which existence can be anywhere from horrible to wonderful?

      I like this statement:

      “If it’s observable and coherent, history hasn’t been kind to those who said it was forever unknowable.”

      Indeed! Of course there’s a difference between “knowing” and “doing”, or essentially “science” and “engineering”. Evolution obviously does have the tools to create nothing less than life, though I don’t presume, for example, that evolution has provided the human with those same tools.

      More importantly however there is our optimism regarding human understanding. Might you be as optimistic as I am regarding philosophy and our mental and behavioral sciences? And would you say that science might very well be aided by means of certain derived principles of metaphysics, epistemology, and axiology from which to work?

      • Eric,
        Actually, my understanding of value doesn’t require metacognition. Many (most?) animal species don’t have metacognition, yet still have drives that factor into their behavior.

        “do you have reason to doubt the existence of a property of nature by which existence can be anywhere from horrible to wonderful?”

        It depends on what you mean here by “property of nature”. You make a comparison with gravity, which I know you understand is one of the fundamental physical forces. Do I think value is anything like that? No. I see no evidence for it. A physics view of the universe strikes me as an utterly nihilistic one.

        If by “property” you mean something like biological value, the urge to preserve and propagate genetic legacy, something that every life form strives for, then I’d say yes, it does exist as a property of biology, an inevitable aspect of natural selection.

        Or do you mean a property of conscious creatures? Again, I’d say yes to this version too, but to me this is a specialization of biological value. (Although I think I remember you rejecting that connection.)

        But in the case of both biological and conscious value, I don’t see anything fundamentally mysterious, just biological instinct, i.e. programming. Granted, it’s not intuitive to think that our feelings of joy or sorrow are due to our evolved programming, but I think that’s where the data points.

        Again, maybe I’m missing something?

        “Might you be as optimistic as I am regarding philosophy and our mental and behavioral sciences?”

        If by “optimistic”, you mean that I think the social sciences can produce theories with predictions more accurate than random chance? Sure. But if you mean that their predictions will be as accurate as General Relativity or the Standard Model, then no. I’m not optimistic about philosophy, because too much of it delves into the unobservable or the incoherent.

        “And would you say that science might very well be aided by means of certain derived principles of metaphysics, epistemology, and axiology from which to work?”

        In my view, the only unalterable axiom of science is that more accurate predictions are better than less accurate ones. As I see it, every other rule, principle, or methodology should only be kept to the extent it pragmatically helps with that one overriding value. Things like empiricism, materialism, or the scientific method, are products of science, not dogmas of it.

    • Mike,
      I believe that I’ve discussed these issues with you quite a bit more than I’ve discussed them with any single person. I’d be surprised if you couldn’t say the same about me, at least over the past couple of years. But for some reason I’m only now realizing a crucial element of your consciousness beliefs, spurred by your discussion with Fizan below. And with this insight I’ve been able to realize that you essentially told me the same thing just above. There you said:

      “But in the case of both biological and conscious value, I don’t see anything fundamentally mysterious, just biological instinct, i.e. programming. Granted, it’s not intuitive to think that our feelings of joy or sorrow are due to our evolved programming, but I think that’s where the data points.”

      So you don’t consider what’s known as “the hard problem of consciousness” to exist? This is to say that pain, for example, essentially occurs in the human through programming, or essentially in the manner that TurboTax calculates someone’s taxes on a PC? If so then this is actually quite a relief in one regard anyway. It might help explain why I haven’t quite been able to teach you the nature of my “two computers” model. One must presume tremendous specialness to phenomenal experience in order to get it, or what I consider to be reality’s most amazing phenomenon (not that I know about all phenomena, but still).

      Before I continue, could you give me some specific direction about your beliefs on the matter?

      • Eric,
        Maybe we’ve had a breakthrough? Or perhaps just a break 🙂

        “So you don’t consider what’s known as “the hard problem of consciousness”, to exist?”
        I recognize that there is an uncrossable divide between subjective experience and the correlated objective mechanisms. But I don’t see this as a problem so much as a profound fact.

        “This is to say that pain for example, essentially occurs in the human through programming, or essentially in the manner that turbo tax calculates someone’s taxes on a PC?”
        The determination of pain is obviously far more complicated than what TurboTax does, but ultimately I do think that pain is fundamentally just information processing. Of course, from the point of view of the system (us), it’s far more than that. But from the outside perspective, that’s all it is.

        “One must presume tremendous specialness to phenomenal experience in order to get it”
        Obviously our own phenomenal experience, from our perspective as the system, is very special. But again, I see it as information processing, neural computation.

        As far as I can determine, we are information processing systems, part of gene survival machines. Phenomenal consciousness is just the perspective of the system itself. Because the system is us, we feel that there must be something more to it than that. But it’s just us privileging the way we process information, the same way we once privileged ourselves in other ways as being above animals or nature, or in believing that we were the point of the universe.

        All that said, if you can give me reasons why this view is wrong, I’m totally open to reconsidering it.

    • Mike,
      Apparently we’re working from some quite different models, so rather than field various higher level issues raised here a la carte, I’ve dropped down to a basic discussion below that might thus be more productive to begin from.


  11. Hey Mike. I think you are, in fact, being “too hasty in dismissing the hard-problem version of consciousness”. I think there can, should, and will be an explanation derived solely from experiment and logic.

    The problem, as I see it, is that people believe their experience of “red” has some property that other experiences (like experiencing “blue”) do not have, let’s call this property “redness”. It seems natural for me as a person to assume that this property of redness is part of your experience of “red” also. But it also seems like this property could theoretically instead be part of your experience of “blue”, and so even when you say you’re experiencing blue, you’re actually experiencing what I call redness.

    Now you and others (including myself) will say that this experience of redness is an illusion, but I think you can and should explain exactly how that works. Here’s my take:

    Each experience does have a (kind-of) property, and that property is “difference”, i.e., the experience is identifiably different from other experiences.

    My thinking has taken a mechanistic track of late, so it’s easier to explain in terms of a mechanism. Let’s assume any event (such as an experience) can be expressed as:
    Inputs —> Mechanism —> Outputs
    We’re only interested in sets of inputs which generate unique outputs. If different inputs generate the same output, that’s a difference that doesn’t make a difference.

    So suppose that all the inputs are similar in character but are distinguishable. For example, suppose the inputs were a set of lights placed randomly around a wooden board. Each individual light is similar, but the mechanism of reading the board to produce distinct outputs can differentiate each individual light from the others. Imagine there is a camera looking at a room, wired such that when a red light is on in the room, a particular but arbitrary light on the board lights up. When a blue light is on, a different light on the board is turned on. As far as the mechanism is concerned, there are only inputs and associated outputs. But the mechanism is not “concerned”. The Mechanism, as Dennett would say, is competent without comprehension. If queried, the Mechanism would know nothing about the board of lights, only whether or not the “red one” was on or off.
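    The board-of-lights setup can be made concrete with a toy sketch. All the names and wirings below are invented purely for illustration; the point is just that the assignment of inputs to outputs is arbitrary, yet each mechanism reliably differentiates its inputs without “knowing” anything about what they stand for.

```python
# Toy sketch of a "competent without comprehension" mechanism.
# Inputs are arbitrary, interchangeable tokens (lights on the board);
# all that matters is that distinguishable inputs map to distinct outputs.

def make_mechanism(wiring):
    """Build a mechanism from a wiring: a dict mapping each input token
    to an output token. The mechanism knows nothing about what the tokens
    stand for; it only preserves their differences."""
    def mechanism(active_inputs):
        return {wiring[token] for token in active_inputs}
    return mechanism

# Two equally valid, arbitrary wirings of room lights to board lights:
wiring_a = {"light_3": "report_red", "light_7": "report_blue"}
wiring_b = {"light_7": "report_red", "light_3": "report_blue"}

m_a = make_mechanism(wiring_a)
m_b = make_mechanism(wiring_b)

# Each mechanism reliably differentiates its inputs...
print(m_a({"light_3"}))  # {'report_red'}
# ...even though which light maps to which report is arbitrary.
print(m_b({"light_3"}))  # {'report_blue'}
```

    Swapping the wiring swaps which inputs get which “labels”, while from the outside each mechanism remains equally competent — a difference that doesn’t make a difference.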

    I expect you good materialists know where this is going. Instead of lights on a board we have incoming synapses. So the scientific question is, what is the mechanism that has the ability to differentiate all the possible incoming inputs? Which parts of the brain are inside the mechanism and which outside? Mike thinks parts of the cortex (prefrontal) are inside the Mechanism. I’m inclined to think that the entire cortex is outside the mechanism, and the inside includes the thalamus, basal ganglia, probably claustrum, and possibly other subcortical parts. But science will determine that discrepancy.

    I should note that deciding what is inside the Mechanism is arbitrary. Once inside and outside is defined, you can always ask what Inputs produce what Outputs for that particular grouping. When I refer to “the Mechanism” above, I am referring to that Mechanism which is responsible for all the inputs and outputs we associate with Damasio’s autobiographical self, i.e., what most philosophers mean when they refer simply to “the self”.

    [tagline: the cortex is the umwelt of the autobiographical self]

    • Hey James,

      “Now you and others (including myself) will say that this experience of redness is an illusion,”
      Actually, I don’t think it’s an illusion (at least not in the sense that it doesn’t exist). I have sympathy with the camp that says if it is an illusion, then the illusion is the experience. I think it’s more productive to say it’s a construction. But this may be a matter of terminology on a point that we agree on ontologically.

      Interesting take on the mechanism. If I understand what you mean by the mechanism, as a point of clarification, I would consider the prefrontal cortex and its associated nuclei in the thalamus to be in it, but with inputs coming in from and outputs to all over the cerebral-thalamic system.

      The basal ganglia strike me as being more involved in habitual movement decisions, much of which happens below the level of consciousness. After our discussion the other day, I do suspect that the claustrum’s role may be as a pace setter, a sort of processing clock, which would meet all the observations of its involvement with cognition but be compatible with its diminutive size.

      The problem is that the discrimination between different visual signals seems to happen at multiple levels. A mid-brain region, the superior colliculus, seems to be involved in rudimentary determinations, to the extent that even if the occipital lobe is destroyed, making conscious discernment impossible, patients can often still take an accurate “guess” at what is in front of their eyes (see blindsight). And obviously if the basal ganglia are involved in habitual movement decisions, they still have to receive visual signals, even if we’re often not conscious of them.

      It may also be that the redness of red or blueness of blue doesn’t happen unless the introspection mechanisms are involved, and there’s data indicating that they’re also in the prefrontal cortex.

      All that said, I’m not clear why any of this makes you think the hard-problem version of consciousness should not be dismissed. I don’t perceive that it’s necessary for any of the above. But maybe I’m missing something?

      • Mike, you do seem to have the correct understanding about the mechanism. Unfortunately I’m afraid I don’t know enough about what processing happens where in the brain to be able to have an intelligent conversation about it. I could be totally wrong about this, but I’m pretty sure that frontal lobotomy is not associated with significant specific changes to consciousness. I would expect significant damage to the mechanism of interest (autobiographical self) would tend to destroy the ability to have a significant subset of reportable experiences. Such is what you find with the destruction of the thalamus, for example. There is certainly processing that goes on totally within the cortex. If my hypothesis is correct, localized damage to cortex would have the effect of removing certain specific inputs to the mechanism without affecting others.

        [as for the other stuff, I think your understanding is the same as mine]

        • On frontal lobotomies, full ones are relatively rare. Most of the ones that were performed were partial to some degree, which clouds the effects of separating that region from the rest of the brain. Mercifully, lobotomies are rare today.

          We do know that frontal lobe pathologies can rob people of the ability / motivation to self report anything. The question is whether they’re still conscious and unable to report, or simply lack high order consciousness due to the pathology.

          Koch, in one of his position papers, reported that some frontal lobe patients who had recovered use of their frontal lobes, later reported that they were conscious but were simply unable to respond. However, this was anecdotal and hearsay, not clinical data, and since it contradicts other data, I’d need to see corroborating evidence before I changed my understanding.

          All we can say for sure is that we don’t have a perfect understanding yet. I suspect both of us will have to update our views as more research is done.


    • Hey agrudzinsky,
      Just to make sure we’re on the same page, this is what the transcript showed (I’m at work so watching the video would have to wait):
      “The mind can be defined as an embodied and a relational process that is self organizing and emergent. And what it means is that arising from energy information flow is a self organizing process. So self organizing emerging process that’s both embodied and relational that regulates the flow of energy and information.

      So, the simplest way of saying it is, one part of the mind can be defined as an embodied in-relational process that regulates the flow of energy and information.”

      This doesn’t seem very concrete to me, and a bit too buzz-word heavy.

      To me the mind is evolution’s solution for an organism to plan and execute its movements for optimal homeostasis and gene survival, and everything needed to support that function. But I fully realize that definition will come across as too reductionist for many people.


      • agrudzinsky says:

        I wouldn’t expect a definition of a mind to be very concrete. I like how Siegel defines mind as a process rather than a system, for example. Read his book “Mindsight” for more understanding of what he means.

        I don’t think that your definition is very far from Siegel’s. What he means by the “embodied relational process that regulates the flow of energy and information” is that organisms process information, such as threats, availability of food, social signals of all kinds, etc., and react to it. Some reactions are physical like increased heart rate and release of energy in the muscles, some are emotional in terms of releasing pleasure hormones or adrenalin. But most of these reactions can be described as “regulating energy” which is a more general description of “planning and executing movements for optimal homeostasis”.

        What he means by “relational” is that the process works through connecting the neurons in the brain, connecting parts of the brain to work together, and also relating the processes in the brain to the processes outside the body – memories, emotional attachments, etc.

        It is also interesting that from Siegel’s definition, a group (society or a nation) can also have a “mind” in terms of ways to process information such as internal and external threats and reacting to it by directing resources and energy.

        It also follows that it is possible for a machine to have a “mind”. Because machines can process information and direct energy as well.

        • Thanks for the clarification. I think I’m on board with each of the terms.

          “Relational” and “embodied” in particular remind me that minds always exist relative to their environment. I’m often told that minds can’t be information processing, because information processing is always a matter of interpretation and so relative to other minds, but embodiment provides the environment’s interpretation of the information processing happening in the brain, just as a machine body would provide the interpretation for the information processing of its central control systems.


  12. Travis R says:

    This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

    This. It seems to me that the hard problem just is the difficulty, or maybe impossibility, of seamlessly transitioning between the 3rd person and the 1st person descriptions of the mental world. Even if we were ever able to perfectly predict, or cause, phenomenal experiences from descriptions of neural processes, people would still cry “Law of identity! The process is not the same as the experience!” If satisfying the law of identity is the criterion we need to meet to have an explanation, then maybe ‘mysterianism’ is the proper response. But that seems like an excessive requirement, one that exceeds the burdens we place on claims of ontological equivalence in other domains where there are multiple, variant descriptions.

    • jayarava says:

      Yes. The trouble is that “the 1st person descriptions of the mental world” are part of the same mental world they describe. Describing an experience is another kind of experience. The sense of being the someone who is having or describing the experience is also an experience. What is it like to wonder about what it is like to be a bat?

      When we try to examine our minds from the inside we just see endless reflections, because what our mind does is generate experiences. As far as I can tell, simple observation can never get behind this. The experience of looking behind experience is another kind of experience. There appears to be no way out of the hall of mirrors in that sense. We have what appears to be an absolute epistemic limitation on the kinds of knowledge we can gain on our own experience, because everything we do generates more experience of the same basic nature.

      And wherever there are epistemic limits, ontologies should remain uncertain. Thomas Metzinger’s insights into the nature of mind, gained by observing what goes wrong with it and trying to reverse engineer something that could break in that way, are decisive. The observing self, in his terms, is actually a virtual self model generated by the brain. In my terms, being a self is another kind of experience.

      The trick is that we can compare notes on experiences and this allows us both to infer knowledge of what the world is like and what minds are like. Such knowledge as we infer is more or less accurate and we inch towards greater accuracy. Whether we ever reach our goal of full knowledge is moot, and really not that interesting while we are still making progress with no end in sight. Speculating on what might happen is for journalists having a slow news day.

      Every time some philosopher comes out and says “you’ll never know X” or “you’ll never understand Y” I take this to be an example of the Mind Projection Fallacy. The basic axiom of Shermer et al. is that “If I can’t understand this, then no one can understand it.” I suggest that no intelligent person would take such pronouncements seriously (and leave the obvious corollary unstated).


    • Thanks Travis. Good point about the Law of Identity issue. In truth, I’m not sure any scientific theory could ever satisfy it. At what point does the causal explanation become the empirical observation? Ultimately, for any scientific theory, we always end up dealing in ever more granular correlations, without the two ever fully meeting.


  13. Mark Titus says:

    It seems to me that once neural systems are able to create sensations (as described in an essay I linked earlier to SelfAwarePatterns), they are able to preserve and store them (memory), and then through flexibility of function (neural plasticity) select, combine, arrange, edit, and rearrange the results into a product that we call consciousness. In this way consciousness is “bootstrapped” into existence by and within neural systems.

    • Mark,
      In other posts, I’ve described a possible hierarchy of that bootstrapping.
      1. Reflexes, programmatic responses to stimuli.
      2. Perceptions, built from input from distance senses (sight, smell, hearing), and semantic memory, which expands the scope in space of what the reflexes can respond to.
      3. Attention, prioritizing what the reflexes respond to.
      4. Imagination, scenario simulations of both future and past episodic events, increasing the scope in time of what the reflexes can respond to, and choosing which reflexes to allow or inhibit. It is here that reflexes become feelings.
      5. Metacognition, introspection, recursive monitoring of some aspects of 1-4 for fine tuning of action planning, and symbolic thought. Human self awareness requires this layer.

      In the literature, 1-4 are often called first order, primary, or sensory consciousness, and 5 second order or higher order consciousness.

      Most computer systems remain firmly at 1. Machine learning systems are at 2. Some autonomous robots (self driving cars, Mars rovers, etc) are approaching 3. The Deepmind people are striving to add very simple versions of 4 to their systems.
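      For what it’s worth, the layering can be caricatured in code, with each layer operating on the output of the one below. Everything here — the stage names, rules, and stimuli — is invented for illustration; this is a sketch of the layering idea, not a model of real neural machinery:

```python
# A purely illustrative caricature of the five-layer hierarchy, with each
# layer wrapping the one below. All names, rules, and stimuli are made up.

def reflexes(category):
    # 1. Reflexes: fixed, programmatic stimulus -> response mapping.
    return {"threat": "flee", "food": "approach"}.get(category, "ignore")

def perceive(raw_signal):
    # 2. Perception: build a model (here, a crude categorization) from raw input.
    category = "threat" if "predator" in raw_signal else "food"
    return {"pattern": raw_signal, "category": category}

def attend(percepts):
    # 3. Attention: prioritize which percept the reflexes get to respond to.
    return max(percepts, key=lambda p: p["category"] == "threat")

def imagine(percept):
    # 4. Imagination: simulate the candidate reflex and allow or inhibit it.
    candidate = reflexes(percept["category"])
    return candidate if candidate != "ignore" else "explore"

def introspect(decision):
    # 5. Metacognition: monitor the layers below for fine tuning.
    return {"chosen": decision, "reviewed": True}

percepts = [perceive("berries"), perceive("predator tracks")]
action = imagine(attend(percepts))
print(introspect(action))  # {'chosen': 'flee', 'reviewed': True}
```

      The point of the sketch is only that each layer adds scope: perception extends reflexes in space, imagination extends them in time, and metacognition monitors the whole stack.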


      • Mark Titus says:

        I think we should try to keep the hierarchy simple:

        Organisms with neural systems create sensations from environmental stimuli (radiations, pressure changes, molecules).
        They respond to their sensations by moving toward them (“pleasure”) or away from them (“pain”).
        They organize their sensations into patterns they can successfully move around in and survive.
        When their neural systems become complex enough to create those patterns in their absence, they become SelfAwarePatterns.


        • With these models, it’s all in what you hope to get out of them. Your simple layout gets a lot of work out of, “They organize their sensations into patterns they can successfully move around in and survive.” No worries, but I want to know more about how this piece happens, which is what leads me to the layers above.


          • Mark Titus says:

            “…patterns they can successfully move around in and survive” refers to what ethologists call “fixed action patterns.”

            Of course we are able to modify, combine, and elaborate on these patterns with our more complex neural systems, creating the “higher” layers you refer to.

            (I had originally labeled the hierarchy into 4 points–point 3 was the one about patterns. Somehow it got changed when I posted the comment.)


    • Thanks for the link!

      “Fixed action pattern” seems equivalent to a reflex. They’re definitely rare in the brain, but there are a number of them in the spinal cord (such as knee jerk and avoidance reflexes).

      Overall, I think we’re saying much the same thing, just with different emphasis and focus on different aspects.


      • Mark Titus says:

        Fixed action patterns might be thought of as sequences or fixed patterns of reflexes perhaps; but they are definitely a product of higher, more central neural processing than reflexes (which is why a spinal cord is sufficient for a knee jerk, but it takes a brain to do what animals with FAPs do).


        • I view it as something of a hierarchy. In the spinal cord are fixed, programmatic reflexes, but their only stimulus is somatosensory, such as a rubber hammer hitting the knee. In the mid-brain regions are FAPs, but they’re broader since they also have access to exteroceptive senses (sight, hearing, smell).

          And then there are the ones that can be inhibited to one degree or another. These are the initiators of affects, dispositions to act that are either allowed or inhibited by higher level functionality, the primal survival circuits around which higher level emotions develop.


          • Mark Titus says:

            We seem to be getting bogged down in a question of which neural functions enable which behavioral activities, and how they are to be arranged into a hierarchy of complexity.

            It seems we might agree that close to the top of the hierarchy is a degree of neural complexity that enables an organism to mirror its behavior–for example, to dream about it when its body is asleep. At the very top would be an organism that knows that it dreams–or shall we say, dreams that it dreams.

            Perhaps you know the Nobel-Prize winning biochemist George Wald’s quip, “A physicist is an atom’s way of knowing about atoms.” That inscrutable intuition, it seems to me, is the bottom line.


    • Mark,
      Sorry. I didn’t intend for my previous response to come off as disagreeing. I was just fleshing out the FAP / reflex concepts.

      Definitely agree about the top part of this hierarchy being an organism’s metacognitive self awareness, if it has it. (An interesting question is how widespread that awareness of awareness is in the animal kingdom.)

      I hadn’t heard that particular quip before. Thanks! It reminds me of Carl Sagan’s statement that we are the universe experiencing itself.


  14. garthdaisy says:

    Perhaps the Higgs field turns information processing into conscious experience in the same way it gives particles mass. I’m pretty high right now but that sounds right.

  15. Fizan says:

    Hi Mike,
    I think you already have a pretty good idea of where I stand on this. I’m probably never going to be satisfied by any explanation given for the hard problem. And I’ve actually been converted to a hard-problemer fairly recently.
    Perhaps even in the stone age, people knew that we see with our eyes. Even some animals would know that, I think. Saying we see with certain neuron networks is essentially the same thing. It does add new useful knowledge of how the process works. But as saying we see with our eyes tells us nothing about why we see, elaborating it further also tells us nothing about why we see.

    • Hi Fizan,
      I remember you describing how an associate had convinced you that there is a hard problem. I very much respect your admission that no explanation will suffice.

      That said, I do hope you’ll indulge me a bit while I explore your specific question:
      “But as saying we see with our eyes tells us nothing about why we see, elaborating it further also tells us nothing about why we see.”

      My question is, what do you mean here by “see”?

      If you mean: why do we build predictive perceptual models based on visual sensory input? Then I think the answer is an adaptive one: it increases the accuracy of the predictions we make about the environment, which increases the chances we’ll find food and mates, and avoid predators.

      Or maybe you mean: why are we aware that we have these perceptual models? Here, I think the answer is that it’s paired with the feelings of valences associated with what is being perceived, and helps determine which mental reflexes we should allow or inhibit.

      But you probably mean: why do we have the experience of seeing? Here I’d ask what is meant by “experience”, and offer a possible answer, that experience is the stuff of what’s happening, what the information flow in the above paragraph feels like.

      Or you might mean, why do we have the appreciation of that experience? Here I’d reach for metacognition, self reflection, which enhances our ability to simulate our own future reactions to things and facilitates our ability to communicate about it.

      Or are my definitions of “see” and “experience” deficient? If so, what are better ones?

      I’m not trying to trip you up in any way, just either to get you thinking about this, or learn what I’m missing about it.


  16. Fizan says:

    By “see” I mean what everybody intends to mean when using this word. We know what it is and we have always known.

    By “see” do I mean:
    “why do we build predictive perceptual models based on visual sensory input” – No. That has nothing to do with seeing. We know why we do that: it is because certain frequencies of the electromagnetic spectrum stimulate certain types of cells in our eyes. The patterns in which these cells are stimulated get encoded into neuronal networks. These patterns are then compared and composited. It’s a process.

    “why are we aware that we have these perceptual models?”
    This is somewhat close in my opinion. Although I would add that we are not aware that we have “perceptual models”. Through some inquiry we can become aware that certain perceptual models are associated with seeing. When you say “it’s paired with the feelings of valences associated with what is being perceived”, it pushes the ball further to ‘feelings’. What do you mean by this? If it’s the same as ‘seeing’, then there we are again.

    “why do we have the experience of seeing?”
    That is what I’m saying, although I don’t think I have to add the word experience. Experience is a broader term. I don’t have to say I experience seeing a tree; I can just as well say I am seeing a tree. When you say “…what the information flow in the above paragraph feels like”, again we have the word ‘feel’, which is again only part of the problem.

    “why do we have the appreciation of that experience?”
    I think here again the problem becomes the word ‘appreciation’. What do you mean by it?


    • Fizan,
      Thanks for engaging with this.

      “By “see” I mean what everybody intends to mean when using this word. We know what it is and we have always known.”
      This is often the crux of the matter in these discussions. If we’re not willing to explore what concepts like seeing, experience, or similar terms actually are, then I think we make progress impossible.

      I agree that we’re not aware of the perceptual models as models, only as perceptions, the final result. But I think they are models, or representations, or image maps, concepts, or prediction frameworks, or whatever term is preferred.

      By “feelings”, I meant the conscious experience of emotions and/or affects. More controversially, I mean the communication from the brain regions which construct the affect or emotions to the reasoning or movement planning regions of the brain.

      By “appreciation”, I meant knowledge of the experience, or more controversially, models / representations / concepts of the experience, in essence models of the models, concepts of the concepts, awareness of the awareness.

      I know we won’t solve this today, but again appreciate you discussing it.


      • Fizan says:

        Yes, it took me a year’s worth of discussions to move a little from my original positions, so we’re not going to solve it today. Though I’m still interested in this, to see if there is perhaps something that becomes apparent in a new light.

        I think I am open to exploring concepts. ‘Seeing’ is not a concept; it is something we do, just as creating concepts is something we do. It is also something many animals do. But yes, to capture this ‘thing we do’ in language we have words like seeing.

        “..But I think they are models..”
        But what is ‘they’? Perceptual models are by definition models. You are equating perceptual models with the perception itself. That’s the contention. Perceptual models are arrangements of neuron networks.

        Feelings are a conscious experience as you said at first. They can’t explain the conscious experience itself.
        But then your controversial suggestion of feelings being communications is again the same as equating perceptual models with perceptions. Even if the model is true, it does not help us understand why a communication ‘feels’ like anything.


        • Thanks Fizan. I’ll leave you one last point to consider.

          When asking why it feels like anything, another question to ask is, what would an evolved pre-language system for subsystems in an animal’s control center to communicate with each other be like? How would those subsystems communicate varying wavelengths of light? Or vibrations of air? Or recognition that a certain signal indicated damage to part of the body? What, other than the raw phenomenal experiences of seeing, hearing, and pain, might be the answer?


    • Hope you don’t mind my jumping in with a different tack. I’m also trying to figure out how to explain the nature of qualia (what it “feels like”) from a physicalist viewpoint.

      My view is ultimately extremely close to Mike’s, but I start at a more basic level. A mechanistic level, to be precise. I view every experience as Input —> [Mechanism] —> Output. So in “seeing” a tree, we have light impacting cells in the retina, and then various neural processing identifying edges and textures, and ultimately a set of neurons firing in a particular pattern. This pattern becomes the Input, and has the semantic meaning of “tree”. The Mechanism responds to this input by generating Output. Some of this output constitutes memory, which can later trigger the same “tree” pattern as more input. Additional output might be triggering action, depending on current context.

      The point with regard to qualia is that if the Mechanism wants to communicate that this experience happened, its only reference to the experience is via the meaning of the input, i.e., “tree”. The Mechanism cannot produce responses regarding the pattern of neural firing. It can only refer to the fact that the pattern was recognized, and that the pattern was different from other significant patterns, like “House”. This reference is the “feeling”. We say it “felt” like “seeing a tree”.

      I would be grateful if anyone could let me know what parts of the above are helpful and what parts are unhelpful.
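      For anyone who thinks better in code, here is a toy Python sketch of the schema (every name and encoding here is invented purely for illustration, not a claim about real neural processing). The key point is that the Mechanism never receives the raw firing pattern, only the recognized meaning, so any report it generates can only reference that meaning:

```python
# Toy sketch of the Input -> [Mechanism] -> Output schema.
# All names and encodings are illustrative, not a model of real neurons.

def visual_processing(stimulus):
    """Early processing: collapse a raw stimulus into a firing 'pattern'."""
    return sum(ord(c) for c in stimulus) % 1000

# Patterns the system has learned to recognize, keyed by their encoding.
KNOWN_PATTERNS = {visual_processing("green leafy thing"): "tree",
                  visual_processing("boxy thing with roof"): "house"}

def mechanism(pattern):
    """The Mechanism cannot report the pattern's internals; it can only
    report that the pattern was recognized, i.e. its semantic meaning."""
    meaning = KNOWN_PATTERNS.get(pattern)
    if meaning is None:
        return "it felt like seeing something unfamiliar"
    return f"it felt like seeing a {meaning}"

print(mechanism(visual_processing("green leafy thing")))
# -> it felt like seeing a tree
```

      Whether recognition works anything like a lookup is of course an open question; the sketch only illustrates the information bottleneck being described.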



      • Hey James,
        Your comments are always welcome. Please do jump in anywhere you’d like to offer anything.

        I think your description of the Mechanism has important insights, but for some reason I’m struggling tonight to come up with a suitable response. Fatigue may be the culprit. Anyway, hopefully I’ll be able to get it together better tomorrow.


      • Ok James, let me take another shot.

        I think your insight that the Mechanism wouldn’t hold the image is important. It gets at an important fact about subjective experience that’s worth pointing out: namely, that the theater aspect of it is an illusion.

        There are regions of the brain that build image maps, there are regions that categorize an image with a known concept and associate it with related concepts, and there is a region which uses this information to plan movements. But the movement planner never receives a full-scale theater, only specific pieces of information. Where then do we get the impression of the theater?

        Consider when you see a tree. The tree image is formed in the visual regions, and identified as a tree in the associative regions. The impression of a tree then reaches the movement planning regions. Not the full tree image, just the treeness of it.

        Then the movement mechanism points the eyes at the trunk. The trunk image is formed in the visual regions, its trunkness identified in the associative regions, and then an impression of trunkness arrives at the movement planner.

        Importantly, while the movement planner is considering the trunk, it is not considering the tree overall. It may appear that we are taking in both the tree and a detail of it, but this is an illusion built on the fact that we can quickly switch contexts and aren’t conscious of the transitions.

        We then look at the bark on the trunk, and the cycle loops again. We can quickly switch from the tree, to the trunk, to the bark, back to the tree, then to a branch, etc, and have the impression that there is a theater happening.

        But what’s actually happening is that the movement planner is having rapid communications with the associater, the visual image mapper, and perhaps other sensory regions. The movement planner can rapidly flick between all these different domains.

        That said, some signals do arrive in parallel. For example, the sense of self is constantly streaming in. And any affects and emotions triggered by the tree impression, or noticed aspects of it, flow in as well. It’s a never ending loop that gives the impression of the Cartesian theater, but is really just subsystems communicating with each other.
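        If it helps, the loop I’m describing can be caricatured in a few lines of Python (the region names and the scene contents are mine, invented just to illustrate the switching, not drawn from any neuroscience model):

```python
# Illustrative caricature of the rapid context-switching loop.
# Region names and scene contents are invented for illustration.

VISUAL_SCENE = {"tree": "tall green shape",
                "trunk": "brown column",
                "bark": "rough texture"}

def visual_region(target):
    """Builds an image map for whatever the eyes are pointed at."""
    return VISUAL_SCENE[target]

def associative_region(image):
    """Categorizes the image, passing along only an impression of it."""
    return "impression of " + image

def movement_planner(gaze_targets):
    """Receives one impression at a time; only rapid switching between
    targets creates the appearance of a full-scale theater."""
    return [associative_region(visual_region(t)) for t in gaze_targets]

for impression in movement_planner(["tree", "trunk", "bark", "tree"]):
    print(impression)
```

        Note that at no point does any full scene exist in the planner; there is only ever the current impression.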

        Does that fit with what you’ve described with the Mechanism?


        • Actually that does fit, and well said. I think your description explains why people (like Tononi) think there is a “unified field” of Consciousness when in fact there is just a unified world, and whenever you look at part of it, there it is. It also ties in with all the new neuroscience stuff about predictive processing (which really should be called expectation processing, but ah well). The brain has expectations of what it will see/experience depending on where it looks or what it (the body) does. I think these expectations are just inductive in that the “prediction” is that you will see whatever was there last time you looked, but taking into account things like velocity and time and other context.
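          (A toy version of that inductive expectation, with made-up numbers: the “prediction” is just the last observation carried forward by context such as velocity.)

```python
# Toy "expectation processing": expect to see what was there last time
# you looked, adjusted for simple context such as velocity. Numbers are
# made up for illustration.

def expect(last_position, velocity, dt):
    """Inductive expectation: the previous observation extrapolated in time."""
    return last_position + velocity * dt

# Last look: object at x = 10, moving at 2 units/s; we look again 0.5 s later.
print(expect(10, 2, 0.5))  # -> 11.0
```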

          And we still differ as to which part of the brain is likely to be the Mechanism in question. You refer to the “movement planner”, but I’m wondering how the movement planner is involved when you are simply watching beautiful scenery, like a fjord in Norway, which is where I am at the moment. 🙂 You may be right about what the movement planner does, and it would certainly count as a separate Mechanism in my schema, but I still don’t think it is the Mechanism of the autobiographical self. It seems to me this self learns of the goals and plans some time after the goals and plans are made. Time will tell.

          BTW, if you want a neurologically plausible scheme for how concepts like a tree and its parts can be managed, combined, uncombined, recombined, etc., you should look up Chris Eliasmith and Semantic Pointers.



          • Thanks James.

            On the movement planner, that’s my name for what a lot of people refer to as the executive. “The executive” term works if you understand what executives do, which is planning. (I’m not wild about the term because many people think executives give direct orders for actions all day, instead of leaving that for more front line people.)

            Often that planning involves doing research whose relevance to the ultimate plans may not be obvious. I think your appreciation of the fjord would fit in that category. (I’m very jealous by the way.) I think beauty is some perceived thing triggering our primal instincts to focus on the thing in a way that was once adaptive, but may not be now.

            But I think your point about the Mechanism learning about the plans after the fact is right too. In truth, I was oversimplifying above, because what I’ve been calling the movement planner is itself a complex system. One of its subsystems is a metacognitive function, a recursive feedback system, and it is here where our knowledge of our own experience is captured. Meaning a lot of plans get made which escape the notice of that metacognitive function, although the metacognitive function often does have causal influence on the overall planner.

            Eliasmith’s book looks interesting. Thanks for the recommendation! Just added it to my Kindle.


  17. I see that Mike and James are presenting positions where “the hard problem of consciousness” is a relatively standard form of computation (though obviously involved). From this perspective phenomenal experience might for example be a matter of system and subsystem communication. Regardless I think it’s clear that they’re presenting the human nervous system as an individual form of computer.

    Many suspect that the hard problem of consciousness is not so easily dismissed however, as expressed above by Fizan (which is certainly my perspective). Thus I’ll present my own “four forms of computer” model, which I think remains more true to the perspective of materialism.

    Before “life” existed, I believe that everything functioned mechanically. This is to say that input dynamics were not algorithmically processed for output function regarding anything from stars to molecules and beyond. Life did add algorithmic computation, I think, by means of its genetic material. Here chemical properties of matter interact with genetic material to produce output function. Furthermore apparently life evolved to facilitate more and more complex varieties of itself to occupy wider assortments of niches. My point however is that we don’t consider this first form of computer to function by means of feeling good or bad. Presumably that’s simply not one of its properties.

    Then after billions of years of life on Earth (and perhaps inciting the Cambrian explosion), apparently a second variety of computer evolved as well from which to help guide the function of organisms that are multicellular. This provided a full organism form of computation which accepts input information, as well as processes it algorithmically for output function. Instead of base chemical dynamics, this one algorithmically processes input by means of neuron systems. And while some will say that this form of computer can evolve to incorporate punishing and rewarding feelings into such function, I personally suspect that it cannot. Like genetic material, I don’t believe that neuron systems function in a purpose driven way, or such that anything can be personally good or bad for them.

    To briefly skip the conscious form of computer, there is also the technological variety that we create. Of course they process input information algorithmically for output function by means of electricity. Like the neuron and genetic material based computers, I suspect that they do not feel good or bad. Given that we create them, wouldn’t we otherwise know?

    So now regarding the third form of computer that does have the capacity to feel good and bad, I believe it works like this. Apparently at some point the neuron based computer created punishing/rewarding feelings by means of its standard algorithmic function, for something other than it to experience. Initially this dynamic should have been carried along as a useless element (as evolution is wont to do). But then it should have reached a point where this other dynamic was put in charge of deciding something for the organism, and so made these decisions in a fundamentally different way. For this other type of function, existence personally matters. Thus “stop if it hurts” and “do more if it feels good”.

    Though it may seem that straight algorithmic function can do the same thing by means of weighted parameters, the problem with this may be that it requires programming to be developed for relatively specific situations. Conversely the conscious form of computer seems more plastic in the sense that it has some inherent guidance – it will go into any situation with the quest to feel better, and this will be determined by its pain, hunger, itchiness, curiosity, and so on. Furthermore I believe that such an organism should tend to develop its own conscious form of memory as another input from which to work, and interpret inputs and construct scenarios about what might complete a given quest. I consider this computer to mark the rise of the teleological, or purpose driven, form of computer. And apparently to support it a computer must exist that does not function alone on the basis of chemical dynamics (1), or neuron dynamics (2), or electrical dynamics (4). This third form of computer instead functions on the basis of personal value, or output of the neuron based computer.

    So now to the issue which James presented of a tree that is seen by something conscious. For my own interpretation, not only does such “sense” information arrive, but also generally a “valence/value” input that feels good to bad. Thus a scene might be beautiful, disgusting, hopeful, and so on for a particular subject to feel given its nature. Conversely non-conscious computers do not have this element. So for conscious computers, inputs (sense, value, memory) are consciously interpreted, and scenarios are constructed about what to do to promote personal value. The only non-thought conscious output that I know of is “muscle operation”. Here the conscious form of computer is manufactured by and supported through a vast supercomputer that is not conscious. I believe that the conscious form of computer as defined here does less than a thousandth of a percent as much processing as the non-conscious form of computer which creates it. In a sense the big computer creates and fosters the small one, given that the big one is inherently deprived of teleological function.
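    To make the contrast concrete, here is a deliberately crude Python caricature of the third form of computer (all names and numbers invented). Unlike a task-specific program, its only standing rule is to pick whichever scenario is expected to feel better:

```python
# Crude caricature of a valence-driven ("third form") computer.
# All names and numbers are invented for illustration.

def valence(state):
    """Output of the non-conscious computer: how a state feels, good to bad."""
    return state.get("food", 0) - state.get("pain", 0)

def conscious_step(state, options):
    """Interpret inputs and choose the scenario that promotes personal
    value: 'stop if it hurts, do more if it feels good'."""
    return max(options, key=lambda action: valence(action(state)))

def eat(state):
    return {**state, "food": state.get("food", 0) + 1}

def touch_flame(state):
    return {**state, "pain": state.get("pain", 0) + 1}

chosen = conscious_step({"food": 0, "pain": 0}, [eat, touch_flame])
print(chosen.__name__)  # -> eat
```

    Of course this sketch is itself just standard algorithmic function; it only illustrates the inherent-guidance idea, not how feeling good or bad could actually be produced.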


    • Eric,
      As we’ve discussed before, I see considerable overlap between our understandings. And the idea of the hierarchy of computation you describe is interesting.

      Although I’m not really comfortable drawing sharp lines between “mechanical” and computational systems, since computational systems are themselves physical mechanisms. I think what distinguishes systems we typically describe as computational from others is that the ratio of its causal dynamics to the magnitude of the energy involved is high, with the ratio opposite for systems we typically see as primarily physical. But if so, that means there’s no hard line separating computational from non-computational systems. There are only degrees of information content.

      That said, I could see the information processing between DNA, the transcription proteins, mRNA, and ribosomes as computational, although exactly where to draw the border seems complicated. Ribosomes go on to produce proteins, most of which are arguably more about physical effects than information.

      But for your model of consciousness in particular, as we’ve also discussed, I have two broad issues.

      The first is that you make little effort to explain what “feeling good or bad” actually is. I know you consider yourself to be at the architectural level rather than the engineering one, but it seems to me that a theory of consciousness that doesn’t reduce to non-conscious terminology risks being just an alternate description of the phenomenon rather than an actual explanation. And given that you see achieving this capability as the missing ingredient that technology may never find, this strikes me as a crucial omission.

      This, incidentally, is a variation of my overall issue with hard problem thinking. It posits something mysterious (subjective experience), but then denies any attempt to examine the components of that thing. It guarantees that the thing remains mysterious and intractable. To have any hope of making progress, I think we have to be willing to dissect these concepts, to pierce the conceptual veil.

      The second issue is the idea of a discrete second computer. A phrase for this concept occurred to me this morning, which I don’t know if you’ll find useful: physical dualism. In my own understanding, to quote Laplace, “I have no need of that hypothesis.” I might feel differently if there were neuroscience that supported this concept.

      Backing up from persnickety details, I do think the dual computer model has resonance with the mainstream neuroscience idea that various regions of the brain specialize in particular functions, and in particular with my own idea of the movement planner and perhaps James’ idea of the Mechanism.


    • Let me first mention that I’d love for James, Fizan and others to join wherever they please. Here I’ll yet again be attempting to deconstruct and analyze objections from our sometimes persnickety friend, Michael Smith. 🙂

      I have never wanted to imply an absolute line between computational and mechanical function. And yet now after some years of proposing this position, I’m starting to consider the distinction “harder” than I had originally thought. Where are the grey areas? And I’m surprised that no one has yet provided a fifth example of something that can reasonably be said to algorithmically process information. Thus we have the genetic form based upon chemical dynamics, the whole organism form based upon neuron function, the conscious form based upon feeling good/bad, and the technological form based upon electricity (though I suppose that light could do the trick, or sound, and so on).

      Note that it’s the third form of computer, or the one that’s outputted by the second, which I theorize necessary for Strong AI, as in John Searle’s Chinese room thought experiment. I highly doubt that a person manually following computer code could ever thus become a computational device that’s able to “understand” Chinese, let alone produce phenomenal experience.

      “I think what distinguishes systems we typically describe as computational from others is that the ratio of its causal dynamics to the magnitude of the energy involved is high, with the ratio opposite for systems we typically see as primarily physical.”

      I was ready to put that into my own bag of tricks (with attribution of course), though upon reflection I’m not sure it’s sufficient. We don’t consider pushing a rock off a cliff computational, though the ratio of causal dynamics to initiating energy is high here. I may need to stick with my original algorithmic processing criteria, as in the mechanical typewriter doesn’t function this way, while a digital device does.

      I actually see no grey area regarding the function of ribosomes. Here input substances interact algorithmically with genetic material to produce output proteins.

      Given the tremendous number of things which my models do address, let me suggest that it’s standard human psychology which leads you to keep bringing up the one thing that I quite publicly assert that my models do not address. Here I tell you something that I can’t provide, and so you naturally retort that I’ve omitted the exact thing which is required to assess the validity of the models that I’ve developed. It’s textbook. And the funny thing here is that while I consider the hard problem of consciousness to be a truly hard problem, and frame it as an output product of certain neuron based computers, you’ve been denying the hardness of this problem given your own models. Of course saying that it isn’t hard doesn’t make it so. I suppose it is what it is regardless of what you or I suspect.

      In any case this uncertainty isn’t something that I consider very consequential. I suppose that if an answer were gifted to the AI people, then they might be able to use it, though perhaps not since they might be too technically inept to output conscious computation by means of a non-conscious computer (as my model, at least, suggests is required).

      Psychology, psychiatry, sociology, cognitive science, and yes your favorite, neuroscience, do not need to understand the technicals of how brains manufacture phenomenal experience, I think. I don’t see how such information would benefit them today. But the critical element which they do currently lack, or the thing which I consider to mandate the softness of these sciences, is that they do not yet have effective definitions and models regarding the nature of what they study. These sciences have not yet found their “Newton”. And yes in the past some have gotten snarky with me here for my audacity to hope that I might help. Me? Yes, why not me? Shouldn’t all interested people be trying to help these fields improve?

      Who’s denying any attempt to examine the components of phenomenal experience? I not only welcome that exploration, but provide models which, if empirically supported, should be quite helpful in that quest. Indeed, without effective architectural models in these fields, when we look at the brain, how might we interpret such data? You may protest at the following, but neuroscience today seems similar to astronomy at the dawn of science. I don’t mean this in terms of lacking measurement devices, since we have plenty of those, but rather in terms of lacking theory from which to effectively interpret such information.

      I agree that my dual computer model does conform with many elements of what’s already accepted. For example people today refer to an “unconscious” dynamic. As I’ve mentioned in the past, I don’t like this term given that it seems to be used without notice in at least three different ways. Firstly it’s used as what I call “non-conscious”, such as for a computer or even a rock. Secondly it’s used as “quasi-conscious”, such as when one wishes to refer to conscious function that’s subtly influenced by the non-conscious computer, as in unacknowledged racial prejudices. Thirdly it’s used for “altered states of consciousness”, such as sleep or being stoned. So this term is one of many issues that I’d like to help clean up in these fields.

      In the end I foresee various generally accepted principles of metaphysics, epistemology, and axiology (whether future theorists here call themselves “scientists” and/or “philosophers”). I believe that this theory will come to found all branches of science.

      I’m quite sure that your “movement planner” and James’ “Mechanism” could find a place in my own model. Note that the conscious form of processor, which interprets inputs and constructs scenarios about what to do in the quest to promote personal value, is what I call “thought”.


      • Eric,
        On the distinction between computational and non-computational systems, I’ve had people assert to me that all kinds of things were computational: molecules, proteins, synapses, cells, plants, weather systems, stars, black holes, even rocks, not to mention the overall universe. I did a post on this subject a couple of years ago:

        The TL;DR is that whether a system is implementing a particular algorithm, or any algorithm, is a matter of interpretation. Yes, under computationalism, this means that consciousness is a matter of interpretation. It exists in the eye of the beholder. That post references a Stanford Encyclopedia of Philosophy article on computation in physical systems:
        My position generally pairs up with the limited pancomputationalist view.

        All that being said, as a pragmatic matter, I only consider it productive to treat systems that require minimal interpretive energy to be computational as computational. The only things I’ve historically considered to pass muster are the various forms of technological computers, and central nervous systems. DNA does strike me as a possible addition, although DNA specifically seems more like spools of data and programming. We have to consider the transcription proteins in the cell nucleus to get at the computer interpretation, but the overall nucleus, not to mention the overall cell, strikes me as a physical system, which leads to the issue I mentioned above of where to draw the borders.

        On your model, sorry if I hit a nerve. You keep putting your model forth, and I keep telling you what I think about it. I’ll try to keep my future responses to aspects I haven’t commented on before.


Your thoughts?
