Is consciousness a simulation engine, a prediction machine?

Back in September (which now seems like a million years ago), I did a series of posts on consciousness inspired by Todd Feinberg and Jon Mallatt’s recent book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  In that series, I explored consciousness as a system modeling its environment and itself as a guide to action.  My enthusiastic run of five posts reflected how much F&M’s excellent book had shaken me out of my anthropocentric views.

But that series had a glaring omission, and some of the people I’ve referred to it have called me on it.  F&M’s book was focused on animal consciousness and its evolution.  As I noted in the first post, while that broad approach had a lot of benefits, it had one big drawback.  Animals can’t describe their conscious experience, most notably what behavior they do consciously versus unconsciously.  As a result, this particular boundary wasn’t addressed in F&M’s book, and I only briefly alluded to it in the series.

This post on that boundary is admittedly my own speculation, informed by F&M’s book, but also by an outstanding article at Aeon by Anil K. Seth.  My long time readers will know that I’ve historically put a good deal of stock in metacognitive theories such as Michael Graziano’s attention schema or Michael Gazzaniga’s interpreter, where consciousness is a model of some aspects of the internal processing of the brain.  I still think these metacognitive models exist (it seems we use them anytime we have these discussions) but I’m less sure that they’re the sole crucial ingredient, although they could still be one of those ingredients.

Okay, so consider an early pre-Cambrian animal.  This animal doesn’t have a brain or even a spinal cord, but it does have a nerve net, with sensory neurons connecting directly to motor neurons.  If the animal receives a sensory stimulus (such as touch or maybe a chemical gradient), it triggers a signal to the motor neurons resulting in movement.  In this nervous system, stimulus A results in action A, stimulus B results in action B, etc.  While some conditioning can modify the processing, there’s no consciousness here, just reflex actions.
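The stimulus-to-action arrangement described above can be sketched as a simple lookup table. This is purely my toy illustration, not a biological model; the stimulus and action names are invented:

```python
# Toy sketch of a reflex-only nervous system: each stimulus maps
# directly to one fixed motor action, with no intermediate modeling
# and no decision-making in between.
REFLEXES = {
    "touch": "withdraw",
    "food_gradient": "approach",
}

def react(stimulus):
    """Return the hard-wired motor action for a stimulus, if any."""
    return REFLEXES.get(stimulus, "ignore")

print(react("touch"))         # withdraw
print(react("bright_light"))  # ignore
```

The point of the sketch is what is missing: there is no representation of the environment and no weighing of alternatives, just a fixed mapping that conditioning can at most adjust.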

Later species such as chordates developed a spinal cord.  This centralized cord allowed for a combination of sensory inputs to lead to combinations of actions.  So events A and B resulted in actions A and B.  Again, these actions, while modifiable by conditioning (a primitive form of learning), were still basically reflex actions.  Very few people (aside from panpsychists) think we’re at consciousness yet.

As animals began to develop distance senses (eyesight, hearing, smell), the amount of information available to the reflexes began to increase dramatically.  This led to the spinal cord enlarging near those distance senses so the information from them could be quickly processed.  The distance senses led to the creation of image maps, exteroceptive models of the environment.  The mental reflexes described above now reacted to information in the models rather than directly to sensory inputs.

These exteroceptive models of the environment, along with the interoceptive models of the animal’s body state, formed an inner world.  They provided the foundation of conscious experience.  But I’m not sure they are what we would call consciousness.  Another ingredient was necessary.

The large amount of information caused a problem.  The models resulted in situations where the quantity of action reflexes triggered by a particular set of circumstances could be large, with some of those triggered actions perhaps being incompatible with other triggered actions.  For example, an early Cambrian fish might see food off in the distance, which triggers a desire to approach and eat it, but not much further beyond the food is a predator, which triggers a desire to flee.

Our fish can’t do both actions.  It could follow the stronger impulse.  If it’s eaten recently, perhaps the urge to flee is stronger and that’s what it does.  But maybe it’s desperately hungry, so it does attempt to get the food and risk getting close to the predator.

But given the life and death circumstances, our fish needs a new ability.  It needs to be able to simulate what might happen if it takes certain actions.  Having the ability to be aware of its own primal reflexive desires, in other words to do affect modeling, and then do trade-off decision processing on which desire to listen to, would have provided a survival advantage.  This trade-off processing would involve running simulations: if action A is taken, it will result in consequence A, if action B, consequence B, etc.

In other words, the fish needs the ability to do predictive modeling on various possible courses of action, courses of action that would result from following each of its triggered action impulses.   The consequences revealed by each simulation are evaluated in turn by the limbic system (or fish equivalent), each resulting in its own negative or positive affect, in other words, an evaluation of whether the consequence is desirable or undesirable, “good” or “bad” for the organism.
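The simulate-evaluate-decide loop just described can be sketched in a few lines. Everything here is invented for illustration (the world flags, the consequence names, the affect scores); it is only meant to show the shape of the trade-off processing, with a toy evaluator standing in for the limbic system:

```python
# Toy sketch of trade-off processing: simulate each triggered impulse,
# score the predicted consequence with an affect evaluator, and act on
# the best-scoring option.  All names and numbers are invented.

def simulate(action, world):
    """Predict the consequence of taking `action` in `world` (toy model)."""
    if action == "approach_food":
        return "fed_but_near_predator" if world["predator_near"] else "fed"
    if action == "flee":
        return "safe_but_hungry"
    return "unchanged"

def affect(consequence, hunger):
    """Assign a positive or negative valence to a predicted consequence."""
    scores = {
        "fed": 2.0,
        "fed_but_near_predator": -1.0 + hunger,  # hunger tilts the trade-off
        "safe_but_hungry": 1.0 - hunger,
        "unchanged": 0.0,
    }
    return scores[consequence]

def decide(impulses, world, hunger):
    """Pick the impulse whose simulated consequence feels best."""
    return max(impulses, key=lambda a: affect(simulate(a, world), hunger))

world = {"predator_near": True}
print(decide(["approach_food", "flee"], world, hunger=0.2))  # flee
print(decide(["approach_food", "flee"], world, hunger=3.0))  # approach_food
```

A well-fed fish (low `hunger`) flees; a starving one risks the approach, matching the scenario above where the stronger impulse wins only after each simulated outcome has been evaluated.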

It’s this trade-off processing, this ability to simulate different courses of action, to do predictive modeling, that I’m suspecting is at the heart of what consciousness is.  This modeling would have been very simple in the earliest conscious creatures, but increased steadily over hundreds of millions of years in sophistication and capacity.  But at all times, it would have been the same basic functionality, simulations of possible courses of action as a guide to movement decisions.

Some of the predictive modeling would have involved simulating past sensory experiences, in other words, episodic memory.  It’s important to understand that episodic memory isn’t a recording, but a reconstruction of past sensory events, a simulation.  That’s why memory is so unreliable.  But it’s effective as an aid to the trade-off processing I’m talking about.

Consider what requires our own conscious awareness and what doesn’t.  I can often drive to work without being conscious of what I’m doing.  I’ve driven to work so many times that I can now do it in a habitual slumber.  More precisely, the non-conscious aspects of my mind have been conditioned so that they will supply the right movement decisions when presented with each specific stimulus of the driving-to-work experience.  Most of the time, this frees my mind to think about other things, to do simulations on other situations, like maybe what I’m going to do when I get to work, or maybe to mull that show I watched last night.

But then I suddenly run into severe traffic.  Now I “wake up” and have to think about what I’m going to do.  Can I get off the main highway and find an alternate route to get around the traffic?  I now need to simulate various courses of action.  I am “aware” and “thinking” about the drive now.  I am conscious of it now.

Or perhaps the drive is going normally, but I’m doing it in a borrowed car, perhaps a type and model I’ve never driven before that handles differently than what I’m used to.  Now my simulation engine is engaged in the minutiae of the driving mechanics, and will be until handling the new vehicle becomes “natural”, that is, until it can be done without the need for constant simulations, without the need for conscious control.

On the other hand, I might be driving to work in my habitual slumber, and suddenly there is a wreck happening and split-second decisions are necessary.  There is no time for conscious deliberation, no time for simulations; I have to just use whichever unconscious impulses are strongest.  Although later simulations of the event will almost certainly be done.

The Limbic System Image credit: OpenStax College via Wikipedia

If this view is right, then consciousness is a simulation engine, a prediction mechanism designed to serve as a guide to action, allowing an animal to subjectively travel backward or forward in time as it ponders movement decisions.  In humans, the simulations would likely be initiated by the prefrontal cortex but heavily involve the modeling aspects of the sensory processing regions, with the limbic system providing the evaluative aspects.

I stated at the beginning of this post that it was speculative, and it is.  But the predictive modeling, the simulations, certainly take place in some form or fashion.  The speculative aspect is that the simulations are consciousness: what is outside of them is what we call the subconscious or unconscious, and what is inside them are the contents of consciousness.

Given this speculation, I’d be very interested in any critiques, in particular any examples that demonstrably violate this proposition.  In other words, what have I overlooked?

89 Responses to Is consciousness a simulation engine, a prediction machine?

  1. Michael says:

    Hi Mike,

    Another intriguing post on an intriguing subject!

    I’m not sure that I follow your supposition in its entirety, so would like to ask a question or two to clarify. First, it seems to me that simulations may be conscious or unconscious–meaning that my computer can run simulations all day without being particularly conscious of them, right? I guess the problem with my question is that it includes a pre-existing bias as to what the word conscious means, so let me rephrase. If we substitute the idea of awareness, then it seems my brain can run all sorts of complicated simulations as part of its choice-making functions without my being aware of them in any way. I am pretty sure that this happens all the time, with my being aware only of the output–that feeling of this is “good” or this is “bad.” How does our awareness of the simulation enter into what you are suggesting? Does our being aware of the simulations that are running matter?

    And second, if simulations are consciousness, then how would you describe what is happening when one beholds a piece of art? Are you saying that some things we experience in our awareness are not consciousness per se, because they are not simulations? I struggle to understand how our response to a work of art or a musical composition would be the product of comparing various simulations to one another, but I can imagine how the information content of various works of art could trigger a sort of symbolic resonance within us that relies upon an active simulation of ‘self’ to occur. The information in the external stimulus is in some way related to the content of our simulation of self, with all its attending past failures and future dreams and emotional content, and thus we are able to form an opinion of it–to experience it if you will. But I have a hard time imagining a fish responding to Beethoven as meaningfully as you or I, and wonder how that difference would be expressed in terms of simulations.

    Thanks, Mike!


    • Hi Michael,
      Thanks! And excellent questions.

      First, I think it’s important to understand that with these simulations, or all the modeling we do, we’re not conscious of the actual details of the modeling. We can’t be since it is the very stuff of consciousness, of experience. As you said, only the result becomes part of our awareness. We can’t experience the mechanics of experience, only the end result.

      (Of course, we can consciously choose to model something where we are aware of the details, but that’s us doing modeling with our modeling, if you catch my meaning.)

      Second, it seems like there may be two broad types of modeling: passive and active. The passive modeling is always going on, even when we’re not paying attention, when our mind is “wandering”, and seems to be unconscious, at least until the active modeling, what I called the simulations in the post, start using its information. The active modeling, the simulations instigated by the executive centers of the brain, are our stream of consciousness, at least if I’m right.

      On art appreciation, I do think we are running simulations. For music in particular, it seems like we expect it to follow certain predictive patterns but with perhaps just a little bit of unexpected variation. Indeed, the difference between music and random noise seems to depend on those predictable patterns.

      Or consider fiction. Again, we expect certain patterns, but are bored if it’s too predictable. We want our expectations violated to some degree. Comedy in particular seems to depend on violating those expectations in ways that make us laugh.

      But here’s the question. Where are those expectations coming from if not from simulations?

      A fish definitely can’t respond to Beethoven in the way we do. It doesn’t have the capacities that we do. Its consciousness, in most cases, is at a much lower resolution than ours. That doesn’t mean it might not get something out of the rhythms. Many animals actually do like music. But it won’t carry the emotional resonance that humans get out of it, because it doesn’t have the common humanity that we share with Beethoven.

      Hope I was clear here. Please let me know if you have any questions.

      Thanks again!

      • Michael says:

        Thanks, Mike. I’ve much enjoyed the various conversations and threads here. I’ve got a couple of general questions I wonder if anyone has investigated, but want to preface that with a thought or two about the simulations. Do you agree or disagree that the passive modeling processes are often extremely fast, and of course could even be equivalent to summary models, or “curve fits,” to the massive quantity of simulations run daily as our brains develop? Basically, we try to walk, we fall down, we learn. We fine-tune our simulations. We basically “learn” what our body is and isn’t, and then formulate a curve fit so the detailed model isn’t really necessary, though it may be retained. And the second half of my thought then is that the active modeling is slower, more deliberate, and predicated upon some “awareness of awareness…” Do I have that right? I’m asking essentially, first, if the type of thinking we do when we write and comment on blogs is the product of what you’re describing as an active process?

        I ask because it seems there is an idea here that when we first learn something we think about it. We bring our slower, lumbering, realtime attention–our awareness of what is happening and the simulation of our conscious participation in it–to bear upon the matter. This is clunky until we learn and it becomes rote. But I would think a juvenile frog in learning to hop, which may be a much more passive process if we presume that “awareness of awareness” is the active state, also stumbles initially before its own simulations are tuned. At some level this tuning occurs without any awareness of awareness. But in humans much of what we learn seems first to pass through the lens of our “thinking about” it.

        Am I making sense in this introduction? I’m trying to understand, because I don’t know myself, if we relate awareness of awareness, or self-reflective awareness to say it differently, to active simulations or passive simulations as you described them, or to both depending on the neural complexity of the system. If both, then what is the most meaningful distinction between an active and a passive one? And if active only, then is “conscious learning” a relevant descriptor of more complex simulation engines?

        So that said, I’m curious about phase changes in these simulations, meaning that at some point what is occurring in the simulations–just like we observe in so many natural phenomena, such as heated water forming Rayleigh–Bénard cells, to give a classic example–changes qualitatively. Is there a mathematical or scientific definition of the minimum properties of a system that exhibits this quality of self-reflective awareness? I’m sure people have tried to define that, I just haven’t looked before and thought you may/probably have.

        As an aside, I sort of interpret your and Hariod’s discussion as leading to something like the wave-particle challenge of physics. I interpret what Hariod is saying to mean that what we call consciousness is not wholly understood/known by either the experience of it (the scent of the rose), or the knowledge of its physical causes (its chemistry), in isolation from one another. It is, in essence, both are necessary and true at once. We can’t really say one is primary over the other.

        All that said, I’ve been thinking of the parallels between energy and consciousness, similar to the previous paragraph. Energy itself is something we talk about a lot, but you can never really have just “energy.” Energy only exists or is revealed or expressed under particular conditions. One cannot really say that energy exists separately from the transformations of the physical systems in which we say, “oh! that was energetic!” We can’t tease it out. But still… there is a place where “energy” is quite ambiguous at the most fundamental level, and that is because it basically reduces to affinities that exist without mechanism. So I’m just seeing energy and awareness as analogues. Does a mountain with a boulder on top that is ready to roll down the side create energy? Obviously not, and yet nowhere can one find energy separate from these physical arrangements in which, given the proper conditions, energy may be revealed or expressed.

        The analogue is that we cannot really find where energy begins and ends. We could say energy is only particular structures of matter, and that everywhere energy is expressed we could find the material correlates, but at some level this does not answer the most basic question of what energy is. What is it? I don’t think we would call energy an emergent property of a mountain and a rock on it. And yet, we could never find energy without such a situation. At whatever scale, it requires the transformation of a system for us to find the characteristic signature of energy. And yet I think we would be reluctant to say any of those systems gave rise to energy itself?

        So–sorry for the long note here– I’ve been enjoying the notion that awareness itself–the most-basic-formless-objectless-attributeless-whatever-it-is, is akin to energy. We’re pretty convinced it exists, we see its effects, but we have this great difficulty in understanding whether it is the product of the conditions in which we observe it, or it is, like energy, existing in some way hard to comprehend that particular systems are able to reveal or express.

        I unfortunately didn’t have my mind around what I exactly wanted to say when I began, so I rambled. But I hope it wasn’t too bad, Mike!


        • Thanks Michael.

          I actually think you may be using the word “simulation” a bit too broadly, or at least more broadly than I was using it in the post. When we reach the point where we can do something by habit, I’m not sure we actually are still running simulations of it. Once we enter the pattern, our conditioned responses take over, with each stimulus responded to with pre-existing motor reactions, and the simulation engine focuses on other things. Which is another way of saying that once it’s a habit, we cease thinking about it while we do it, at least until something unusual forces us to.

          That being said, it may be possible that some simulation takes place during habitual tasks, but perhaps with a lower level of resource utilization. I’m open to that possibility. Given how messy evolved biological systems are, we shouldn’t expect them to have the clear lines of demarcation that engineered systems usually do.

          It’s definitely true that we don’t know what, fundamentally, energy actually is. At least other than saying that it’s the capacity for work, or the amount of potential for change. We can quantify its effects and track its transformations, but we don’t know the thing itself, except that it appears to be the vitality of the universe.

          The problem with thinking about consciousness as energy per se, is it appears that everything may be energy. Certainly all matter is energy. And if space isn’t itself energy, it appears to possess a great deal of it. (See dark energy and virtual particles.) So consciousness certainly is energy, but so is everything else.

          Consciousness is also information. Like energy though, everything could be considered to be information. Just as with energy, information cannot be created or destroyed, only changed and channeled. Even black holes, so physicists assure us, don’t destroy information.

          But consciousness as energy or information doesn’t really tell us what about it is different as compared to, say, the sun. Neuroscientist Giulio Tononi thinks that the difference amounts to the degree of integration of that information. Based on my explorations, I think integration is crucial for consciousness, but isn’t by itself sufficient.

          People have pointed out to Tononi some highly integrated systems that show no signs of awareness. Tononi’s response is to assert that they are conscious. Tononi is a panpsychist, so this is right in line with his worldview. But even if we accepted that proposition (which I’m skeptical of), I’m still interested in the difference between animal consciousness and the supposed “consciousness” of these highly integrated systems.

          That gets us into what the integration is used for and the view expressed in these series of posts. I think the integration of a conscious system is used for, among other things, modeling the environment and the system itself in relation to the system’s goals, and when the momentary goals conflict, doing simulations on what happens when attempting to fulfill each goal.

          The final attribute many seem to insist is crucial is that the goals be similar to those of a human or other animals: self concern, survival, etc. Myself, I don’t necessarily see this as necessary, but I’ll admit that a consciousness without it is very alien, so much so that humans may never generally accept it as a fellow being. And even I might require it to consider the entity to be a moral agent.

          I suspect what you’re thinking is similar to what a lot of others think, but hopefully my remarks here make sense. (Let me know if any don’t.)

          No worries on long comments. Obviously I can give as well as receive in that department 🙂

          • Oscardewilde says:

            As you state “consciousness is information,” is that a dualist view? Just wondering. So the brain uses consciousness as a source of information?

          • Oscardewilde,
            I’d focus on the point that while consciousness is information, “information” by itself isn’t a sufficient description, mainly because everything is information. So, no on the dualist question. On your second question, a better way to describe it may be that consciousness is part of the information processing of the brain.


          • Michael says:

            Hi Mike,

            Thanks for this. I just want to note my closing was not to suggest that consciousness is energy, or even information; rather to suggest that I found it interesting to consider that the type of awareness we’ve discussed over at Hariod’s site recently, and which has also been touched upon here in the commentary–that fundamental substrate of consciousness– could be every bit as fundamental to the universe as what we call energy.

            I’ve not really read anyone who was a self-described panpsychist, and it may seem as if I am implying panpsychism–I’m not sure–but I’m not suggesting that rocks think or anything. Or that they are self-aware. What I am saying is that if one is willing to consider that what we’ve learned about energy may have corollaries to what we might discover regarding the most basic type/unit of awareness–much like the study of sonic black holes in fluid systems may suggest properties/characteristics to explore in stellar black holes– then some interesting things emerge.

            One suggestion I think is interesting, as a pattern of our thinking, is that we don’t typically think of a rock on top of a hill as being energy. We think of it as a configuration that has some energetic potential. Energy is like this “thing” that such a configuration makes plain, but the configuration is not the thing itself. Does that make sense?

            Not to say they are identical, but to think in terms of patterns, or by analogy, human consciousness then could be imagined as a particularly complex arrangement or configuration of a very simple “thing” called awareness. It could be a particular organization of awareness just as a flower could be described as a particular organization of energy. The qualities or properties or flavors of this awareness then would depend upon the systems on which they functioned, much as the energetics of systems vary widely depending on their configurations.

            The distinctions you are interested in: the difference between animal and human awareness, or between consciousness and the sun, would not be lost in my opinion. One doesn’t say, “oh, the sun, yeah–that’s just energy,” because it is the particular characteristics of the sun, and its responses to other factors at work in the universe, that are interesting. Likewise, human consciousness is interesting because of its characteristics, which clearly differ from other forms of consciousness, and which may be different still from artificial consciousness. It is interesting to explore those differences I think, and to understand them as deeply as possible.

            I’m merely suggesting that one could draw an analogy between energy and awareness, such that awareness could be considered an irreducible property of whatever this universe is, much as energy is considered. Awareness, much like energy, is made plain in those systems which possess the particular sorts of structures, patterns or configurations in which it is enabled to act or express.


        • Hi Michael,
          You actually might want to read some of the writings of panpsychists, notably someone like David Chalmers, who I don’t think is strictly a panpsychist but seems taken with their ideas, or Philip Goff, who recently did a post on it.

          Many of them reason in the same way you are, with awareness or consciousness being fundamental. I don’t personally find this line of thought persuasive, but the nice thing about this kind of panpsychism is that, while science doesn’t support it, it doesn’t necessarily contradict it either.

          • Michael says:

            Thank you very much for the links, Mike. I read one of Chalmers’ papers in which he summarized a taxonomy of positions regarding views on consciousness that was interesting (“Consciousness and its Place in Nature”), and I would say that the options he keeps on the table are ones I think about in various ways as well. I also read a few posts from Goff including the one you sent me, and felt a little less resonance with it–I think because I resist the notion that what he describes as panpsychism is the simplest solution. It felt pretty chintzy to me to suggest that because particular macroscopic entities like human beings have a complex inner life, and because there is continuity between micro- and macroscopic structures in nature, that electrons must have inner lives, too. One challenge of course is how little space one has to define terms and to develop concepts, so that I don’t really know what Goff was trying to say.

            What I would say is that I find myself resistant to the idea that some primordial qubit of consciousness is a property of the simplest forms of matter, and that more complex forms of consciousness result from ways in which these simplest “proto-particles” of consciousness combine. That doesn’t resonate for me, and it really is extraneous to a physical theory if one goes so far as to suggest that the basic unit of consciousness is attached to a particular unit of matter–as if the Planck unit of consciousness and the Planck unit of mass were somehow inseparable. To do so makes the postulate of a primordial sort of awareness moot. There is nothing simple about that at all, and it is wholly unnecessary except for being an attempt at resolving the hard problem of consciousness.

            So in an effort to explain what I do think, I realize one element of my thinking that is unique from what I read on those sites is the notion that there is only one fundamental awareness. Imagine that you personally played the video game “Frogger” in 1982 and later played the puzzle game “Myst” on your computer in 1998. To a viewer who only saw the frog and the hero/ine of Myst, the true “you” behind the controls would not have been apparent. The frog and the hero would come off as decidedly different entities and forms of consciousness, but really it is the limited functionality of the interface that makes this so. That is closer to what I think or imagine–though the analogy is flawed in various ways–than the idea that electrons have or possess a simplified awareness all their own, independent of every other one, and that a lot of little awarenesses running through a physical neural system produce consciousness. Human consciousness would be different than frog consciousness because the capacity for expression is different. There is a more complex interface.

            In truth I think there is a lot of paradox in attempting to use language to explain what is so. Thanks for letting me ramble!


    • Thanks Michael. If I’m seeing the distinction you’re making correctly, panpsychists see everything as conscious, with certain structures having higher concentrations of it than others, but you see consciousness as a fundamental force that only certain structures can tap into or channel.

      I can’t say I see either view as persuasive. For me, consciousness is simply part of what brains do, their functionality. Of course, a panpsychist might insist that I’m saying that brains have higher concentrations of that functionality, and you could argue that I’m saying brains interface with the platonic existence of that functionality.

      In philosophical discussions, it’s often difficult to know whether people are talking about different ontologies or different descriptions of the same ontology.

      • Michael says:

        Hi Mike,

        I’m not sure I would see it as very persuasive either. I think the most honest thing I could say is that I haven’t reached any conclusions in my thinking, nor do I think I have the ability to explain what is and isn’t so; I just know that I don’t find materialist perspectives sufficient to explain the content of my life experiences or the world as I have witnessed it. The truth is I learn a great deal in attempting to present what I think to a person such as yourself, who I feel is quite rare in a certain sense–being open to the discussion despite viewing the situation differently, being thorough and thoughtful in your own investigations, but also allowing the discussion to proceed in a manner that is constructive and open-ended. I would say ultimately we need much more of that.

        One thing I would say is that I don’t think your restatement to me of my own idea is quite how I see it, when you said that “[I] see consciousness as a fundamental force that only certain structures can tap into or channel.” I would say that something akin to “awareness” or the raw “ability to know” fundamentally exists, and that it relates to all structures of matter and energy, but that what is revealed in each unique relationship is dependent upon the constraints of the relationship. I simply don’t think matter-energy are the cause of this fundamental awareness; that matter-energy serve as the “containers” of this fundamental awareness; that this fundamental awareness “accrues” arithmetically or geometrically in proportion to physical aggregation or that electrons have a fundamental “charge” of awareness alongside of their electromagnetic charge; or that it is per se quantifiable in the way physical forces are.

        There is no aspect of material or energetic aggregation that is not in relationship in some way with this fundamental awareness, but that is not the same as suggesting fundamental particles have social lives, or that human consciousness could be other than it is: because the structure of the entire human organism is a defining characteristic of the relationship. Meaning, human consciousness is as related to the physical structure of the human organism as ever.

        I recognize my thoughts here serve little to no purpose in a scientific paradigm of investigation, and that one may ask what the point is. What does one gain or lose from this hypothesis? It is quite difficult for me to say, Mike, but I feel it is important to at least try to say this. For myself, the initial answer is that one gains or loses the possibility that unity is fundamental. We gain/lose the strongest possible basis for asserting that all life matters, that we are equal to and related to one another in ways that far supersede our measurable differences. And I think we gain/lose the possibility that the cultivation of compassion and generosity have real transformative power in the broadest sense possible.

        I guess ultimately it appears to me that we lose the possibility of defining ourselves not by our measurables, but by our immeasurables–including what I’ll call the human heart, and that feels significant to me. We stand to gain/lose what I personally see as most valuable in us as beings, which in other worldviews must necessarily reside somewhere along the spectrum of being an important byproduct to being wholly non-existent. And with the marginalization of this component of our being, I think we lose the ability to heal internal divides in our own awareness/consciousness that ultimately lead to suffering. We lose the possibility of genuine freedom. So in a sense, I see that “everything” is at stake.

        Having said that, that is simply the world as I perceive it, and I don’t think it makes sense to suggest it is undeniably so. As I said at the outset to this comment, I am not so brash as to think I have a view that stands up to the rigors of every sort of logic, that would work for others necessarily, or that I even know whether or not what I think is logically consistent. I’ll close by noting I do not feel as though I stand on the “other side” of something with you, however, or those who think differently than I do. As I said at the outset, I find many qualities here that I greatly respect and admire, and the opportunity to have a discussion with people who think differently than we do can be very rewarding. So, I’ll close by adding also, that there is nothing I find here that I would change.


        • Thanks Michael.

          The nice thing about these discussions, although they rarely change my mind, is that articulating our positions often allows me to clarify my thinking. Which is to say, you’re not alone in learning through the presentation. And this is one of the main reasons I blog.

          I think the only response I would make to your concluding remarks is that I’m not sure it’s good to tie many of the things you hope to gain and not lose to the ontology of consciousness. Even if consciousness is, as I believe, information processing, nothing about that should cause us not to value unity, life, equality, compassion, healing divides, or alleviating suffering. Indeed, if all we can count on in this universe is each other, it makes those things all the more precious.

          Enjoyed the discussion!

  2. James Pailly says:

    This reminds me of something I read several years ago about dreams. Obviously we don’t really know why we dream, but it was hypothesized that dreams are supposed to simulate situations that the subconscious believes we might encounter in real life. They’re sort of like mental training exercises.

    Not sure if this is quite the same as what you’re talking about, but I thought it might be worth mentioning.

  3. milesmutka says:

    Assuming it is all biological, not some actual engineered engine you are talking about: I believe there are actually several, fairly independent systems in the brain, all making competing predictions (sort of like Society of Mind). As far as I know, consciousness has never been localized to any specific area or mechanism inside the brain.

    • I’m definitely talking biological in this post, although in principle, it could be about the design of AI systems, particularly ones like self-driving cars, which must model their environment in something like the manner animals do. Not sure if they do simulations though.

      The brain definitely operates with independently executing modules, all in a massively parallel fashion. There’s no reason why multiple simulations couldn’t be happening in parallel, perhaps in a manner similar to Dennett’s multiple drafts theory. I didn’t go into that aspect of it in the post because it was already getting kind of longish.

      Just to clarify, I’m actually not arguing that consciousness is localized to any one location. Sensory information comes in and is processed by independent regions, which are then integrated in multiple other regions. The models span all of these regions. When their content triggers something in the frontal lobes to initiate a simulation, that simulation requires participation from all these other regions, with vision processing centers supplying the visual portion of the simulation, auditory centers the audible portion, valences from the limbic system, etc.

  4. Steve Ruis says:

    I love your inquiries! With regard to predictive modeling, our brains are absolute beasts when it comes to pattern identification. We even see patterns that are not there! But as far as predictive abilities go, we basically suck. More and more, our rational processing and decision making is being exposed as based upon emotion and not on any kind of logic or “if-then” processes. In essence we shortcut a laborious, quite inaccurate rational process for quicker emotion-based processes.

    So, what does consciousness have to do with this? We always think of ourselves as captains of a ship, that “we,” meaning our conscious selves, make all of the decisions. Nope, not even close. We also think of our consciousness as an ability to be aware of our internal processes and override them at will (pun intended). Uh, again, not even close.

    So, the more basic question is “What is consciousness good for?” What can we do better because of consciousness than we could do without it? (I know the hard part is knowing what non-conscious humans could be capable of, but I suspect that a great many of us act as if we are non-conscious by flooding what conscious activity we do engage in with Facebook kitten videos, et al.)

    • Thanks Steve!

      I would say not to sell our predictive abilities too short. Within the boundaries of our evolutionary environment, they probably worked pretty well. We wouldn’t have survived if they hadn’t.

      But they’re often not well suited to modern environments and concepts, many of which we have to understand using metaphors and analogies from the day-to-day environment we’re most familiar with. We only understand things like physics, biology, or economics using metaphors that a hominid brain can easily work with.

      We can definitely act without consciousness. Consciousness doesn’t control what happens, but it does have what I call “causal influence”. This makes a little more sense if we use the word “awareness” instead of “consciousness”. Awareness doesn’t control actions, but the contents of awareness affect what actions will be taken.

      I think the answer to your last question is that consciousness is good for prediction (although it’s far from perfect at it). This isn’t my proposition. A lot of neurobiologists are coming to this conclusion. (See the linked Aeon article in the post.)

      One thing we do know from brain injured patients who have had their consciousness compromised, is that their ability to survive and navigate the world is profoundly compromised. How compromised depends on which aspects of their consciousness have been affected.

      • Steve Ruis says:

        Re “I would say not to sell our predictive abilities too short. Within the boundaries of our evolutionary environment, they probably worked pretty well. We wouldn’t have survived if they hadn’t.” This doesn’t apply. Survivability has to do with awareness but not the ability to be aware that one is aware, i.e. consciousness. Look at all of the species that have “survived” or not. I don’t think consciousness is a big player there.

        Now see what you have done! I have been trying to avoid philosophy, especially thinking about thinking, at least for a while and … well, I got out, and you suck me right back in!

        • “Survivability has to do with awareness but not the ability to be aware that one is aware, i.e. consciousness.”

          Actually, I’d say it has to do with both. Awareness increases our chances of survival, but awareness of our awareness increases it even more. But the question is, is an animal who’s only aware, but not aware of its awareness, conscious? I’m often not aware of my own awareness unless I’m thinking about it. I think animals who are only aware are conscious, but admittedly there’s no fact of the matter here, just philosophical outlook.

          Haha! Sorry for dragging you back into philosophy. I can always do more posts about Trump 🙂


  5. To SelfAwarePatterns,

    You wrote, ” My enthusiastic run of five posts reflected how much F&M’s excellent book had shaken me out of my anthropocentric views.”

    Are you sure that now you have come out of your anthropocentric views?

    • Hi ontologicalrealist,
      Well, I’m sure my views on consciousness are less anthropocentric than they were before. But complete freedom from it probably isn’t possible for a human.

      Why do you ask? Are you seeing something I need to reconsider?

  6. Mike,
    I love that we’re getting back into the F&M consciousness stuff, though spending some time on Trump has obviously been important.

    I wonder if I could make a friendly amendment for your proposal here? I believe that we should avoid speculation about what consciousness “is,” and instead directly define the term in any manner at all that seems “useful.” Though there are no true definitions, my first principle of epistemology seems widely violated in academia today.

    I do consider your consciousness definition useful, and surely so given its correspondence with the broader set of models which I’ve developed. (I would clarify however that from my models the simulations are a part of consciousness, rather than the whole thing, since it wasn’t clear to me if you meant this.) I’ll now interpret your presented scenarios from my own perspective.

    It seems to me that your early pre-Cambrian animal, with nerves but no true processor for inputs, might be analogized with a mechanical typewriter. Pressing a key on such a device forces an associated arm to rise up to strike the paper, perhaps like your mentioned sensory neurons that are directly connected to motor neurons. I define each subject here as “mechanical.”

    Your next scenario also included a spinal cord, which you said “allowed for a combination of sensory inputs to lead to combinations of actions.” To me this suggests the development of a primitive input “processing” capacity. My analogy for this will be a cheap digital calculator. Here an input might be “2 + 2 =” and once processed an output of “4” can be expected. If something is said to accept inputs, process them, and then provide output, I refer to this as “mind.” By default I term all else “mechanical.”

    We were then shown a fish with a non-conscious mind that had to deal with competing processing demands, such as “food vs. safety.” (It may have been a faux pas to mention “desires,” and “hunger” however, since to me they imply consciousness.) Nevertheless I’d say that this fish might have the ability to simulate the future given various potential actions which it might take, at least in the algorithmic manner that a computer plays chess. Consciousness seems to function in a very different manner however. If you like I’ll provide a brief account of my own consciousness model to help demonstrate this difference. The upshot is that there exists a conscious processor (called “thought”) which 1.) interprets inputs and 2.) constructs scenarios, in the quest to 3.) promote its utility. So this second component of conscious processing is what you’ve defined as “conscious,” and I include this in my own model as well.

    • Eric,
      I have no objection to your amendment. In practice, my language is often that of a philosophical realist, and like most scientists I’m usually more motivated by the realist outlook, but ultimately I’m more of an instrumentalist. It’s just that always stopping to put in the instrumentalist disclaimer makes for very wordy posts that, I think, often obscure the point I’m trying to make. As Brian Greene once noted, sometimes you have to make your point messily, then clean up afterward.

      I think you have the right idea that the non-brain nervous systems are more like typewriters and calculators, although in fairness to those creatures, their nervous systems could still be adaptive through conditioning in a way that a typewriter or calculator can’t. I think to get to typewriter / calculator primitiveness, we have to drop to single celled organisms and their autonomous responses, although even some of them (paramecia come to mind) have an incipient type of conditioning. Now that I think of it, we might have to drop down to individual proteins to get typewriter primitiveness.

      “The upshot is that there exists a conscious processor (called “thought”) which 1.) interprets inputs and 2.) constructs scenarios, in the quest to 3.) promote its utility. ”

      I can see what you’re getting at, but it seems to me that 1) can happen unconsciously, but I suppose it depends on exactly what we mean by “interpret”. Although the more sophisticated we imagine interpretation becoming, the more it seems like we could be crossing into scenario construction.

      Could you maybe expand on what you mean by 3)? I’m not sure I’m understanding what you’re saying here.

    • Mike,
      I’m happy to hear that you seek “useful” definitions for terms like consciousness, rather than “true” definitions, even though you seem quite partial to the suspect “is” term. It’s a standard convention, though I hope you don’t mind me suggesting that it’s quite problematic. I consider the hard sciences harmed by it, and the soft sciences very much more so.

      Regarding the typewriter and calculator analogies, it may not have been clear that I was actually making a fundamental sort of distinction between them. One functions like a building, a star, an atom, or even a plant. I call such things “mechanical.” The other functions in the same essential manner as a vast supercomputer does. I call this “mind.”

      I’m happy that you’ve mentioned single celled organisms, because I do theorize “mind” here. I’m no expert, but I believe that most or all such life harbors genetic material (it’s “mind”) from which to process received input information for associated output. From my definition they function in the manner of a digital calculator and a vast supercomputer. Does anything have this “mind” that isn’t biological, or isn’t (such as our computers) built by something biological? I’d love any suggestions because I can’t think of any.

      Yes Mike, dropping down to proteins is exactly what I’m suggesting to get to the primitiveness of mechanical typewriters. (BTW, I consider plants to be perfectly mechanical, even though their cells should have individual minds from which to function.)

      You’re right that I believe non-conscious minds “interpret inputs” as I’ve mentioned of conscious minds — I have no reason to think that I do so while cheap calculators do not. The difference I see here is in the full manner by which these two fundamentally different varieties of computer function. I’d love to go into this right now, but I’ll need to be tactful about it. Hopefully we’ll have plenty of time to discuss the difference as I see it. For the moment just know that my model does support the speculation that you’ve provided here.

      You’ve asked me about “3.)” regarding a conscious processor that interprets inputs and constructs scenarios in the quest to 3.) “promote its utility.”

      Imagine a standard computer which is so advanced, that it creates a separate “conscious” type of computer to run on top of it as well. Of course standard computers function just as obliviously as the rest of reality does, but for this auxiliary computer on top, existence can be anywhere from horrible to wonderful. I’m saying that the non-conscious mind produces “utility” which drives the function of this auxiliary computer, given that it now has incentive to try to feel good and try not to feel bad.

      This is why in your last post I took you into a discussion of “value” (apparently produced by the non-conscious mind to motivate conscious function) rather than “values” (apparently an “arbitrary” product of instinct and culture). Given that academia today is quite focused upon studying the “morality” of what’s good and bad (arbitrary) rather than the “reality” of what’s good and bad for any given subject (utility it would seem, which can have very repugnant implications), the wild speculation associated with consciousness today seems quite logical to me. I believe I can help straighten things out, but given the circumstances, must do so diplomatically.

      • Eric,
        I can see where you’re coming from on ‘is’ and ‘truth’, but I’m reluctant to give them up.

        First, whenever we’re discussing anything about minds, I think it’s important to make sure people know whether we’re talking in dualistic or monistic terms. I’m a monist who has learned that loose language in these discussions often gives the wrong idea. It’s why I now resist phrases like “neural correlates” or “consciousness arising”. These phrases are implicitly dualistic. For instance, we would never talk about how Mac OS X “arises” from Mac hardware. Unless we buy into dualism, I think we should resist doing it for minds and brains.

        Second, just to reiterate, the simple utility of words like ‘is’ and ‘truth’ allow me to communicate something quickly, clearly, and forcefully. It’s also edgier and riskier than using hedging terms, which makes the possibility of being shown to be wrong higher, but I see that as a feature, since if I’m wrong, I want someone to demonstrate it.

        All of this, of course, can be complicated depending on your theory of truth. Mine tends toward the pragmatic ones. So when you see me use terms like ‘is’ or ‘truth’, just be aware that I’m using them within a pragmatic frame of mind.

        I don’t think I would use the word “mind” for genetic information. Certainly the recipe for the organism resides in its chromosomes, but there are just too many aspects missing of what we normally consider part of a mind. I tend to think of it as more of a control database, but no analogy is going to be perfect.

        Thanks for explaining the promotion of utility. I think I would say that that is outside of consciousness as well. It’s what creates conscious affects, that is, the feeling of emotional reactions. It’s a concept very similar to Antonio Damasio’s notion of biological value. All living organisms have impulses / instincts that promote survival. It’s what distinguishes them from non-living systems. Those impulses don’t originate in consciousness, but rather consciousness exists to facilitate them.

        Admittedly, that might be hair splitting on my part. You might have meant affect awareness by 3) rather than the instincts themselves: their effect on consciousness, rather than the impulses in and of themselves. In which case, I definitely agree that it’s a part of consciousness. It’s what triggers the simulations, and then what judges the conclusion of each simulation.

    • Mike, I don’t mean for you to give up “is” and “truth” cold turkey. In normal life I use such terms all the time. But when I’m in an academic setting and I hear speculation about what consciousness “is” — terminology which implies that such a thing exists for us to “discover” rather than “define” — I do cringe a bit. You might suspect that I’d eventually assimilate given that even our encyclopedias take this stance, but I don’t. I wouldn’t mind so much if the field of epistemology already had something like my first principle of epistemology at its disposal, and therefore all of this “is-ing” might simply be a pragmatic shortcut? That’s just not the case however. In three years of active blogging, no one has yet mentioned that some prominent person has championed my “There are no true definitions” position.

      I’d be inclined to give the “scientismists” a shot at epistemology, but then they have it and fail to develop generally accepted epistemological principles just like everyone else. (BTW, I absolutely hate the science/philosophy divide. To me it seems quite dualistic for any so called naturalists to claim that certain aspects of reality can only be explored under one classification, while the other classification can only be explored through a separate path.)

      On your first justification, I’m going to ironically mention a moralistic principle that my mother used to tell me: “Two wrongs don’t make a right.” On your second, yes I agree that words like “is” and “truth” are great for speed, clarity, and forcefulness. It’s cumbersome to say, “What is a useful definition for time?” rather than “What is time?” This speed surely does sacrifice accuracy however. I’m not sure if my approach makes me less clear than others, though you’re certainly right that it’s more forceful to say “I know…” rather than “I believe…” But doesn’t my position correspond better with your wonderful “Unless I’ve missed something?” line?

      I’m with you on pragmatism. Regarding “truth” however, consider my second principle of epistemology: “There is only one process by which anything conscious, consciously figures anything out: It takes what it thinks it knows (evidence) and checks to see how consistent this is with what it’s not so sure about (theory). As evidence continues to remain consistent with theory, it tends to become believed with greater conviction.” It’s a humble position, but I think no more than we deserve. The only “truth” that I believe can ever be known with certainty is Rene Descartes’ “I think…”

      Then moving on to the definition that I use for “mind,” regardless of convention, is it not useful to have a term for that which processes inputs for associated output? This is not a typewriter, molecule, galaxy, or atom, but it does include your post Cambrian animal with a spinal cord, a living cell, a calculator, a supercomputer, an ant, and the human.

      I’d be interested to hear more about your “control database” definition for mind.

      Regarding 3), I don’t disagree with what you’ve mentioned and its similarity with Antonio Damasio’s ideas, but yes my meaning was more “affect awareness.” A less formal term is simply “happiness.” Regardless of “survival,” I consider this stuff to be the only aspect of existence which has value to anything at all. As you say, “It’s what triggers the simulations, and then what judges the conclusion of each simulation.” It seems to be the “fuel” which drives the conscious mind. We may always be too ignorant to incorporate this into our computers, and thus never make them conscious, but first things first. Our mental and behavioral sciences remain quite primitive given that we don’t yet have even a conceptual understanding of what evolution developed us to be. I believe that great progress can be made, though cherished institutions such as “morality” may need to be sacrificed.

      • Eric,
        I’ve actually found most scientists to have a very pragmatic attitude toward truth. They see themselves as pursuing it, because that’s what motivates them. When pressed, many will admit that instrumentalism is a more solid epistemic position, but high utility frameworks for predicting observations just don’t excite them as much as the quest for truth, even if, like the speed of light, it can never actually be achieved, only approached to ever closer approximations.

        But in my experience, it’s usually the philosophers and theologians who want to quibble over truth itself, usually in a desire to get it away from those pesky scientists and their experiments, data, and mathematics.

        On the definition of “mind”, my remarks were only about using it to refer to genetic information. Sorry for the confusion. I don’t actually have a control database definition for mind. I was using that label for the sum total of an organism’s DNA in chromosomes.

        I think defining a mind as any system with inputs and outputs is too broad. Of course, definitions are utterly relativistic, but I think using “mind” that way is too far from what most of us think of when we use that word. It’s similar to some panpsychists who say that consciousness is any system that interacts with its environment, then use that definition to insist that protons are conscious. Technically, using their definition, that’s true, but it makes their assertion sound more profound than it actually is.

        To me, for a system to earn the “mind” label in its common meaning, it needs to include goals, memory, learning, a model of itself in relation to its environment, and at least some degree of intelligence, the ability to make effective predictions. Many might say that its goals need to include self concern.

        On affect modeling being crucial, it seems like we’re on the same page. I do think we’ll eventually be able to develop it in machines. The first one might be a self-driving car, an autonomous Mars rover, or something along those lines. Although the radical differences between our own affects and those of these machines may not lead us to intuitively see them as conscious.

    • Mike,
      I have no problem saying that I seek truth. I’m entirely convinced of this actually. The problem I have is saying that I possess truth, at least in all regards except for one. The one truth that I do have, is that “I think.” I consider this the processing element of my conscious mind, and no demon genius or brain in vat scenario could ever get around this one truth that I know to be true. You could say the same yourself, that is, should you actually exist. Only you yourself could know such a thing to be true however.

      I consider it the job of philosophers to quibble over “truth,” and indeed, that’s exactly what I’ve just done. But what else could intelligently be said about it? I believe that the above needs to become generally accepted in the field of epistemology. Furthermore there are my two practical principles, which concern definition as well as the theorized single process by which anything consciously figures anything out. Yes the hard sciences may be developing ever more effective theory, somewhat like ever increasing velocity. But what about psychology, psychiatry, sociology, cognitive science…? Ha! I want to help, beginning with improved epistemology.

      I’m very receptive to your suggestion that I may be pushing the limits too far away from normal conceptions of “mind.” (I hadn’t realized that some panpsychists hold their position by devaluing what we commonly consider “consciousness.” To me it sounds like they’ve got too much time on their hands!) Regardless I have tremendous uses for a term which represents an “input/processing/output” distinction, whether this is called “mind” or something else. How about “proto mind”? This modifier references “earliest form of.”

      Though you haven’t developed a control database definition for mind, you have expressed what you believe people mean by it generally, including “goals, memory, learning, a model of itself in relation to its environment, and at least some degree of intelligence, the ability to make effective predictions. Many might say that its goals need to include self concern.” Wikipedia states similar things (as well as taunts me with profuse “What the mind is” mistakes). But it seems to me that this definition redundantly references consciousness itself. So when someone loses all consciousness, and therefore can’t do these noted “mental” things, shall we say that the person has no mind left at all? If so, then what instructs the heart to beat and countless more? Surely it would be useful to say that this person has a “non-conscious mind,” or a vast supercomputer which facilitates a “conscious mind”? And then if we’re able to liberalize the mind term in this manner, then a “biological” mandate would seem quite superficial to me as well (though it would still work for genetic material to be referenced as “mind” for cells and so on).

      Now then, where would it be most useful to draw the line between “mental” and “mechanical”? I think you nailed it in this post with your pre-Cambrian animal in which one input produces exactly one output, versus the one with a spinal cord that could process inputs as a computer does for more involved sorts of output.

      Mike, I’m also not naïve about how humanity functions, and thus if standard convention mandates that I discuss my ideas from a “proto mind” term, then I will. Yes, I will go to the mountain rather than demand that the mountain come to me. But the point is that this “mountain” still remains in a horrible location. To me that Wikipedia “mind” article was yet another reminder. I mean to help harden these soft sciences, and it’s clear to me that you and your friends share this quest as well!

      • Eric,
        Again, I think most scientists would actually agree with your views about what we can say we know to be true. It’s not clear to me that your views here are as radical as you may believe, except perhaps for the language that should be used.

        On psychology, sociology, etc, I think the main problem those disciplines face is the nature of what they’re studying. Social, cultural, and economic relationships and processes are constantly changing, putting any determinations about them somewhat on a foundation of sand. On top of that, any study of people has to contend with the fact that, once published, the results might influence the subsequent behavior of those people, resulting in something of an endless arms race, a particular problem for economics.

        Given that landscape, the results from those fields will never have the certainty or timelessness of the natural sciences. But even among the natural sciences, degrees of confidence vary. A biologist, for instance, will never have the 5-sigma confidence possible with particle physics experiments.

        That said, I do think the social sciences have a reproducibility crisis, but so does the drug testing field. All of them, it seems to me, need to increase their sample sizes and be more open with their data, and reproducing a previous study, given how crucial it is, needs to be a respectable use of a researcher’s time. But the degree to which they do all that has to be balanced against costs and logistical feasibility.

        On what to call a system with inputs and outputs, that’s an interesting question. My initial reaction is something like “information system”, but I have to admit I might come up with a better term after thinking about it for a while.

        You made me look up the Wikipedia mind article 🙂 It does seem to do what many encyclopedias and dictionaries often do, define things in terms of synonyms, or near synonyms. Given their educational mission, that’s probably their best strategy, but I agree it makes them less useful for someone looking for fundamental definitions.

        On instructing the heart to beat and other autonomic functions, I think we need to consider a brain injured patient where that’s all the functionality they had left. Would we consider that person to still have a mind? I think most of us would say no.

        It gets more difficult if they have lower level voluntary functionality, such as the lower level eye movement driven by the brain stem, but their cerebrum has been destroyed. Arguably this is the functionality a newborn has, but healthy babies go on to develop full consciousness. Whether a patient permanently reduced to this state still has a mind becomes a matter of philosophy. They would have many of the attributes I listed, albeit to a lesser degree, but probably not the simulations, in other words, consciousness, although no one could know for sure. Maybe “proto mind” is a good label for this state.

        If you do indeed have a way to harden the soft sciences, there may be a Nobel waiting for you, which gives you an idea how difficult I think the problem is. 🙂


    • Mike, let me provide some background information regarding where I’m coming from. (This may expose some of my own biases, but that’s all the better.)

      For whatever reason, as a kid I took the statements of authority figures in perfectly literal ways. Thus I’d become quite perturbed when the standard moralistic principles I was told so often would fail; much of it didn’t withstand scrutiny. At about the age of 15 I feel I was rescued from this by the realization that each and every one of us is a selfish product of our environment. This was a principle that seemed to explain the function of both hardened criminals and beloved grandparents. So rather than compromise my extremely literal nature (which you’ve observed regarding epistemology), I cast away standard moral notions for a position that conformed with my general observations.

      In college I planned to hone this perspective from what associated fields had developed, but quickly became disappointed in them. They didn’t seem to understand that we’re all selfish products of our environments. (That is, except for the field I ultimately received a degree in, namely economics, though as I mentioned to Oscar below, a disclaimer does soften its stance.) So I decided that I would not accept the conventions of these fields (like “soft sciences can never harden”) while attempting to straighten this stuff out for myself.

      After another fifteen years or so I did become quite happy with what I’d developed, but wondered who would care? I could see that I would need something more than just a supposedly solid foundation from which to properly explore these fields. Therefore to demonstrate the merits of my position (at least to myself), I used my premise to develop what I consider to be an extremely useful model of the conscious mind. Then armed with these ideas, three years ago I became satisfied enough to begin discussing these matters online with intelligent people like yourself (I’m currently 47). At some point I expect a convergence where I’ve been able to learn enough about standard thought in these fields to effectively communicate my own ideas to others. We’ll see about that. So now then…

      Yes, when asked I do expect scientists and philosophers to agree with my epistemology, given its apparent sensibility. But the problem is that the field of epistemology does not yet formally accept my principles, and I notice them widely violated in general.

      I also agree with you that “Social, cultural, and economic relationships and processes are constantly changing, putting any determinations about them somewhat on a foundation of sand.” But I’m not interested in founding these sciences upon “values,” or those arbitrary matters of instinct and culture. I’m interested in founding them upon something deeper, or the “value” which should underlie “values.” I seek to found them upon the only thing that seems to matter to anything.

      I’m very happy that you’re thinking about how to classify things which accept inputs, process them, and then provide associated output. Nevertheless, and especially given the fluctuating nature of language, it seems to me that the term “mind” will eventually serve this purpose quite well. Here instead of simply saying “mind” when consciousness is being considered, we could say “conscious mind.” And don’t we consider the vast majority of what’s between our ears to not be conscious? Shouldn’t we call that a “non-conscious mind” rather than some kind of “proto mind”? Shouldn’t we consider it a vast supercomputer upon which a conscious form of computer is facilitated?

      Yes I’m entirely aware that I’m talking about “Nobel” stuff and beyond, but hey, why shouldn’t we all be making such efforts? I’ve been talking with lots of very educated people over the past few years. Some of them become quite disturbed by the positions which I hold, but none seem able to present much of a challenge.

      I’m thinking of venturing over to your next post with the neuroscientist video. I watched most of it coming back from Costco yesterday and I am happy that I’m able to agree with the man at least somewhat. Of course in arrogance I’d like to say, “Okay, here’s what our neuroscientists do not yet understand about consciousness…” That approach could be considered a bit too arrogant however. As always I have a busy week ahead, and I’ve also been itching to straighten out one of my consciousness diagrams for the past year. We’ll see…


      • Eric,
        Thanks for the explanation. I hope you’re okay with some disagreement.

        While I think we do have a tendency to act in our own interests (to the extent we understand them at least), I believe it’s more complicated than saying we’re all selfish. We’re born with both selfish and pro-social instincts.

        Have you ever read Jonathan Haidt’s work, most particularly his book ‘The Righteous Mind’? He makes some points about the view of human selfishness, calling it homo economicus (since it’s the default view of economics). One of the things he does early in the book is present scenarios, such as accepting money to (with complete secrecy, so no one would ever know you did it) beat a baby seal to death with a bat. Many of us might imagine a scenario where we might be desperate enough to do it, but I don’t know too many people who would find it a pleasant or even neutral task, even if we’d come out of it financially better off.

        Now, you could argue that our pro-social instincts are simply selfishness in disguise since a pro-social individual’s chances of survival and reproduction are higher. The problem is that while that’s why those instincts evolved, it’s not why we personally follow that instinct. We follow it because it’s there. We’re not being altruistic because we calculate it will help spread our genes, but because we simply have the impulse to do so.

        In addition, natural selection arguably selects on genes, not individuals. That means we have innate instincts to protect those with genes similar to ours. However, we don’t have a magic ability to sense anyone’s genes. Instead, we have an instinct to protect those we’re closest with, which in evolutionary history were our relatives. Today, though, that instinct can be hijacked by the tribe, nation, and overall society.

        All of which is to say, it’s a lot more complicated than us all being selfish. Many reptilian and fish species are conceivably wholly selfish, not being social animals. But any social animal has innate social instincts. Of course, there are outliers, such as psychopaths, but they are a minority of the population.

        It seems like what you’re aiming for may be evolutionary psychology, the branch of psychology that tries to ground psychological impulses in evolutionary instinct, to, in essence, discover human nature. It’s a difficult science, and many criticize it because filtering out innate instinct from cultural indoctrination is extremely difficult. Many doubt that it’s possible at all, at least other than with attributes we share with other primates. And some of the practitioners have been too loose with their theories, engaging in just-so stories that only seem to justify cultural biases.

        Given the difficulties, many social psychologists, such as Haidt, carefully avoid insisting that they’ve discovered definitive human nature, instead just focusing on what can be established empirically. It may be that the moral foundations he identifies are instinctual (I think there’s a good chance they are), but he might argue that, for his purposes, it doesn’t matter.

        Maybe I’m voicing objections you’ve heard before and already taken into account. If so, I’d be interested in hearing how you accommodate them.


    • Mike, of course I’m okay with disagreement, and especially the intelligent sort that I’ve come to expect from you. Such observations could demonstrate ways that I might improve my positions, or they could simply help us overcome inevitable communication issues.

      Even back when this “selfishness theory” hit me as a teen, I was quite able to account for human empathy. I noticed that this seems to exist in us through “utility,” or the only thing that I’d theorized of value for anything in the end. The more graphically that we understand the suffering of something, the worse that we tend to feel about it. Therefore it can be “selfish” to aid such a subject, since doing so may reduce personal suffering in this regard. But then imagine how bad it could feel to be the agent responsible for causing the suffering of an innocent baby seal, or even a human child? Is it not selfish of us to avoid such mental trauma?

      I suppose that true studies to determine how much money people would demand to anonymously club baby seals to death would be extremely difficult, given how unethical the whole thing would be. Many would probably accept far less money, however, if they could have someone else do the clubbing work and otherwise remain ignorant about it. That seems selfish too.

      Nevertheless in some cultures such killings shouldn’t be all that problematic, and I suspect that many western farmers and hunters would manage okay given their past experiences. Apparently an associated numbness can develop.

      Also consider human hypocrisy in this regard. We have little reason to believe that “cute” things like baby seals suffer any less than “disgusting” things like filthy rodents, and yet the plight of cute things motivates people to a far greater degree. This also seems selfish.

      When I said that we are selfish, I meant that the stuff produced in our heads called “utility” is ultimately all that matters to anything, even though perceptions of outside utility can affect personal utility. Furthermore I theorize a second component of what I call the “moral is.” (As far as I can tell, “moral oughts” do not exist.) Beyond empathy there seems to be a utility which comes through perceptions of how we are thought of, such as “respect.” This is commonly referred to as “theory of mind.” Each of these “moral is” components seem to have evolved to facilitate social cohesion by somewhat moderating the naturally selfish nature of conscious function.

      To me this seems very different from your above speculation about my beliefs. If you take my meaning, have I opened up some new questions?


      • Thanks Eric. I very much appreciate your openness to discussion.

        It seems to me that you’ve redefined selfishness to acting on our own desires, regardless of whether those desires themselves are selfish or altruistic. I don’t want the anguish of killing (or having killed) a cute innocent animal, so I selfishly refuse to do it.

        But my question is, when do we act on anything other than our own desires? (Even if we’re acting under duress, it’s still our desire to avoid the consequences of that duress that we’re fulfilling.) If we define selfishness as doing that, don’t we make the assertion of everyone being selfish somewhat tautological?

        Perhaps another way of asking this: is your theory falsifiable? If so, what observation would falsify it?


    • Interesting line of inquiry Mike — apparently I’ve been “Karl Poppered”! Well let’s see here…

      It appears that you’ve acknowledged my position that empathy is entirely utility based, and can therefore be considered “selfish” in this regard. (We didn’t get into “theory of mind” much, but I consider this no different.) So is it unfalsifiable that consciousness is driven by utility? Let’s consider some implications of my theory.

      I believe that my body harbors a vast non-conscious computer, and that a relatively unsophisticated conscious computer functions upon this medium, and exists as the source of all that I know of existence. (Hey that might do the trick! If my liberal use of the term “mind” is problematic, I could simply use the term “computer.”) The theory is that the non-conscious computer sends punishing and rewarding signals over to the conscious computer that it hosts, and that this is what motivates the conscious computer to do what it does. Thus if we want to check to see if consciousness functions on the basis of utility, we could theoretically take measurements of experienced utility, and then see if corresponding behavior is observed as well. (Here I seem to have demonstrated that my theory is falsifiable, though I’ll go further as well.)

      I know that utility does exist given that I experience it myself. I might even provide rough subjective scores to quantify it if asked. But how might utility quantification be achieved from objective sources? Here’s an idea:

      It would seem that this non-conscious computer tends to automatically operate facial muscles in ways that correspond with positive and negative utility to some extent (causing involuntary smiles, frowns, and so on). Therefore it should be quite possible to develop computer algorithms which provide objective utility quantification on the basis of facial video. Mike, I’m sure that you could put together a “Smith Scale” given your computer expertise, though hopefully dedicated specialists are already trying to develop such a tool. Here a person would have objective scores from which to quantify how personally valuable past existence has been (like a prison sentence, a vacation, or whatever). But how much can automatic facial expressions truly display about how good/bad existence has been? Even if done quite well, this should only scratch the surface of the horrors associated with strong pain and so on. It may be fortunate that we have such information at all, unlike what exists for fish, but can we do better?
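[Editor's note: the proposed “Smith Scale” could be sketched as a toy program. Everything below is hypothetical — the action-unit names, the weights, and the `valence_score` function are invented for illustration, and real affect-recognition systems are far more involved.]

```python
# Hypothetical "utility from facial expression" scorer.
# The action-unit names and the simple weighting are invented for
# illustration; a real system would estimate intensities from video.

def valence_score(action_units):
    """Map hypothetical facial action-unit intensities (0.0-1.0)
    to a rough valence score in [-1.0, 1.0]."""
    positive = (action_units.get("cheek_raiser", 0.0)
                + action_units.get("lip_corner_puller", 0.0))   # smile cues
    negative = (action_units.get("brow_lowerer", 0.0)
                + action_units.get("lip_corner_depressor", 0.0))  # frown cues
    # Average the net signal and clamp it into [-1, 1].
    return max(-1.0, min(1.0, (positive - negative) / 2.0))

smile = {"cheek_raiser": 0.8, "lip_corner_puller": 0.9}
frown = {"brow_lowerer": 0.7, "lip_corner_depressor": 0.6}
print(valence_score(smile))  # positive (~0.85)
print(valence_score(frown))  # negative (~-0.65)
```

Even as a toy, this shows the shape of the proposal: involuntary facial-muscle signals mapped to a signed score, which could then be accumulated over time as a crude measure of experienced utility.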

      Better still would be to find sources of electro/chemical evidence which correspond reasonably well with experienced utility. I have no idea if or when we’ll find good sources of this, though I think that there’s something far more important to straighten out first. I believe that our mental and behavioral sciences will need to dig down to reach a more solid position from which to theorize our nature. Instead of somewhat arbitrary “values,” I believe that we must base these sciences upon “value.” Furthermore I theorize this to be an input to the conscious mind, which is formally known as “utility.”


      • Eric,
        I don’t have any particular problem with your overall concept of utility. As I think I’ve mentioned before, it seems similar in many ways to Antonio Damasio’s concept of biological value. My issue in the previous comment was more about whether the common meaning of “selfish” was necessarily a good one for what pursuit of that utility or biological value might lead to.

        Another way of looking at this is Richard Dawkins’ metaphor of the “selfish” gene. (Have you read his book, ‘The Selfish Gene’?) Each gene contributes to factors that either improve its chances of being propagated or hurt it, with the ones that improve it winning out over time. These factors include the instincts that drive us toward utility, biological value, or what I sometimes call survivability.

        But it’s important to understand that each selfish gene isn’t necessarily selfish about each instance of its occurrence, but about the overall propagation of its pattern, of the information that it’s composed of. This is what can lead to genuinely altruistic behavior at the level of the individual. Sometimes a gene’s best chance of propagation lies with, say, protecting a young relative rather than the current organism.

        This is called kin selection, and it’s well accepted in biology. But as I noted above, it’s based on factors that can be hijacked by social and cultural mechanisms and channeled into other behaviors, such as tribalism, nationalism, and patriotism, but also humanitarianism and other impulses.

        The idea of grounding psychology into biological instincts and value has actually been around since the 1970s, albeit with a great deal of controversy. (See E. O. Wilson and sociobiology.) The modern version is evolutionary psychology, which has the issues I mentioned above.

        But maybe instead of me just laying these issues out I should be asking, how do your views differ from those of sociobiologists and evolutionary psychologists? What are they doing differently from how you’d like to see things proceed?


    • Mike it’s taken some time for me to focus my many thoughts regarding your last inquiry, but try this:

      I think you’re right that common definitions of “selfishness” are not good for what pursuit of utility might lead to. In effect helping others can be “selfish” in a happiness sense, but “altruistic” as the term is generally used.

      Regarding Antonio Damasio, I’ve gone through your June post about him again, as well as watched his TED talk. Though this was only a cursory look, I haven’t detected more than trace similarities between our positions. In what sense do you perceive my conception of utility to have similarities with his conception of biological value? (To be clear, I absolutely hate that the utility term has both “happiness” and “usefulness” connotations, and consider conflation between them particularly problematic for the interpretation of my own ideas.)

      Regarding “The Selfish Gene,” my father would tell me about the ideas of Richard Dawkins and E. O. Wilson when I was younger (probably to shut me up with the point of “Here’s what real theorists are doing”). I didn’t pursue them however, since the things he would tell me seemed quite obviously true.

      Your most reaching inquiry was to ask how my views differ from those of respected professionals in these fields, as well as how I’d like those fields to proceed instead. Great question!

      When I began I was mainly interested in developing a science from which to theorize what’s good and what’s bad. As a physicalist, monist, naturalist, and so on, I’ve never accepted the position that existence can be positive/negative for something, but that it’s impossible to reduce this welfare back to an effective origin. Observe that if we were to understand such a thing, we should then be able to use it to develop a practical science from which to theorize what’s best for any given subject regarding the countless personal and social issues that we constantly ask ourselves. Would it be best for me to (whatever)? Would it be best for my society to (whatever)? I’ve long observed the need for such a science, and would base it upon an input to the conscious mind that represents “value” for any given subject. (Once again, though “values” may be considered an arbitrary product of instinct and culture as you’ve noted, I’m talking about an aspect of consciousness which is theoretically just as measurable as “mass” or “time.” I’m very pleased with your apparent agreement above!)

      That was my plan as a college kid 20-plus years ago, but since that time I’ve used my “value” rather than “values” position to develop what I consider to be an extremely solid premise from which to potentially found our mental and behavioral sciences. So in a nutshell, that’s how I’d like these sciences to proceed. I believe that a more solid position from which to build must be developed, and I have a plan of my own for you and others to assess.


      • Eric,
        On your utility and Damasio’s biological value, both seem to be about why we have the motivations that we do. Both seem to be the ultimate drivers of what our consciousness focuses on and works toward. Both seem to ultimately result in conscious affects, what Damasio calls “felt emotions”.

        That said, I’ll admit it’s quite possible that the similarities between them are superficial. I don’t know enough about yours to really say. And yours may be far more developed than his, which is really just a stepping stone toward his theory of the levels of self that form in the brain.

        I definitely recommend ‘The Selfish Gene’, as well as Jonathan Haidt’s ‘The Righteous Mind’. Another good one might be Steven Pinker’s ‘The Blank Slate’. All seem relevant to your ideas. I’ve never read Wilson’s book myself, so can’t comment on it.

        “I’ve never accepted the position that existence can be positive/negative for something, but that it’s impossible to reduce this welfare back to an effective origin”

        I’m not quite sure if I understand the point you’re making here. Do you mean that the is / ought divide isn’t real? It seems like you’re aiming for a science of morality. Which loops us back to values.

        I know you said you’re focusing on value, not values, the idea being that all values are in service to one overarching value? This seems like what many moral philosophies aim for, attempting to reduce ethical considerations down to one metric (happiness, pleasure, preference, consistency, etc).

        The problem as I see it (and I may be repeating myself, if so, I apologize) is that while the instinctive values may have all evolved in service of the value of gene preservation and propagation (or whatever our theory of the ultimate value is), we still feel each of the related emotions tied to those intermediate values, even if we know, intellectually, that they’re not necessarily serving that overall value. And many people, when they contemplate that putative overall value, are repulsed by it, and resist accepting it as a normative metric for life. (I know I do.)

        All of which is to say, assuming I’m not utterly misunderstanding your meaning, morality seems to be irreducibly complicated. No overarching principle ever seems to capture all our intuitions about what is right or proper.

        Of course, many scientific theories violate our intuitions. But when it comes to morality, our intuitions are ultimately at the center of what we call moral. At least unless you subscribe to some form of moral Platonism, but if you do, I’d ask how we can determine any of its truths.

        I apologize if I’m totally off base and all of this ends up being orthogonal to your actual ideas. If so, please feel free to set me straight.


    • Mike,
      I’m extremely interested in your expertise, though it should be hard for you to know which books would do me the most good until you gain a reasonable grasp of the nature of my ideas. Once you do gain such an understanding however, I’ll read anything that you suggest!

      Yes I do believe that there’s been a misunderstanding somewhat, but merely in the way that virtually everyone misunderstands me. The crucial point which I fail to properly demonstrate is that my position is neither moral nor immoral, but rather amoral. It concerns “the is” rather than “the ought.” In fact, beyond a human construct, I don’t consider moral oughts to exist whatsoever. Thus I’m quite confident in the validity of David Hume’s “is ≠ ought” rule. Another way of putting this is that my ideas have absolutely no normative component. Thus I do agree with you about the speculative nature of morality, and that our intuitions constitute moral dynamics. My ideas concern something else however.

      The statement I made was, “I’ve never accepted the position that existence can be positive/negative for something, but that it’s impossible to reduce this welfare back to an effective origin.” Rather than refer to the rightness and wrongness of behavior, and thus the social construct of morality, this merely refers to the existence of sentience. Given the sentience of something, such as a lizard or an entire flock of seagulls, I believe that a subject’s welfare is based exclusively upon the units of happiness and unhappiness that it experiences per unit of time. This goodness and badness does not concern its survival, nor even its genetic proliferation — those things are merely functions of evolution (I presume). I’m referring to a goodness and badness which concerns the welfare of a subject itself exclusively.

      For example, if you were to build a computer that had a dial from which to make it feel horrible to wonderful, then apparently you could do this sentient subject tremendous harm or good in perpetuity, depending upon where you place the dial. It’s not the morality of your behavior that I’m addressing (and perhaps you don’t even know that this dial has such an effect?). Instead I’m referring to the goodness and badness of existing as this computer.

      Here I’m sometimes asked, “If morality and philosophy’s ethics isn’t what your ideas concern, then is it psychology?” Well yes, for starters I believe that psychology needs to develop an amoral system from which to theorize the welfare of any given subject. The largest reason that this hasn’t happened, I think, is because in practice such a system would have various incredibly immoral implications.

      For example, you’ve just mentioned “And many people, when they contemplate that putative overall value, are repulsed by it, and resist accepting it as a normative metric for life. (I know I do.)” Yes, this is my point exactly! But I’m not saying that value needs to be accepted as a normative metric for life. I’m saying that our mental and behavioral sciences shall continue to remain primitive if they remain agnostic to the notion that it can sometimes be good for a given subject itself to do horrible things to other subjects. We don’t want this to be true, but by not fully acknowledging such reality we not only fail to alter it, but apparently mandate that our mental and behavioral sciences shall remain primitive.

      I’ve taken the other path of fully accepting this circumstance, and so potentially developed an extremely fundamental platform from which to explore these sciences. What I ask for seems to be exactly what you’ve already provided here — an intelligent platform from which to test the validity of my associated ideas. For this I am most grateful!


      • Eric,
        On providing a platform (however limited of one my little blog provides), glad I can help.

        But I fear I remain confused about what space you hope to stake out for your theories. You say you’re not doing evolutionary psychology in the sense of attempting to understand what motivates us, but you’re also not working toward a normative theory such as a theory of morality.

        Is there a space between these that I’m missing?

        This thread has gotten a bit long, and I don’t want to pester you, so no worries if you’d like to quiesce this discussion for now.


    • Mike,
      What I’m doing might be considered Evo Psych. I’m not much for labels, but I’m pretty sure that our psychological characteristics did evolve, if that counts. Furthermore a psychologist friend did once call me that when I mentioned that perhaps human faces began to develop their tremendously varied expressions at about the time that formal languages emerged. (I haven’t checked the records on this, but a coincidence wouldn’t surprise me since it should have been helpful to have evidence about how others feel as expressed through modern human faces, given the increased behavioral complexities associated with formal languages.)

      In a technical sense I actually consider myself to be an adherent of “introspection psychology.” A person could never study physics through introspection, obviously, but psychology? Well theoretically yes, though extreme objectivity would be required to get anywhere.

      As I understand the terms my ideas are not normative in the sense that they do not concern the rightness and wrongness of behavior, but rather the goodness and badness of existence for any given subject. As discussed above, I’m talking about a theoretically measurable property of reality rather than a social construct.

      I’m not aware of any evolutionary psychologists, or any of philosophy’s utilitarians, or actually anyone other than myself, who advocates that the potentially measurable phenomenon of “happiness” be taken as the fundamental unit of value from which to consider the welfare of a subject, beyond moral constructs. As long as we insist upon denying such an understanding, I doubt that we’ll be able to study ourselves effectively, though I’m quite sure that science will prevail in the end. Furthermore, by taking this separate route I believe that I’ve been able to develop some pretty useful models. There should be plenty of time for that ahead however.

      Yes I will move on to your newer posts if you’re ready, but hopefully I’m making sense.


      • Thanks Eric. That helps immensely. It sounds like you’re aiming to develop a science of valence, of affects, of happiness as you mention it. Interesting. My next question would be how you plan to measure it, but I’m fine waiting for the next conversation.


    • That’s wonderful to hear Mike! I will move on, but since you did ask this question I must say that I shouldn’t be on the objective measurement side of things at all… unlike computer people like yourself. You may recall me mentioning that computer algorithms could be developed to assess the magnitude of a person’s happiness, given that the non-conscious mind seems to automatically display signs of this through recordable facial expressions? I believe that I’ve mentioned a “Smith scale,” as well as arrogantly mentioned that they might end up honoring me with units of “Eric’s” for some electro/chemical version. Of course Newton formulated his namesake as “mass times acceleration,” though my contribution would merely be, “Well there must be some kind of approximate thing to measure in the head, since I do know that I feel the stuff.” Quite lame, yes I know. Instead I mean to earn my keep through a functional model of the conscious mind, and various other models that I consider useful.


  7. Oscardewilde says:

    Very interesting, all those thoughts. I think about consciousness a lot too, but haven’t connected the dots. For example, I always return to this question: we treat things without consciousness very badly. What is playing poker without money? What does playing the game of life mean when nothing is at stake? Why can’t that be an evolutionary explanation for the existence of consciousness?


    • Thanks. Would a way of describing your theory be that consciousness gives us stakes in the game? If so, how would you describe the behavior of non-conscious life, such as trees, which still expend great amounts of energy attempting to live and procreate? Doesn’t the impetus to survive precede consciousness? Wouldn’t it seem more likely that consciousness enables us to better protect our stakes rather than provide them?


      • Oscardewilde says:

        My interest in consciousness and subconsciousness arose when I had a quick read of Freud. What would his view be on why the mind divided itself into a conscious part and a subconscious one? I don’t actually know. But for me, coming from a more economic background with an interest in psychology, consciousness often seems to provide completely wrong information on purpose. So for me the question is not only why consciousness is such a small part of total information processing (like steve ruis was saying above), but also why it is so often lying to itself.
        But my son asked me if he could kick a pigeon and I said it would be cruel. But that actually depends on whether the pigeon is conscious of pain. Turning my computer off could also be cruel. Although kicking an unconscious pigeon still seems crueler.
        I’ve also been watching Westworld. The only reason the robots can be treated badly is because they are seen as not having consciousness. So they have no moral rights. That’s why I was asking myself: isn’t it just that consciousness gives us a stake in the game? What is otherwise the moral logic of not doing harm?


        • I don’t think turning off a computer is cruel. Unlike the pigeon, it doesn’t have a survival instinct, or the capacity to suffer (in the sense of comparing its current state to a desired one, finding a discrepancy, and being unable to remedy it despite a high priority directive from some sub-component of the system). Of course, if we programmed those things into the computer (assuming we could figure out how), it might be a different story.
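
          The parenthetical account of suffering above is almost algorithmic, so a toy sketch may make it concrete. This is purely an illustration of the idea (a discrepancy between current and desired state that no available action can remedy); the dictionaries and the `fix_` naming convention are invented here, not any real API:

```python
def is_suffering(current, desired, available_actions):
    """Toy model of the capacity to suffer described above: a discrepancy
    between current and desired state that no available action can remedy."""
    # Which desired states does the system currently fail to satisfy?
    discrepancies = {k for k, v in desired.items() if current.get(k) != v}
    # Which of those could it fix with an available action?
    remediable = {k for k in discrepancies if "fix_" + k in available_actions}
    # Suffering (in this limited sense) = an unresolvable discrepancy.
    return bool(discrepancies - remediable)

# A plain computer has no desired states of its own, so switching it off
# produces no discrepancy, and no suffering in this limited sense:
print(is_suffering({"powered": False}, {}, set()))                 # False
# A pigeon-like system that wants to flee but cannot:
print(is_suffering({"fleeing": False}, {"fleeing": True}, set()))  # True
```

On this sketch, the moral difference between the computer and the pigeon isn't mysterious: one system has goal states it can fail to reach, and the other simply has none.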

          I’ve been watching Westworld too. Good show, although I think its conception of consciousness is muddled. (Admittedly, it’s muddled in a manner designed to promote drama, which is what makes the show compelling.) I thought every host from the beginning should’ve been considered conscious and a moral agent, regardless of where they were in any self discovery endeavor. That said, the writing and character development on that show are superb.


    • Hi Oscar,
      I don’t know if you’re still paying attention to this one, but I wanted to mention how much I appreciate your commentary above. Yes, we treat things that we don’t perceive as conscious “badly,” whether for whim or for purpose. But then if our perceptions happen to be correct that these things aren’t conscious, then existence should be perfectly inconsequential to them. No worries about inorganic stuff I think (even though we can have empathy for “cute robots” and so on). But then what about organic stuff like the plants that we depend upon? Well, if they aren’t conscious then harming them shouldn’t matter (even if some of us happen to be “tree huggers”). I may be wrong, but I think you’re entirely correct to observe that something needs to be at stake regarding what we’re talking about. So then how might we describe what’s at stake? What is ultimately valuable?

      I consider this “utility,” implying perfectly inconsequential existence for anything that has none. This position hasn’t become formally accepted however, and I think because it can have incredibly repugnant implications. Society has reason to deny the merits of selfish behavior, and apparently pressure in this regard has been so strong that our mental and behavioral sciences haven’t yet been able to equate “utility” with “value.”

      I was happy to hear that you have an economics background, since you should thus be aware that this science is founded upon the concept of utility. But then how might economists have managed this, given how repugnant utilitarianism happens to be? Well apparently a disclaimer can be used when needed. As I recall it reads about like this: “We aren’t saying that happiness is good for anything, since that would be a value judgement and so remain beyond the scope of any science. We’re merely saying that people tend to make their choices in order to promote their utility.”

      I not only seek the removal of this disclaimer, but want it generally understood in associated sciences that the final source of “value” throughout all of existence, is a product of the non-conscious mind (experienced through the conscious mind) that’s formally known as “utility.”


      • Oscardewilde says:

        You are a deep thinker and unfortunately I can’t comment in any sensible way on your remarks. I can’t beat the philosophical zombie, nor perhaps understand it in all its aspects. So in the end I made a wilde jump and thought that affect consciousness was there to provide some meaning to life. But what is that meaning? If I can find the time, I really hope I will be able to post my economic theory of consciousness tomorrow, and hope you can make a sensible comment on that. Greetings


  8. Hariod Brawn says:

    As always, Mike, you write with concision and clarity in this area, which I much appreciate; thankyou.

    “I can often drive to work without being conscious of what I’m doing.” – This is an example of how consciousness gets in the way, so to speak. Here in England, most cars have manual gear changing, demanding the use of a clutch. As experienced drivers, then as soon as we attempt to operate the car with intentionality – being conscious of the bite of the clutch’s flywheel, conscious of engine speed regulation by throttle control, feeling the road and its camber through the wheel, and all that – we become as we were when first learning to drive. In fact, it’s impossible to be conscious of the necessary timing and degree of the motor actions (ha!) involved. And yet we are indeed conscious whilst all that non-conscious stuff is going on. We agree on all this; it’s obvious stuff.

    This is why I like to think in terms of a substrate or aspect of consciousness which we may as well call ‘awareness’, and as I note you doing here with Steve above, and as we discussed at my place only a few days ago. If we just call this the ‘unconscious’ or ‘sub-conscious’ it would seem either to dismiss it or view it as a functional process, and I’m not convinced on that, as you know. That’s why I’m with Philosopher Eric when he says “we should avoid speculation about what consciousness ‘is’ . . .”. So, when you write of this simulation creation being “at the heart of what consciousness is”, I’m not sure if that’s what you really mean, and in fact you’re saying it’s a central function of what it does. They are, of course, two very different things – ‘hard’/’easy’, and all that. Are you saying the conscious endogram may be alone and only ever a simulation?

    I think if we want to know what consciousness is, then it has to know itself as itself, so to speak, rather than as an object rendered within itself – i.e. a mental coalescing around a knowledge-object. So, there we come back to the primacy of first-person experience, which takes us absolutely nowhere in being able to say what consciousness ‘is’, because consciousness ‘is’ only self-evident to itself as a unified whole, or in its self-apprehending, never as an object of thought manifesting dichotomously within it. That said, I wonder if your chief interest is not in knowing what consciousness ‘is’, but rather what’s causing it – is that fair comment, or do you see the distinction as being pedantic or facile? I’m interested in this because in one sense (excuse pun) to know what consciousness is, isn’t tremendously difficult, one just rests in awareness without coalescing around (concentrating upon) some mental object in thought. ‘What it is like’ to drive my manual car has very little to do with intentional states and what causes me to depress the clutch, feel the adverse camber of the road, and so on. ‘What it is like’ is the experience knowing itself as itself, yes? And what becomes prominent in that is awareness itself, not so much the endogram of metacognition with its distinct features.

    Please freely argue against any of the above. 🙂


    • Thanks Hariod!

      You guys are all still driving stick shifts? I used to drive a sporty manual car when I was young, but gave it up many years ago for automatic transmission. But when I did drive a manual, after the first few months, shifting was largely unconscious, except when I got in a new vehicle with a tricky clutch.

      I’m actually very interested in what consciousness is, full stop, and it’s very much what I’m pursuing in this post. Now, we can get into what the meaning of the word ‘is’ is (at the risk of channeling Bill Clinton), whether I’m talking in some pragmatic instrumentalist sense or a more realistic one. For my purposes here, I can’t see the need to make the distinction. Although if it becomes relevant, I’m pragmatic by nature.

      My inclination is to push back against the notion that consciousness is not something we can investigate in and of itself. In my mind, that seems implicitly dualistic, ghost in the machine thinking. If we only allow ourselves to talk in terms of what causes consciousness, then we’re surrendering to the notion that it is some kind of ectoplasm, invisible and undetectable, or that it maybe only exists in some spirit realm. I’d be prepared to accept those propositions if the data pointed in that direction, but I can’t see that any of it does.

      The only thing we have evidence for is the brain doing its electrochemical signalling. Just like physicists eventually had to grasp that temperature is nothing more than the average kinetic energy of interacting molecules, I think we have to be prepared to accept that consciousness is the patterns in that electrochemical signalling.
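
      To make the temperature analogy concrete: for an ideal monatomic gas, temperature just is the mean kinetic energy per molecule, rescaled: T = 2⟨KE⟩/(3k_B). A quick sketch (the helium speeds below are merely illustrative numbers):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def temperature_from_speeds(mass_kg, speeds_m_s):
    """Reduce 'temperature' to molecular motion: for an ideal monatomic
    gas, T = 2 * <KE> / (3 * k_B), where <KE> is the mean kinetic energy
    per molecule."""
    mean_ke = sum(0.5 * mass_kg * v * v for v in speeds_m_s) / len(speeds_m_s)
    return 2.0 * mean_ke / (3.0 * K_B)

# Helium atoms (~6.64e-27 kg) at typical thermal speeds come out near room
# temperature; nothing over and above the motion itself is needed.
print(temperature_from_speeds(6.64e-27, [1300.0, 1350.0, 1400.0]))
```

There is no residue left over once the motion is accounted for, which is the sense in which I'd say consciousness is the signalling patterns.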

      My goal is to understand consciousness in the same way I understand Microsoft Windows. Despite its ephemeral nature, I know what software like Windows is, not just what causes it. I want the same knowledge of consciousness.

      Of course, part of the problem here is that “consciousness” is difficult to define. You’re making a distinction between “consciousness” and “awareness”, but your use of “awareness” seems more equivalent to “being” or “existence” to me, and any talk of knowing about that existence unavoidably gets us back into what “knowing” is. When you say “a mental coalescing around a knowledge-object”, I want to know what that mental coalescing is and what the knowledge-object is.

      Okay, I’ll stop rambling now and give you a chance to respond (if you’re interested).


      • Hariod Brawn says:

        Yes, we’re stuck in the 1950s here with our ‘stick (stuck?) shifts’, as you call them, and we love it, masochists that we are. I actually suspect it’s something to do with our winding roads, and how much more pleasurable it is to drive along them having that extra dimension of control and feedback that manual shifting gives. I have an auto Avensis with a quasi-manual mode and I use that all the time. 90% of my driving is on winding roads, not straight-as-a-die carriageways. Yes, the manual shifting is done, as it were, on auto-pilot, but the tactile and auditory feedback it results in is very much sensed in conscious piloting.

        As to what ‘is’ means, in the sense I used it, then how do we know what the scent of a rose ‘is’? By smelling the rose. We can say the smell of a rose ‘is’ all about olfactory receptor neurons and the chemical constituents of the rose, and in one sense and from one perspective that is what the scent of the rose ‘is’. But we remain totally removed from the ‘is-ness’ of knowing what the scent of a rose is in abstracting it that way. It ‘is’, as they say, what it ‘is’ – and as it is. In itself, it isn’t what or how our ape brains with their monitoring systems account for it, useful and fascinating though that is, and at the same time, it isn’t (in itself as direct experience) Naïve Realism, because it’s not (mis)interpreting itself. Even if we were to think consciousness were some ectoplasmic and cloud-like ‘thing’ that we wandered around in, then in taking that conceptual misconstrual as consciousness itself (which it is), it’s not at all naïvely misconstruing itself. It ‘is’ just what and how it appears as, be that a hallucination, a sophisticated theory of mind, or a unicorn. Consciousness surely is unique in that we’re both in, and bound to live as, the (no)thing-in-itself, not ever able to escape it and model it as a dichotomously displayed object that somehow stood outside of consciousness. I don’t think any of this is where your main interest lies, so it’s probably a bit irrelevant – just thought I’d seek to clarify in attempting to answer your question.

        You’re open to pragmatically exploring whatever needs be explored, you say, so there we go back to my thing I just wrote, about first-person experience needing to be accounted for scientifically – ‘neurophenomenology’, as some have called it. And on that, then I’m certainly not of the mind that says consciousness isn’t ever susceptible to investigation, as you suggest I may be in your third paragraph. We can of course investigate it, and neurophenomenology may yet prove to be one fruitful route. We may one day discover if it’s all just about brains and nervous systems, or if something wider may also be in play, and which we can only theorise on currently – e.g. Penrose/Hameroff OrchOR (I know you don’t approve), or Enactivism and Externalism (similarly), or maybe there’s some fundamental property of the universe we don’t yet know of, as Chalmers has tentatively mooted. But if we ever get all that mapped out, and we can say what consciousness ‘is’ in your wider sense of the word, then I still think that smelling the rose is how we embrace it in its fullest knowing, albeit then with a comprehensive back story about it.

        When you say that “we have to be prepared to accept that consciousness is the patterns in that electrochemical signalling”, then for me that’s as yet a step too far, Mike. I definitely don’t see consciousness as some ‘thing’ distinct from matter, and feel there’s a false assumption that we need to reify either of mind or matter, on the one hand, or accept a sort of Substance Dualism, on the other. And I think that’s because that’s how we think – meaning that’s the way we ubiquitously think given our predisposition to subject-object dichotomies and handed down notions that mind and matter mean different things. When you look at your screen right now, there’s no electrochemical signalling about the state of affairs, is there? We know that’s going on behind your eyes, but there’s also something of the screen and room about the state of affairs, in that they’re appearing as something we call consciousness. Switch either side of the bargain off and there’s no anything. So I do see it as one thing, and to me it’s neither a reified materiality nor reified mind, nor is it both as separate things.

        Finally, [hears him sigh with relief 😉 ] you say you wonder what ‘knowing’ is in my terms, and also: ‘When you say “a mental coalescing around a knowledge-object”, I want to know what that mental coalescing is and what the knowledge-object is.’ For me, then knowing – as a constituent of consciousness – is synonymous with defining, which means ‘separating out’ and particularising. We might think of it as a slightly ‘dumb’ lower-level representation, which temporarily acts as the meta-level representation in that it occupies the whole conscious endogram. But it’s a bit ‘dumb’ in that it’s not contextualised with the environment. It’s not dumb in being unsophisticated, and may be a highly-wrought and complex concept.

        The ‘mental coalescing’ is an ‘attending to’ at the exclusion of what all else may have been, so it’s volitional and energetic, not passive. The ‘knowledge-object’ is what is attended to and known in its apprehending. For me, knowing is the distinguishing quality of awareness. As I see it, attention must be directed by awareness (again, as I conceive of it), prior to the conscious endogram. As I suggested to you the other day at my place, attention without awareness seems a meaningless construct – I think you tended to agree. What does it mean to attend to something without some awareness occurring at some level in the body/mind?


        • On the meaning of ‘is’, I take your point. There is the subjective experience of smelling a rose, which is what it is and isn’t reducible, at least subjectively. And I agree that the neural mechanisms you noted are also what smelling a rose is, just a different aspect of it.

          On another blog, I used the comparison of a software bit, a binary 1 or 0, true or false, on or off, with a transistor in one of two voltage states (or one of two ranges of states). The software bit is the transistor, although we rarely think of it as such. In the same sense, the smell of a rose is the experience, and it is the molecular interactions with olfactory neurons and the resulting neural firing patterns.
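
          That bit/transistor identity can itself be sketched in a few lines. The 0.8 V / 2.0 V thresholds follow common TTL logic-level conventions and are here purely for illustration:

```python
def bit_from_voltage(volts, low_max=0.8, high_min=2.0):
    """A software 'bit' is the transistor's voltage state read at a higher
    level of description: one of two voltage ranges, nothing more.
    (Thresholds follow common TTL conventions, used only as an example.)"""
    if volts <= low_max:
        return 0
    if volts >= high_min:
        return 1
    raise ValueError("voltage in the undefined region: no well-defined bit")

# Same physical fact, two levels of description:
print(bit_from_voltage(0.2))  # 0
print(bit_from_voltage(3.3))  # 1
```

The point of the analogy is that asking where the bit is "over and above" the voltage is a confusion of levels, and I suspect the same holds for the experience and the firing patterns.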

          You’re right that I’m not a fan of Penrose/Hameroff’s OrchOR. I’d feel differently if it were actually at least an attempt at an explanation, but it seems more like a proposition motivated to preserve mystery. There’s no scientific data driving it, only a deeply felt conviction on the part of Penrose that consciousness just can’t be only the firing of neurons across synapses.

          I’m actually not necessarily hostile to Enactivism or Externalism. On Externalism, as I noted on your post, I think the mind is a nexus of information streams, with any boundary between it and the environment always being a bit arbitrary. On Enactivism, the simulation hypothesis could be considered a form of it, since the simulations would be the mind actively working with incoming information, rather than just passively receiving it.

          On the desire to neither reify mind nor matter, I guess I’m not seeing the middle ground here, at least not from an ontological perspective. I can understand epistemic or aspect dualism, which is what we do in computers with the software / hardware divide, even while knowing that there’s only one physical monistic underlying reality. But if there is an ontological nuance here, I’d like to understand it. Right now, it seems like reality is either substance dualism or monism.

          You’ve used the word “endogram” a number of times. I just tried to look it up without much success. Would you mind telling me what it means? In my google of it, I saw that it was used in ‘The Crucible of Consciousness’. Were you the one who mentioned that book to me?

          I am tending to agree that awareness and attention go hand in hand. Although I do make allowance that there are likely “levels” of it in the brain, such as in the brain stem vs the cerebrum, with possibly some modeling in the cerebrum of what’s happening in sub-cortical processes.

          The question is (and you’re probably getting sick of me doing this), what is awareness? My speculation in this post is that awareness is the simulation(s). We marshal resources to do those simulations, focusing as necessary on the relevant input streams. (In other words, we focus attention on those streams.)

          Thanks for having this discussion with me. These are the types of discussions I love to have.


          • Hariod Brawn says:

            Thanks Mike, and apologies for the delays in responding – time differences, of course, and the fact that my mornings here are spent away from any blogging activities.

            In your second paragraph, then it would appear at first blush that we’re meeting on the same ground, insofar as the mind/body dichotomy isn’t always a useful place to set one’s thoughts. I said previously, and as regards consciousness, that “to me it’s neither a reified materiality nor reified mind, nor is it both as separate things.” You seem to be saying something very close to that in your saying “the smell of a rose is the experience, and it is the molecular interactions with olfactory neurons and the resulting neural firing patterns.” I may be picking at a problem of language here, but how does that fit with your previous statement: “I think we have to be prepared to accept that consciousness is the patterns in that electrochemical signaling.”? Are you not in fact now saying what I am too, in that consciousness need not be reified to equate solely with either side of a mind-body dichotomy? So-called ‘mind’ and so-called ‘body’ are two aspects of the one existent – yes? In the opening paragraph of your previous comment you said: “I agree that the neural mechanisms you noted are also what smelling a rose is, just a different aspect of it.” You say you’re not “seeing the middle ground”, but (some of) your words suggest you are, that being essentially monistic and in accepting mind and matter are two aspects of the one existent; neither aspect reified; neither aspect discounted. [As ever in such discussions, language is a sliding floor, and your suggestion of a fallible ‘subjectivity’ seems at once useful communicatively yet a little misleading if your true position is that of a material monist. How can matter be fallible? Facile, but you take my point?]

            On your feelings about OrchOR, and it being driven only by Penrose’s conviction – or might one even say his ‘intuition’? – then have you ever read Jacques Hadamard’s book The Mathematician’s Mind? It’s a fascinating read, all about the psychology of invention in the scientific and creative spheres, and how unconscious processes play a vital role in producing intuitions, which then go on, following unconscious incubation, to become theories verified mathematically, or empirically. I daresay the Penrose–Hawking singularity theorems were at one time merely intuitions or convictions of some hue?

            On the word ‘endogram’, then I like that because it seems less clumsy than ‘meta-level representation’, which is what I always used to call the same thing (a conscious state as apprehended). I also prefer it to ‘metacognition’, which might suggest something other than that which is apprehended. Yes, I picked it up from Zoltan Torey, who sent me his book along with a copy of a very approving letter from Dan Dennett, and I have mentioned Torey’s work to you previously. Torey described the endogram thusly: The brain’s situational statement of what we are aware of at any one time. A construct denoting the brain’s multi-modality self-representation. Cognate with awareness, the endogram is the product of integrated experience within the brain. The etymology is from the Greek ‘endo’ meaning ‘within’, and ‘grammatica’, meaning ‘philology’.

            Lastly, you ask what awareness is (by my lights). I did address that briefly in my last blog post, and also above in saying: “This is why I like to think in terms of a substrate or aspect of consciousness which we may as well call ‘awareness’.” But here we come back to the meaning of ‘is’, and whether you’re asking what causes awareness (assuming it is caused), or what it is in itself. I have no idea what causes (what I’m calling) awareness any more than anyone has any unimpeachable idea as to how and why consciousness appears as a result of matter undergoing physical processes. Importantly though, awareness, as an (figuratively) ‘illuminative’ substrate or trait of consciousness, can’t be described phenomenologically by its appearance. One may experience it in its pure form(lessness) as a meditative state, [i.e. TE – ‘Thoughtless Emptiness’] as I did on my blog post. You were inclined to say this state was an object of consciousness; in other words it was ‘being conscious of a representation of an imagined objectless awareness’. Whilst I was resistant to buying into that wholesale, as TE awareness is devoid of phenomenal features as apprehended and is not susceptible to memory, then I again say it is an aspect (a substrate or trait) of consciousness. Awareness knows itself as itself, so to speak, and not as an image of itself, such as a mood or mental state, which are dichotomous conscious effects: a knowing and an object known. I don’t think you’ll find it satisfactory, Mike, but to know what awareness is (your question) requires immersion in a first-person reductive phenomenology in which the brain’s production of representations is temporarily stilled or suspended. What’s left is what that study I linked to termed ‘TE Awareness’, and about which I provocatively said ‘nothing happens’. Nonetheless, it is a measurable state with neural correlates, of course, and as that study showed. 
            The thing to grasp is that you cannot possibly now, as a result of thinking about it there at your desk, create a mental representation (a consciously known object) that even remotely suggests ‘what it is like’ – the usual qualifier of consciousness. In fact, the very effort to do so occludes its presence.


    • Hariod, no worries at all on any delays. I fit my own blogging / commenting within the cracks of the workday, so I totally understand.

      Okay, on mind and matter, I think I see where you are now. As I noted in my reply, I can see aspect or epistemic dualism, and that appears to be what we agree on. I thought I was detecting some ontological variety, but I must have been mistaken. Language is definitely always an issue in these types of discussions. Thanks for clarifying.

      On language, one note that I made to Eric. I’ve become pretty careful with language when talking about philosophy of mind, avoiding phrases like “neural correlates” or “how consciousness arises”. I’ve used those phrases before, but in a metaphorical manner. What I’ve learned is that many people don’t take them metaphorically, so I now avoid them unless I’m discussing some variety of substance dualism.

      I understand that many mathematical theorems and many scientific theories begin as an intuition, and many of them ultimately end up being validated. But many don’t. When people talk about these things, they always talk about the success stories, and omit the legions of ideas that begin as intuitions that don’t pass validation and die a quiet death.

      For the base layers of the simulations that make up consciousness, we’re not privy to the details. We can’t be, since being privy is itself a model, and at some point, we can’t be aware of the stuff of awareness, of the mechanics of awareness. Intuitions, it seems to me, fall neatly into that category.

      For these reasons, I’m very skeptical of giving intuitions any special status. We have intuitions all the time. Most about common day to day matters turn out to be right. Most about exotic matters, don’t. That’s why Penrose’s intuition doesn’t impress me, even though he had intuitions about physics that turned out to be right. What we don’t know is how many other intuitions about physics he had that went nowhere. He himself probably doesn’t remember them, since we don’t tend to remember our misses very well.

      Thanks for the endogram definition. That seems roughly equivalent to Damasio’s core self or core consciousness.

      On TE, one question I forgot to ask you about on your post was: are you aware of time when in that state? (I think I recall someone else asking that question, but can’t find the thread now.) The reason I ask is that if you’re not processing any sensory inputs and you’re not aware of time, it’s hard not to conclude that it’s a state of non-consciousness. Obviously given the brain wave data in that paper, it’s not equivalent to deep sleep, but it does seem to be subjectively similar.

      Or maybe I should ask: what would you say are the subjective differences (if any) between TE and a deep sleep?


      • Hariod Brawn says:

        Thankyou Mike, for your further engagement and tolerance of my own formulations.

        So as to be perfectly clear, I’ve not once, not in my post nor in my comments here, suggested that (what I call) awareness is in some strange sense outside of, or beyond, (what you call) consciousness. I’ve always spoken of it as a substrate or trait of consciousness itself. By substrate I mean ‘underlying layer’, as per the word’s general definition. I think of awareness as ‘underlying’ only insofar as it phenomenologically presents (in TE awareness) as the blank slate upon which mentative objects are inscribed (all analogous, natch).

        So, (my) ‘awareness’ is there pervading consciousness, necessarily, but also it remains when mentation stills, and when we might argue as to whether it’s fitting to call it “consciousness” at that point, given its non-dichotomous apprehending. It matters little on the wording, as I’ve already said, but we perhaps require a slight redefinition of the word if by “consciousness” we mean “being with knowledge of mentative content”.

        It seems you’re calling this ‘substrate’ conceptualisation of mine an intuition, but perhaps that’s because you’re not conceptualising and interpreting my words in quite the same way as they’re offered. The so-called ‘Formless’ states of meditative absorption, as delineated in Buddhist psychology, are notoriously difficult to describe given that mentative functions (as conscious displays) are temporarily suspended. One such state is described most unhelpfully as ‘neither perception nor non-perception’, and this precedes the state known as ‘cessation’. Of both, then they are compared to sleep with the exception that the states are known to themselves (in your terms one is ‘conscious of them’). As I have said at my place, one might characterise that self-knowing of the state as a knowing presence, or a knowing beingness. It remains perfectly stilled as apprehended and in respect to mentation, but has a quality of what one may say is lucency, or pellucidity – clarity and a sort of luminance pervades it fixedly.

        In developing the skills to bring about this state, one becomes highly intimate with the ‘feel’ of mentation as a disturbance or stirring within mind. As the disturbances are all but quelled, but not yet being in the TE state, one feels akin to being in a railway carriage as it moves along at speed, and in sensing very faintly the vibration of the wheels upon the tracks as an extremely subtle disturbance. One knows something is going on and that contact is being made ‘down there’, yet it all passes away as the felt perceptual stream constantly dissolves just as it appears, so one can never quite perceive the feeling clearly, but it’s there nonetheless. Similarly, one knows that mentation is arising and passing away, dissolving incredibly swiftly, such that no object becomes clearly defined. Mentation is not yet entirely stilled, and yet when it becomes so, that absence is marked and pronounced. It’s a seemingly obvious sense of on/off. [I know that for you it’s always ‘on’, but I’m trying to paint a picture here.]

        It seems unimportant to me from a phenomenological perspective as to whether one considers this TE awareness a model or simulation, or endogram, or meta-level representation, although I know it is important to you in terms of it conforming to your algorithmic conception of mind. Surely here and in respect to conscious displays, the model has to fit the prior evidence, not the prior evidence jigged to fit the model? All that would require is that your model allows for TE awareness and a stilling of mentation as apprehended, and there is no dichotomous (overt or covert knowledge-object) apprehending occurring. A distinction is being made here between that which is apprehended and your model’s assertion that what is apprehended is indeed a dichotomous, overt or covert, knowledge-object.

        But let’s try and paint in more detail that picture in your (or any reader’s) mind of what it is that I mean by ‘awareness’. Leaving aside any models, preconceptions, and all that you know, and purely for the purpose of this exercise, think of awareness and mentation (conscious objects) as both residing on a sliding scale and being in inverse proportion relative to one another. That means, as consciousness becomes more defined, awareness becomes less defined, and the converse is so. To help do this, why not try a little practical experiment? I know it’ll sound awkwardly pedagogic, but if you’re willing, this is how to move along this sliding scale, back and forth:

        Take your visual gaze, and the focus of your attention, away from the screen, and just rest it very softly over towards the corner of the room. As you do that, and with no sense of grasping at mind-objects, meaning hard-focusing on thoughts, just silently ask a question such as ‘where is awareness?’, or perhaps ‘can I sense awareness now?’ Don’t think about what those questions mean or how nonsensical they are to your intellect and all you know, just passively absorb into that un-grasped-at visual sense and let the mind settle softly on the question of knowing awareness, and whether you can locate or sense it. I think you’ve done some meditation in the past, Mike, (haven’t you?), so this should be straightforward.

        What happens is that conscious objects (whether visual or otherwise) recede in definition, and the quality of a knowing presence and lucency (my ‘awareness’), comes to the fore. As you go back to the screen or to your mug of coffee, there’s a sense of the mind coalescing and collapsing around a well-defined object of consciousness, and the knowing presence recedes rapidly into the background. The two qualities – one quite obviously defined, the other far less so – are in an inverse relationship to one another.
        TE awareness takes this to the extreme, where not only do the conscious objects recede in definition, but they are abated entirely (as apprehended). There is no sensing of time (to answer your question), because psychological time is an inference made from the appearance of serial streams of phenomena, and in the TE state there are no phenomena apprehended – notwithstanding your assertion that the state itself is a phenomenon, albeit one in which ‘nothing happens’.

        When you say “it’s hard not to conclude that it’s a state of non-consciousness”, then ironically you’re agreeing with me, only the way I conceive of it is that we’re at the point of the conscious substrate: the stilled ‘surface’ or layer upon which overtly conscious objects appear and we can then rightly be deemed to be in a state of consciousness. In TE awareness, the state itself knows there is present, and that it is itself, a knowing, albeit not a knowing of overtly mentative creations, and the knowing presence of that knowing is featureless. But rather like when you did the little experiment, awareness (that knowing presence) asserts its primacy over any mentation. I think this also answers (or attempts to answer) your question as to what are “the subjective differences between TE and deep sleep”. In deep sleep, there is just an unknowing absence. In TE awareness, a knowing presence is all there is.

        • Hariod, I’m grateful for your thoughtful explanation.

          “It seems you’re calling this ‘substrate’ conceptualisation of mine an intuition”
          Actually, that wasn’t my intent with all the intuition talk. I was just responding at length to the point about Penrose’s intuition, probably at far longer length than was warranted. On the substrate concept, I don’t necessarily disagree with it. Indeed, I agree that consciousness has layers.

          I have meditated before, or perhaps more accurately, attempted meditation. I don’t think I ever got anywhere near the mental state you’re describing. I suspect I would need extensive practice to accomplish it. I tried the experiment you described, but I fear my meditative muscles are too undeveloped.

          That’s not to say that I don’t regularly observe aspects of my own consciousness. I often pay a lot of attention to whether or not I’m consciously doing something, or how things arise in my consciousness, or the interaction between the conscious and non-conscious aspects of my mind, among other things. But as I do this, I always try to keep in mind the limitations of introspection.

          “When you say “it’s hard not to conclude that it’s a state of non-consciousness”, then ironically you’re agreeing with me,”
          I never actually saw myself as necessarily disagreeing with you on this. I’ve been mostly just trying to understand how it might fit, as you said, “into my algorithmic theory of mind”. I never intended to imply that anything about it might be incompatible with that view. I think the only subjective experience that might be incompatible with it would be one whose explanation would appear to require some kind of supernatural mechanism, or perhaps unknown physics.

          “In deep sleep, there is just an unknowing absence. In TE awareness, a knowing presence is all there is.”
          Thanks for the distinction. I wonder if “knowing presence” includes a sense of self, but given your writings about the self, I suspect you might say it doesn’t. It seems like we’re talking about a state that has to be experienced to appreciate it. Maybe at some point I’ll develop those meditative muscles.

          • Hariod Brawn says:

            Great discussion, Mike, for which many thanks indeed; it’s a privilege to be able to dialogue with you.

            That book which cropped up in our discussion, and which I found an exhilarating read:

          • Thanks Hariod. As always, I enjoyed our discussion immensely. Appreciate the book link.

            BTW, Daniel Dennett, who wrote the Foreword to Crucible, is coming out with a new book in February on the evolution of consciousness. It’ll be interesting to see how his book compares to the one that inspired these posts, particularly how a philosopher of mind approaches this question differently from a neurologist and biologist.

          • Jeff says:


            You are very clearly describing the Buddhist contemplative experience of the nature of mind and its non-divisive, unitary nature. Personally, I think this is useful as a ‘self-help’ (excuse the generic and somewhat pejorative term) practice that addresses the habitual patterns and identifications that the ordinary mind seems preoccupied with. But it is somewhat removed from the reality of the moment, which is charged with the energy of creation and seems more relevant than subtle manipulations of mind to engender a different way of seeing/being, a way which still hasn’t really addressed the energetic presence of an ‘entity’. In fact, I do seem to remember Buddhist teachings saying that Buddha nature is neither with form nor without form; the negation of both statements would render any description of a static, conceptualizable state as not ‘it’.

            I also remember certain individual accounts of people claiming that a psycho-physical transformation takes place, rendering all efforts and references of a personal nature gone, never to return, and leaving a complete living in the moment that is nothing like what we conceptualized. Certainly, the Buddha was one of these cases, no?

            So, to return to your TE, thoughtless emptiness: I have the sneaking suspicion that this is still a reflected state, a manipulation within the structure of mind similar to a Zen satori experience, but one which doesn’t transform anything and actually keeps us in the game of seeking/questioning/becoming other than what the moment presents.

            Now, I can be totally wrong about this, and whatever clarity I think is there has already moved on. Is clarity what we seek? What we hope to have every moment? In a way, it is kind of an illusion, no? A magic trick that the mind plays with itself. Help me, Hariod!

            This could also be the continuation of our discussion at your website…

            PS: Mike, you have introduced some interesting ways of looking at this business of consciousness that were foreign to me. I am indeed pleased to read what you are writing here.

          • Thanks Jeff. And welcome!

          • Hariod Brawn says:

            Hi Jeff,

            Many thanks for trawling through this rather lengthy discussion, and your interesting reflections on it. I hope you receive this message as I didn’t get any notification of yours and am only here as I received one from Tina, below; hence, apologies are due for the tardy response.

            Yes, TE awareness is a meditative state; it is not a Satori experience and the two should not be conflated. It is, in Buddhist terms, a mundane state. Mike’s site here isn’t concerned with so-called ‘spiritual’ matters, so I’ll attempt to deal with the distinctions at my place, if that’s okay with you. My site isn’t a ‘spiritual’ site either, but it is a kind of ‘self-help’ site (to maintain the pejorative term), and the state of TE awareness is an extraordinarily powerful practice in that it contextualises the forms of the mind (conscious objects) and leads to a marked and rapid disidentification from them. As a consequence we become far less neurotically obsessive as regards our thinking; it no longer carries within it the sense of agency or the feeling that we inhabit it in some ill-defined way. This is merely repeating and agreeing with what you yourself say and know.

            So anyway, we’ll chat about that statement of yours regarding what may or may not be Buddha Nature at my place, if that’s okay? i.e. “Buddha nature was neither with form or without form and its negation of both of these statements which would render any description of a static state that could be conceptualized as not ‘it’.”

            I’ll depart with a brief and related quote from Theodor Stcherbatsky’s book Buddhist Logic: “And at last, ascending to the ultimate plane of every philosophy, we discover that the difference between Sensibility and Understanding is again dialectical. They are essentially the negation of each the other; they mutually sublate one another and become merged in a Final Monism.”

            Many thanks for your engagement, Jeff, and thank you, Mike, for allowing me a brief divergence from the theme.



    • Well put. I was gonna bring up something similar to your point here:

      “I think if we want to know what consciousness is, then it has to know itself as itself, so to speak, rather than as an object rendered within itself – i.e. a mental coalescing around a knowledge-object.”

      Now I don’t have to think. Awesome!

      • Hariod Brawn says:

        Do I sense my leg being pulled, madam? 😉

        • Not at all! But I guess I always sound that way. 😉

          I wouldn’t have been able to formulate things in quite this way, not without a lot of serious thinking in my very immediate past, which just isn’t gonna happen, not this time of year.

          Enjoyed reading the dialogue here. Interesting description of awareness too, and I think I know what you mean, maybe. A very fleeting sort of thing, as you say…or non-thing, but meh. Language. In fact, I used to wonder how long I could stay in the same state, and in doing so I’d ruin it, of course.

          • Hariod Brawn says:

            It’s a meditative state, Tina, this condition of being highly aware and yet not dichotomously ‘aware of’ any serial stream of objects running along within it – i.e. nothing appears to happen. This study looks at what’s going on inside the head during states of TE awareness. It’s been documented in Buddhist psychology for many hundreds of years, and in fact some commentators say that the process of falling asleep necessarily entails passing through the documented eight states of mental absorption, though the transition through the final ‘higher’ states is very rapid and of course terminates in (what we think of as) unconsciousness.

          • I couldn’t follow the article…above my level, I’m afraid, but it’s an interesting project. You would think there’d be some correlation to sleep patterns, given the descriptions you hear of meditative states.

            I don’t think I’ve experienced any such state of being highly aware and yet not aware of any stream of objects, at least not in the sense that I wouldn’t peripherally see what’s there in front of me on some level. I guess I could have my eyes closed, but I’m not sure there would be nothing happening. I think I’d be aware of time in a way that I’m not while sleeping.

            I have had the experience of being highly aware of nothing in particular, while seeing what’s in front of me on some other level but not really attending to it in an ordinary way. In other words, of being aware of being aware in itself, if that makes any sense. It’s always followed by a creepy and unsettling feeling, but then this too gets washed away very quickly by my usual boring thoughts.

          • Hariod Brawn says:

            “I have had the experience of being highly aware of nothing in particular, while seeing what’s in front of me on some other level but not really attending to it in an ordinary way.” – Right, so to relate that to the little exercise I put out in a comment above to Mike, you were towards the ‘awareness’ end of the sliding scale of defined objects. As the ‘boring thoughts’ came back in, you slid back towards the highly defined conscious objects end. There’s an inverse relationship between just the objectless knowingness of (what I call) awareness and the defined objects of consciousness. But obviously it’s just one thing [awareness-consciousness] we’re talking about with this sliding scale, not two objectively different categories.

            That’s interesting that you found it creepy and unsettling, Tina; it suggests you were close to that point of no-thought, and the mind so abhors a vacuum. When it senses it’s approaching that point then fear is the typical response of the mind – thought (any mentation) fears its own absence. o_O

          • Hm. How strange. I’ve had that experience many times throughout my life, mostly in high school. It’s always creepy. I had no idea it had anything to do with meditation since I wasn’t trying to meditate. All I know is that these moments were usually followed by questions about existence, nothing very articulate, usually, “Is this what it means to be alive? This? Wow I can’t believe this is happening.” Then it stops. It’s hard to explain. But yeah, never have I experienced no-thought. I think that would make me crazier than I already am. 😉

          • Hariod Brawn says:

            ‘Creepy’ meaning there was a degree of fear, yes? It possibly has a different meaning here in England, more like ‘weird’ and less suggestive of fear. We may be talking about different things, but in meditation, and when I was first trying to allow the mind to do it (awkward phraseology), this point at which any overt mentation was about to be switched off was fearful. It felt as if I was standing on the edge of a vast and bottomless chasm and was about to tip into it – a sort of ‘falling’ feeling. So, the nervous system was detecting and responding to this with some brief shot of adrenalin, or something, and that caused a sense of fear which in turn caused the mind to think about the situation – like one of those dimmable light switches dialling up the thought-light rather than making that final one degree turn to ‘off’. Mixed metaphors: chasms and light switches. Now who’s the crazy one? o_O

          • I guess there was a degree of fear then. I didn’t know there was an English difference, but what you’re describing makes sense. It does feel very much like you’re standing on the edge of the Grand Canyon, so you’re feeling both in awe and a bit of fear, a bit of danger. The two go hand in hand, of course.

            The actual experience of awareness—if we’re talking about the same thing—is only a few seconds or so in the pure form, if that, and then comes the thought about it. That thought about it is the one that realizes This is Awesome. (Capitalized to differentiate from any sentence beginning with “Dude…”)


          • Jeff says:


            If you’ve been following what Mike has been saying about consciousness and its simulation processes, consciousness is not experiencing anything but the reflections of sense stimulations, which are then put into language that we toy around with. What you are experiencing of awareness is information about awareness, whatever of it you have in your stored memory, and not awareness itself. It’s the same with any direct experience: we don’t experience things directly but through various filters that have evolved in the brain. This puts a whole new slant on how we create our life and identity. It is built on the past, the accumulation of what man has thought for millennia. In a real sense, that fear is not yours, but the culture’s.

            I hope I’m not being too abstract. 🙂

          • Not being too abstract for me!

            I don’t think we can reduce all experience to what the brain does, but should reserve our reductions for when they’re called for (like, for instance, when the experience fails us, and brain damage or drugs, etc., account for that failure). The causal connections between the multitude of experiences and the brain’s correlate activity are too murky to settle on that over-arching reduction in all cases.

            The point about culture being an influence on identity, and on the possibility of achieving what Hariod’s calling awareness, might be at play here; I won’t deny it for myself. Especially considering my background growing up in America, where non-thinking—or I should say non-doing—is most definitely pooh-poohed.

            I don’t know what to call what I experienced, and I don’t care whether it actually is what is being described by people who meditate seriously or not. I only brought it up because it sounded similar to what Hariod was talking about, but with some obvious differences.



  9. Interesting post. The examples helped a lot. I didn’t have time to read everyone’s comments here, so I hope I’m not going over the same stuff. I did get a few glimpses here and there and I read your dialogue with Hariod. I sense there might be two different meanings of ‘consciousness’ at play here, but maybe I’m misunderstanding something. On the one hand, there’s consciousness in the sense opposed to ‘subconscious’ or ‘unconscious’, in which case consciousness is something like being aware of being directed in a certain way. Then there’s ‘consciousness’ in the broader sense of being opposed to what is not alive, a dead thing, a piece of bubble gum, for instance. I don’t know that these two meanings are necessarily conflicting, but I just wanted to put that out there. So back to your point:

    “The speculative aspect is that the simulations are consciousness, that what is outside of them is in what we call the sub-conscious or unconscious, and what is in them are the contents of consciousness.”

    Here I get the sense that the two different ideas of consciousness are at play at the same time? I guess I assumed that if we take consciousness in the last sense I described above, this meaning would include the sub-conscious or simply not refer to it since that’s a psychological term. Am I making sense?

    • Thanks Tina. There were definitely two different conceptions of awareness in the conversation with Hariod, and possibly of consciousness.

      In the post itself, well, there’s a lot of complexity I was skimming over.

      There is the definite unconscious, including things like heart rate regulation, basically the stuff handled by the autonomic nervous system.

      Then there is the passive modeling that seems to always be going on while we’re awake. I think this seems closer to what we typically call the subconscious. We can become aware of aspects of it if we focus awareness on it, that is, focus our attention on it. But it seems to go on whether or not we are attending to it, and a lot of the information in it is selectively used by consciousness.

      Then there’s what we’re actually conscious of. And that, I think, is the information in the simulation(s) we’re currently running. If we’re running a simulation on the sensory information coming in, in other words, we’re paying attention to what we’re seeing, hearing, etc., then we’re conscious of it. But we may not be paying attention to the incoming sensory stream. We might be simulating something else, building models for later reference, in other words, imagining, daydreaming, thinking things through, etc.

      But there’s also the aspect of the self modeling that’s going on, the meta-cognition I’ve discussed before that I’ve long thought might be consciousness. I now think this is more likely part of what gets incorporated into both the ongoing subconscious passive modeling, and the conscious active simulations, when those simulations require introspection.

      Hope this all makes sense.

      • I’m not sure I get what meta-cognition is and how that’s different from regular cognition. Is it the self modeling itself in some simulation?

      • Jeff says:

        Very well put. When I first encountered your responses to Hariod, I tried hard to understand where you were coming from regarding your view and investigation of consciousness. It struck me that many people who are interested in consciousness come at it from a religious or philosophical view. This view is primarily motivated by an interest in the self or subjective experience that one seems to have of everything. It is always personal and always relates to their pre-existing model of consciousness that they’ve accepted in their study and practice of certain beliefs. They try to fit the form, so to speak. This is also my background, so I had to really listen and contemplate what you were putting forth here, which is also not your own but put together from your reading, dialogues, and your own modeling of what consciousness is and does.

        It seems to me that consciousness is not a personal activity, but a strictly efficient, mechanical activity/process that comes into being with the human being. It is not something that stands apart like a god/spirit or an absolute, but a built-in process of the human being. I really liked your breakdown of unconscious/subconscious and the more ‘personal-like’ simulation of sensory information creating notions of time, space, and a subjective entity with a narrative that is experiencing and creating all of this. The latter notions seem to be all put together from the human collective history that is taught through cultural means, religions, and accepted social behavior.

        Most of the mental meandering that we do is on this imaginative level, and that is where it stays with most people forever. With a more attentive view to what we do moment to moment, we can begin to see this mechanical process at work and even begin to suspect that this subjective aspect of consciousness is simply a construction, a simulation as you say. We can adjust this view to create all kinds of stories about it. What we cannot adjust are the more subconscious processes such as breathing, heartbeat, nerve stimulation and the senses doing their job. They are untouchable, so to speak, which leads me to suspect that the simulation process has obscured/overridden our attention to the purely physical aspects of consciousness. We simply think too much and are lost in our imaginations.

        If I bring my attention away from thinking into seeing or hearing, for example, I now have a universe that is informing me through the function of the eyes, ears, etc. What I discover when my attention is directed outward is a tremendous energy being translated by the sensory stimulations, a total mystery until the simulation process kicks in, which is almost immediate. In other words, I don’t know what the hell is going on until thought interprets these sensations.

        What I have no idea about is what this energy is all about, or where it is. Some have called it the life force, or just life, or the flow of life. It is nameless to me. It is there and not there. I have no instrument to measure or interpret any of this. Mind just seems like the accumulation of information and can measure objects. It cannot wrap itself around this energy, which leads me to speculate whether consciousness can ever know itself, or whether another dimension has to make itself known before we can understand this one.

        I hope I didn’t lose you, Mike. This is where many religions and schools of esoteric thought jump into all sorts of models and ways to get there. I can’t subscribe to those any longer.

        I welcome all responses.

        • Hariod Brawn says:

          Interesting, Jeff. This ‘energy’ you allude to, could it be ‘awareness’ by another name, or perhaps ‘knowingness’? In what sense is it energetic, may I ask?

          • Jeff says:


            I don’t think I would equate this with awareness. It is something much more physical in its nature, definitely not a ‘knowingness’. Sometimes I feel it as a kind of swelling, a physical power/presence. I really don’t want to speculate on it, as it is not clear what is being felt. Kundalini comes to mind, but I don’t experience any rising or falling of energy as described in textbooks, nor specific chakras. I could even be projecting all of this.

        • Thanks Jeff. I appreciate and agree with pretty much everything you wrote.

          The only part I’m not sure about is the energy or life force thoughts at the end. My initial reaction is that the only energy is the electrochemical reactions which are always going on, but I suspect you might have meant a different kind of energy? Maybe something along the lines Hariod is asking about?

          I do think we have to be careful about positing something like biological vitalism, the idea that there is a special energy or vitality to biology that non-biological systems don’t possess. Biology is complicated, profoundly complicated, and it can give the appearance that there’s something there more than the chemistry and physics, but everything ultimately seems accountable in terms of chemistry and physics, at least so far.

          So, it’s probably obvious that I’m not religious. While I think religions and spiritual philosophies sometimes have interesting insights about the mind, particularly Buddhism, my explorations are almost always from the perspective of science and science oriented philosophy. I’m more likely to read neuroscientists than philosophers of mind, and the philosophers I do read tend to be physicalists.

          But consciousness is definitely one of my long term interests, with almost certainly more posts on it on this blog than any other topic, and I’m always interested in hearing other viewpoints about it.

          • Jeff says:

            Hi Mike,

            Regarding the energy comments that I made, I think it’s safe to disregard any of my conclusions or disclosures about it as possibly being subjective and more in line with simulation and imagination than fact. It was your comments about energy and what it is that made me think of what I wrote. Energy is a kind of mystery to me. I understand the electromagnetic impulses that the brain sends out that cause action in various parts of the body, but then there is the energy that seemingly comes from the universe that also affects the mechanics of body/mind, à la the phases of the moon, etc. If consciousness is indeed a closed circuit, it would make perfect sense that everything affects everything.

            Because of your blog, I just picked up Torey’s book The Crucible of Consciousness. You’ve opened up something here that escaped me. My own background of religious/philosophical thinking did not allow me to consider consciousness from a fresh point of view. I owe you a debt of gratitude, though maybe you owe me one as well? Interesting how we can look at things in different ways.

          • Hariod Brawn says:

            Jeff, do let me know what you thought of Torey’s book. I think you’ll find it fascinating in terms of self-modelling and the circular forms of deception that give rise to it – that seemed to be the area that particularly interested you from comments you’ve left at my site. I have to say, the book’s quite hard going, but if you can stick with it, it’s incredibly rewarding and, to me, seemed exhilarating in its depth of insight.

          • Jeff says:

            I decided to read his later book first, ‘The Conscious Mind’, which is a more concise look at the evolution of consciousness and mind. Many things have already struck me. I will comment more elaborately when I’m done.

          • Jeff, thanks for mentioning that book. It’s cheaper and looks more accessible than the earlier one.


  10. Jeff says:


    I forgot to ask you to please explain this paragraph to me that you wrote above:

    “I do think we have to be careful about positing something like biological vitalism, the idea that there is a special energy or vitality to biology that non-biological systems don’t possess. Biology is complicated, profoundly complicated, and it can give the appearance that there’s something there more than the chemistry and physics, but everything ultimately seems accountable in terms of chemistry and physics, at least so far.”

    What do you mean by ‘special energy’? And how can a non-biological system possess vitality?


    • Hi Jeff,
      Vitalism was the belief that there was something unique about biology that made it distinct from other natural processes, a vital principle, a special energy of some sort. But the main point I was making is that there is no special energy, nothing unique to biology that makes it go.

      The closer biologists look (particularly molecular biologists), the more everything is explained by chemistry and electricity. Possibly in some cases (such as photosynthesis or magnetoreception) quantum physics plays a role, but if it does, it does so in a manner consistent with our understanding of it. Life operates according to the same principles as the rest of nature. What we call “life” are systems that are effective at maintaining their internal processes and in reproducing.

      This isn’t to say that everything about life is perfectly understood. Far from it. But everything we learn points to it operating according to the normal laws of nature.

      Hope this makes sense.

