A possible answer to the hard problem of consciousness: subjective experience is communication

In 1995, David Chalmers coined the “hard problem of consciousness”:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Chalmers was giving a label to an issue that has existed for a long time, but his labeling of it has given many people clarity about why they find many scientific explanations of how the brain and mind work unsatisfactory. Bring up the scientific understanding of consciousness, and someone is going to ask why all the data processing is accompanied by experience.

My last post discussed a movement in the philosophy of mind to label this distinction as an illusion.  And to be clear, I do think the idea that experience is something separate and apart from the information processing of the brain is an illusion.  That’s not to say that I think subjective experience itself doesn’t exist in some form.

But that still leaves the question of why we have subjective experience. In earlier posts, I've noted that to have any hope of answering that question, we have to be willing to ask what experience actually is, and I gave my answer: it is a system building and using internal models of the environment and of itself, in essence an inner world, as a guide to action.

I still think this answer is true, but in conversation with fellow blogger Michael, I realized that there may be a better way of articulating it, one that may resonate a little better with those troubled by the hard problem, at least with those open to explanations other than substance dualism or speculative exotic physics.

I think subjective experience is communication.

Consider that every aspect of experience seems to have a communicative value. Using the examples Chalmers gave in the quote above, the deep blue communicates information about the object being modeled, such as perhaps a deep lake, and the sensation of middle C communicates something about the environment (or, given the nature of music, or of art in general, the illusion of something).

Other examples are the vividness of red communicating ripe fruit, pain communicating damage to the body, severe pain communicating damage serious enough to perhaps warrant hiding and healing, or the delight of a child’s laughter communicating that the next generation of our genetic legacy is happy and healthy.

Now, the question you're probably asking is, communication from what to what? I think the answer is communication from various functional areas of the brain to the movement planning areas. The source areas include the sensory processing regions, which model the information coming in from the outside world, and the limbic system, which adds valences to the models, that is, judgments of preference or aversion.

Image credit: Anatomist90 via Wikipedia

The Limbic System: Image credit: OpenStax College via Wikipedia

(For those familiar with neuroanatomy, by movement planning areas I mean the prefrontal cortex, by sensory processing areas I mean the middle parietal lobe, posterior cingulate cortex, and all the regions that feed into them, and by limbic system I mean the structures commonly identified with that term, such as the amygdala, anterior cingulate cortex, hypothalamus, etc.)

Of course, as I noted in the post on consciousness being predictive simulations, this is actually a two-way communication, because those movement planning regions instigate simulations that require participation of all the source regions. But what we call experience itself may be the preparation and transmission of that information to the executive centers of the brain, executive centers whose job it is to fulfill the main evolutionary purpose of brains: to plan and initiate movement.

One quick clarification. This isn't an argument for a homunculus existing in those executive centers. This isn't the Cartesian theater. It's communication from the sensing and emotion subsystems of you to the action-oriented subsystem of you, and consciousness involves all the interactions between them.
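To make this concrete, here is a deliberately crude sketch in code. Everything in it is my own illustrative invention rather than anything from neuroscience: the sensory and limbic subsystems are reduced to a list of tagged messages, and the planner to a one-line rule. The point is only that "models tagged with valences, transmitted to a planner" describes a perfectly mechanical kind of communication.

```python
from dataclasses import dataclass

# Hypothetical message format: a percept (a model of something in the
# environment) tagged with a valence (a preference/aversion judgment).
@dataclass
class Percept:
    label: str      # what the sensory regions modeled, e.g. "ripe fruit"
    valence: float  # limbic judgment: positive = approach, negative = avoid

def plan(inbox):
    """A stand-in for the movement planning areas: choose an action based
    solely on the messages received from the other subsystems."""
    if not inbox:
        return "explore"
    # Act on whichever message carries the strongest valence, either way.
    strongest = max(inbox, key=lambda p: abs(p.valence))
    return f"approach {strongest.label}" if strongest.valence > 0 else f"avoid {strongest.label}"

# The sensory and limbic subsystems "transmit" their tagged models:
inbox = [Percept("ripe fruit", 0.6), Percept("tissue damage", -0.9)]
print(plan(inbox))  # -> avoid tissue damage
```

On this picture, what gets labeled "experience" would be the preparation and transmission of these messages to the planner, not some extra ingredient added on top of them.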

Now, it’s almost certainly possible to find aspects of experience that science doesn’t have a communicative explanation for yet.  But is it possible to find any for which there is no conceivable explanation?  Even if there is, even if we can find the odd part here or there that has no imaginable explanation, if we can find ones for the vast majority of qualia, doesn’t that mean that most of them are fulfilling a communicative function?  And that experience overall is generally about communication?

Indeed, if you think about it, isn’t this type of communication crucial for a brain’s evolutionary purpose?  How else can the movement planning portions of the brain fulfill their function other than by receiving information from the input processing and evaluative portions?

If so, for a philosophical zombie to exist, it would need to have an alternate mechanism for this communication. But why wouldn't that alternate mechanism simply be an alternate implementation of subjective experience? If this view is correct, zombies are impossible, at least zombies sophisticated enough to reproduce the behavior of a conscious being in a sustainable manner.

So, this seems to be a plausible explanation for what experience is, and why it exists.  It seems like a possible answer to Chalmers’ hard problem.

Unless of course, I’m missing something?  In particular, what aspects of subjective experience might not fit this conception of it?  Are they significant enough to discount the answer?  Or are there other problems I’m not seeing?

This entry was posted in Mind and AI. Bookmark the permalink.

74 Responses to A possible answer to the hard problem of consciousness: subjective experience is communication

  1. Steve Ruis says:

    The “feelings” associated with conscious processing are illusory, as we can prove by looking at people with phobias. In phobics, certain thoughts are associated with horrific feelings of fear. When reprogrammed, phobics feel neutral or even positive things when entertaining those same thoughts. If these things were actual “cause-effect” physiological happenings, this would not be possible, I suggest. Illusions can be replaced, one with another, through learning, so I suggest, as I believe you do, that these “sensations” associated with cognitive processes are related to learning things. For example, smelling a lion, feeling fear, and through “fight or flight” taking off running, may or may not be your best strategy. If these things were hard-wired in, we would be far easier prey. Being able to learn in such situations and to reprogram the sensations associated with the cognitive process seems a very valuable mental skill.


    • I’m not sure I’d necessarily label the fear from a phobia as an illusion just because we can be conditioned out of it. I’d also note that I think all mental processes, even ones about illusions, are physiological processes. (You can call me pedantic if you’d like 🙂 )

      That said, I agree with all your other points. I think the purpose of consciousness is to enable us to figure out what we should do in novel situations, when we have the time to think. In repetitive situations, or ones requiring rapid reflexive reaction, unconscious processes seem to dominate, with perhaps consciousness later making an assessment about what happened and learning from it.


  2. milesmutka says:

    One aspect that I don’t see often discussed is the experience of time passing. I just wrote about it on my blog, with title “The Pilot of Consciousness”: https://unfinishedwisdom.wordpress.com/2016/12/18/the-pilot-of-consciousness/

    There is some discussion in theoretical physics of the peculiar nature of time, as seen by humans, but no real bridge to the problem of conscious experience. Maybe if we could explain why we experience the passage of time, we could more easily formulate the right questions to ask about the nature of consciousness? The various quantum hypotheses come close to explaining something, but not close enough: quantum decoherence has some similarity to thermodynamics, as does the arrow of time, but there is too much hand-waving for my liking.


    • The experience of time is interesting. It’s totally subjective depending on what’s going on at the time, passing slowly when we’re idle and faster when we’re busy. If we didn’t have things in nature that marked time consistently, it’s not clear we would ever be able to agree on time spans. And, of course, when we look hard enough at nature with special and general relativity, it turns out even the passage of time is relative, with no absolute frame of reference. I’ll take a look at your post.

      I can’t say I’m a fan of quantum consciousness theories. I like your phrase “hand-waving”. It’s a good term for the fact that there’s nothing driving quantum consciousness theories except for strong feelings that classical physics simply can’t explain consciousness. I’m open to the possibility that quantum phenomena could be a factor, but I’d need to see evidence, or at least compelling logical necessity.


      • milesmutka says:

        “If we didn’t have things in nature that marked time consistently,”

        For measuring time with clocks, natural or mechanical, we need to have some sensory way of assessing how reliable they are. We do this by comparing the smoothness of their movement, or the evenness of the rhythm of their ticking, when we are paying attention to them. What are we comparing against?

        The intuition behind quantum consciousness seems tempting: explain two difficult problems simultaneously, just find that one piece of the puzzle that is missing …


        • I actually don’t see consciousness as nearly as much of a mystery as the wave function collapse / decoherence / whatever phenomenon, at least from a scientific viewpoint. I think there are many potential explanations of consciousness that completely fit within our understanding of how the world works. But it seems like every interpretation of quantum mechanics has to throw one or more fundamental principles of classical physics under the bus: locality, causality, counterfactual definiteness, etc. That said, I’m saying this as someone who’s never been particularly troubled by the hard problem.


  3. Hariod Brawn says:

    Mike, isn’t your task as a Physicalist (would you call yourself a Materialist, too?) to map out a mechanism as to how our apparent subjective experience comes to present as it does? It seems here you’re tackling the what, and more so the why, but not the how, and isn’t the how the really hard problem?

    I accept the question may (theoretically at least) resolve itself in some ultimate understanding of Materialist Reductionism, but you see the issue, I’m sure: If, as you maintain, (the aspect of) subjective experience is information flow and simulations alone, then how is this aspect presented as it is? Or is that too, like Chalmers’ same question, a non-question?


    • Hariod, I addressed the ‘why’ of it because that’s what Chalmers asked, and I included the ‘what’ because it seemed necessary to address the ‘why’. But I’m not quite sure I understand what you mean by “how is this aspect presented”. The first version of this reply dumped a ton of neuroscience, but then I realized that’s probably not what you meant.

      But if you do mean the neuroscience, it’s not clear to me that Chalmers actually considered the details of ‘how’ to be the hard problem. He tended to consign those kinds of issues to what he called the easy problems. (He didn’t mean “easy” in any absolute sense, since they’re actually very hard, just in relation to the hard problem.) His contention was that solving the easy problems wouldn’t solve the hard problem.

      Graziano argues that Chalmers has it backwards. He doesn’t consider the hard problem any kind of problem at all, but that what Chalmers calls the easy problems are actually the hard ones, at least from a neuroscience perspective.

      Personally, I think every problem that gets solved gets us closer to the whole picture. We can’t know with absolute certainty that it will all be a physical explanation, but all the scientific evidence points in that direction, and none currently points toward anything else. Of course, there could be new evidence tomorrow that changes that, but after centuries of scientific investigation not finding any, the probability seems increasingly remote.


      • Hariod Brawn says:

        Thanks Mike, and I get how coming at it from your angle negates the question of how matter gives rise to qualities not only of matter itself but also of those present as so-called ’emergent’ properties of complex organisations of it. For myself, I’m not yet ready to close everything down to neuroscience – perhaps that’s just my being relatively ill-informed, but as you know, I don’t see consciousness as being an exclusively cranial affair – notwithstanding your ‘nexus’ ideas. It’s a bit of a diversion, though I think we may be trapped in the gearbox of our own comprehension (both perceptually and hence scientifically) in that we inherently conceive of everything spatially – but let’s not go there just here. 😉

        It seems to me that Chalmers does ask the ‘how’ question, and as per the quote you left of his: “. . . the question of how it is that these systems are subjects of experience is perplexing.” The ‘how’ question asks for an explanation rather than a description. Your answer would appear to be that simulation is how it is done; is that correct? If so, then in a sense you’ve answered, but only, it would seem, insofar as it being an impressionistic picture of the possibility to organise matter algorithmically such that it gives rise to an emergent consciousness. It feels as if that’s still more of a ‘what’ than a ‘how’ answer, and that the ‘how’ answer only arrives by demonstrably verifiable means. Nomic and correlative links of themselves don’t appear to be sufficient in bridging the two aspects of materiality: on the one hand matter itself and non-conscious mental states, and on the other apparent subjectivity. How do the facts of the first category unimpeachably marry to the facts of the second, or have we hit our epistemic limits already, perhaps? I don’t see that has to refute Physicalism, and it may just be so due to our inherent cognitive limits.

        It seems reasonable to posit that a non-reductive, interdisciplinary vision may be needed to forge these required links between any third-person experience (scientific objectivity) and a subjective first-person one. From my position of relative ignorance, then that would seem to be fertile ground, one in which conceptual and representational theories overlap, not least of all in accommodating first-person phenomenological accounts themselves – they being one of the two aspects we’re trying to wholly accommodate within a unified conception of consciousness. It seems there are inherent weaknesses in both sides of the consciousness coin when each is viewed in isolation.


        • Thanks Hariod.

          I’m not sure I close everything down to neuroscience myself. I see a lot of benefit for collaboration between neuroscience, philosophy, psychology, even sociology and anthropology. But any theory of mind or consciousness that doesn’t take into account neuroscience, it seems to me, is like trying to understand car movement without looking at motors and transmissions.

          On Chalmers and ‘how’, I stand corrected. You are right. Chalmers does indeed use both ‘why’ and ‘how’ in his questioning. That’s what I get for not re-reading the quote carefully when I copied it. He almost never says ‘how’ in his interviews and forum events, but he does in that early paper.

          My answer here and in most of the series does more cover the ‘what’ and ‘why’ for three broad reasons. First, they’re only 1000-1600 word blog posts, so going into the neuroscientific details, other than the brief forays I make, probably wouldn’t be effective. Second, the neuroscience picture is far from complete. And finally, given that I’m not a neuroscientist, I’d be afraid of giving out bad info, since much of the neuroscience I read is either right at the edge of my comprehension or beyond it. (Mostly due to a very limited understanding of molecular chemistry.)

          I do think the ‘how’ is gradually being answered. Neuroscience is slowly chipping away at it, year by year. But, as you note, the ‘how’ that gets answered will be about the information processing and how it leads to behavior. Science can only study subjective experience by correlating self-report with neural events.

          But then, more science is done by finding correlations than most people want to admit. As Hume pointed out, we never actually observe causes, only deduce their existence by careful examination of correlations.

          The biggest problem with brain research is that most of it is done with animals, who can’t self-report. That leaves us with studying information flows and their relation to behavior, and deducing what we can about their conscious experience from it. Science has to make progress any way it can.


  4. Michael says:

    Hi Mike,

    I thought I’d continue here, given how close this post follows from where we left off, and first wanted to say I enjoyed your previous response. I had never heard of the Cartesian Theater, and enjoyed reading a little about it. I tend to agree with you on the functionality of what I’ll call body consciousness, and also with the notion that the physiological processes associated with it are indispensable to its healthy function.

    The idea that the subjective experience is information flow and communication feels to me much like the debate physicists still have over the nature of quantum phenomena like entanglement. Einstein couldn’t wrap his head around it because no mechanism was in sight to explain it, and the Heisenberg camp essentially said, we don’t need a mechanism, because we found it, and you’re looking at it. It’s a debate that continues to this day (the quantum thing; as an aside there was an interesting article at Quanta magazine about how an oil droplet riding a fluid wave exhibited many quantum phenomena in terms of its movement). I think the hard problem is like that, and I think this idea of equating experience to communication flow is basically saying there is no deeper mechanism. This is just it.

    My challenges are several with pushing the accept button. First, you asked in the previous comment what about subjective experience was superfluous to information processing that results in behavior, and I think the question has to be asked a little differently. Because isn’t it possible to write an algorithm that could beat the stuffing out of that old Atari game Frogger, and that appears to act out of self-interest, but is simply following codified instructions? If the answer is yes, as I think it is, then there is something more to your question that I think is implied. I think you are suggesting that there is something about more advanced information processing than would be required to navigate the world of Frogger that cannot take place without engendering, in and as itself, experience. And so I think it will be interesting to attempt to discover and map what that ingredient or threshold of complexity is or would be.

    Second, if the physics underlying consciousness is deterministic (which it may not be), and the flow of information is subjective experience, what value would there be in installing a decision-making agency that isn’t making any decisions? I think it would be important in this view to establish a meaningful relationship between the arising of this experience, its evolutionary efficacy, and the degrees of freedom it may or may not have. Otherwise it truly does not have a function.

    So is it reasonable, as a potentially testable idea, to suggest this idea you have presented makes the most sense only in the case that the underlying physical mechanisms of consciousness be shown to be truly “free”? And if they are to be truly free, and not deterministic, then perhaps we could attempt to find how the necessary degrees of freedom exist in the physical system.

    Michael


    • Hi Michael,
      Glad you found the previous response useful. I obviously did as well, since it inspired this post. Thanks for exploring this topic!

      That’s an interesting comparison to quantum physics. Einstein and others felt absolutely sure that there had to be hidden variables that would restore determinism, but Bell’s theorem (which I won’t pretend to understand) reportedly shot that idea down. Any theory of hidden variables apparently has to jettison locality, which is pretty much just as dear to determinists as determinism itself.

      Scientific evidence simply forced the issue with quantum physics. Einstein couldn’t accept it. Many others today still can’t. But if you’re going to be scientific, you pretty much have to.

      Is it the same issue with the functionalist theories of mind? I’m not sure. Einstein’s struggle with quantum physics wasn’t just a problem of intuition, but of giving up a philosophical paradigm that had worked for centuries. The hard problem seems to me more in the class of the struggle people had with evolution or heliocentrism, that is, an issue of intuition. It isn’t an issue for most scientists (at least not for most neuroscientists), but remains one for many philosophers and lay people.

      I fear I’m not following you on the Atari Frogger algorithm reference. (I am old enough to remember the game itself though.) I googled it, but it didn’t seem to explain the reference. Would you be okay with elaborating?

      My belief is that the physics of consciousness is mostly, if not fully, deterministic. This makes sense if you consider what brains are for, taking in information about the environment, and making decisions based on that information. Rampant non-determinism would undermine the survival advantage of that function.

      That’s not to say that chaos theory dynamics might not make precisely predicting what a brain might do forever impossible, even in principle. (Interestingly, if it is possible, this always raises the disturbing possibility that we’re copied minds in a simulation right now to see how we’d react in a particular scenario.)

      I actually don’t see that either non-determinism or unpredictability is required for this idea. But maybe I’m missing something? Why do you think it might be?


      • Michael says:

        Hi Mike,

        The Frogger idea was simply that an algorithm could be written that could play that game, and preserve quite well the virtual life of the little frog icon. This was my effort at demonstrating that there seem to be cases where subjective experience is not necessary for an algorithmic agency to enact self-preserving behaviors. When you ask what about subjective experience is superfluous, I thought showing a simple case where it is entirely superfluous might help get at the question of what is unique about human information processing, or more advanced information processing in general, such that the subjective aspect of it is indispensable.
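        A toy version of that algorithm makes the point. The grid, hazard layout, and rules below are purely an invented caricature (real Frogger is more complicated), but the agent they define "preserves its life" while plainly experiencing nothing:

```python
# A toy "Frogger" lane-crossing agent: the frog sits at a lane index and
# must reach lane 5. Each tick, the lane ahead either has a car or is clear.
# The agent follows two codified rules -- no self-model, no valences.

def frog_step(lane, next_lane_has_car):
    """Rule 1: if the next lane is clear, hop forward.
    Rule 2: otherwise, stay put and wait out the car."""
    return lane + 1 if not next_lane_has_car else lane

def run(traffic):
    """traffic[t] is True if the lane ahead of the frog has a car at tick t."""
    lane = 0
    for has_car in traffic:
        lane = frog_step(lane, has_car)
        if lane == 5:  # reached the far side safely
            break
    return lane

# The frog "waits" for cars and "seizes" gaps -- apparently self-interested,
# actually just following the two rules above.
print(run([False, True, False, False, True, False, False]))  # -> 5
```

        Nothing in this loop models the frog's own state, attaches preferences, or communicates between subsystems; the self-preserving behavior falls out of two fixed rules.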

        On the issue of the hard problem being more like heliocentrism than the challenges of the paradigm shifts in modern physics of the last century, it’s interesting to me the way we each approach that, as it perhaps reveals the different ways you and I perceive things, because I see it totally the opposite! I think that it is relatively easy to accept heliocentrism once the mechanism that explains the day-to-day experience is revealed. It looks quite obviously like the sun is traveling in circles around the earth, but I can grasp very clearly the mechanics of how a rotating earth and a stationary sun could produce the same visual effect. In this case, the “how” is plain to see once the illusion is stripped away. In the case of the hard problem, we don’t have an alternate mechanism yet that is readily understood, so when we strip away the illusion there’s no real explanation left. I think this is closer to the challenge physicists faced in the previous century, when their beloved ideas came apart at the seams and there were no mechanisms in place to fill that gap. There were equations, but classical physics relied on the ability to understand or envision the physical meanings of equations, and that sort of evaporated. Even today I wouldn’t necessarily say quantum physics is a study of mechanism. It is more of a description than an explanation.

        My point on determinism was simply that if the mind is the product of physics, and the physics of the brain is deterministic, then there is no functional advantage one could ascribe to the subjective experience or the promulgation of an illusory agent. It could still be that subjective experience arises as a natural law under certain forms of information processing or physical brain communications, but it wouldn’t in that case be reasonable to say it produced any advantage to the organism. Though I certainly see your point that IF the experience itself IS the communication—then of course the communication itself is advantageous. Unlike the advances in modern physics of the past century, however, I’m not aware of data that makes a strong case for subjective experience being the nature of reality, free of mechanism, for certain classes of systems.

        So I think that while saying subjective experience IS communication takes a few questions off the table, it leaves the big ones in play about how information processing can be equated to subjective experience in the first place. It seems to involve the resolution of this difficulty by saying the difficulty isn’t real to start with: subjective experience is real, and the physical neural signaling we observe in neuroscientific research is real, and we don’t need to understand how they’re related to one another because they’re not in fact two separate things. They are exactly the same thing.

        Michael


        • Hi Michael,
          Thanks for the clarification on Frogger. But I think you found a scenario where consciousness overall was superfluous. There are lots of processes in the universe where that’s true. But the question is, is there any aspect of subjective experience that has no functional value, that doesn’t enhance the survival prospects of the entity having the experience? (That is, aside from when things are malfunctioning and they’re experiencing hallucinations and the like.)

          On heliocentrism, I think you find it easy to accept because you grew up with it. You probably learned about it as a very young child. But consider that it was over a century before Copernicus’ idea became commonly accepted. Galileo was persecuted for promoting it. People had a very difficult time accepting it. It seemed to violate their deepest intuitions about the Earth sitting still and the rest of the universe moving.

          On the hard problem not having any solution, I actually disagree, but this may just come down to our differing intuitions about it. I think there are numerous potential explanations, but most of them, at least the scientific ones, are not intuitive. It’s an issue of those explanations not being acceptable to people, not that they’re not there.

          On your comment about the illusory agent, are you saying that the self concept seems unneeded if there is no free will? (BTW, I’m a compatibilist. I see the mind as fully existing in the physical universe, but still see social responsibility as a coherent societal concept.) I think the self model brings solid advantages to an organism. It needs to understand the distinction between itself and the environment. This seems crucial for its survival.

          I think you’re right that I am saying that subjective experience and the information processing are the same thing. That said, mapping which information processing corresponds to which experience is definitely a worthy endeavor. Indeed, it’s the only way to ultimately test this idea. Because if there are subjective experiences with no equivalent information processes, then the idea might be falsified. (Although we might find information processing that explains why we think we had that experience.)

          I’m not quite seeing the reason for the difficulty in equating the communication and experience. Could you maybe elaborate on it? I want to make sure I understand any potential problems with this idea. Is it just the idea itself? Or is there a place where you see dots not connecting, so to speak?

          Thanks again for driving this conversation. I’m finding that it’s sharpening my thinking on it immensely!


          • Michael says:

            Hi Mike,

            Work has been all consuming the past few days. I’ve enjoyed the conversation, too, and wanted to follow up on your question about the difficulty in equating communication and experience.

            But first, to our previous points, I think you summarized fairly well what I had said. I think one distinction between what we were both saying is that you were ultimately asking something like, “when consciousness arises, what about it is superfluous?” and I was trying to answer “what about consciousness itself is superfluous?” So I think we agree there are processes in which consciousness isn’t present or necessary, though perhaps it would be advantageous if it arose in those situations. Who knows. But clearly I was answering a different question.

            On consciousness and free will, I would say I was still on a different track. I was still on the question about consciousness itself being superfluous, and my point was simply that if decisions are made by a deterministic brain, then consciousness is also superfluous. But what I was trying to say was that if the brain was deterministic, and IF consciousness was found to be something extra the brain produces–something that takes a lot of processing power, for instance, to fabricate that personal experience–then it wouldn’t seem meaningful. Why would the body evolve the rendition of a subjective experience it doesn’t need?

            So all of this gets back to your idea here, which I’m paraphrasing to explore so correct me where I misstep, but I see your idea here being that a) consciousness is not something the organism has to create or spend any extra resources on; it IS the communication necessary to manage such a complex organism as the human being; and b) whether the brain is deterministic or not, since consciousness is not manufactured in any way, but is simply innate to the specific ways organisms process information, then the issue of free will and consciousness is sort of moot. Consciousness is not some other thing or third thing that organisms produce, which would seem a waste of resources if it had no authority (to me), but is the thing that particular types of information-processing organisms are.

            That said, while I can appreciate the elegance of this approach, you asked what difficulties I saw in equating communication to experience. I think my answer is that in a way it is not an answer at all until it is further developed. I think the basic assertion is that consciousness is what some forms of communication feel like. But the hard problem has never been whether we have experience, but how we get from physics and chemistry to experience itself. By equating them, the question has no space in which to live. It is squeezed out of existence, but that is not an answer. It could only be the answer in the special case when the question itself is based upon false premises.

            So what are the premises?

            One, we have personal, subjective experience.

            Two, we have physical, chemical and electronic processes occurring within our bodies.

            Three, physical, chemical and electronic processes are the same everywhere in the universe that they occur, meaning that no special magic or vitalism occurs within the human organism or other biological organisms.

            Four, there are no known physical, chemical or electronic processes outside of living organisms that have been shown or thought to reproduce subjective experience.

            Given these premises, how can we equate One and Two, without addressing Three and Four? I think if these four premises are taken to be true, then we can’t simply ignore the latter two premises. To equate One and Two we have to introduce a new idea that explains how the physical, chemical and electronic processes associated with communication within organisms differ from those same processes that occur in other systems that do not have subjective experience. If we cannot do that, then when we equate One and Two above, we are essentially stating that either Three or Four is false. I don’t think those would be very logical falsifications given our present state of knowledge.

            So to me, the equation of communication with subjective experience hinges upon explaining the physical differences between the processes at work in the human organism that are conscious, and the processes at work elsewhere that rely on the same physical, chemical and electronic properties of nature, but are not conscious. And aren’t we right back to the hard problem now? The hard problem is “how and why does subjective experience arise”? So I think to make the leap you are asserting, this still must be answered in some form, or the premises above must be changed.

            I’d be interested to know what you think of these assertions, because honestly Mike, this was my first effort to make them so plain. Like you, I’ve enjoyed this exchange very much and it has helped in my own thinking.

            Michael

    • Hi Michael,
      Thanks for the clarifications on the previous points. And you did an excellent job summarizing my position. Thank you!

      To answer your thoughtful query, my reply would be that your premises are incomplete. Notably, equating experience and electrochemical processes isn’t what I’m doing. What I’m doing is equating experience and a particular organization of electrochemical processes.

      To make this a bit clearer, I’m going to adapt your premises for another proposition. I’m doing this to make a point, so I sincerely hope you don’t take offense. It’s not at all meant to be disrespectful, only clarifying.

      One, we have the video game, Angry Birds.

      Two, our electronic devices (phones, Roku, etc) have electrical processes occurring within them.

      Three, electrical processes are the same everywhere in the universe that they occur, meaning that no special magic or vitalism occurs within the electronic devices.

      Four, there are no known physical processes outside of certain electronic devices that have been shown or thought to reproduce Angry Birds.

      Obviously in the case of Angry Birds, the answer is that it’s the particular architecture, the design, the organization of the electrical processes that are key. Programmers and engineers are paid a lot of money to find the right arrangement to produce Angry Birds. Again, I think this is the missing piece you were looking for.

      Now, you could turn around and say that organization is the vitalism I dismissed earlier. I wouldn’t necessarily object to that, except to note that it’s not what most people historically meant by that term. To which you could reply that my description of consciousness isn’t what most people mean by that term. I’d say that in both cases, these things as they were originally conceived don’t exist, and it’s a matter of philosophy if it’s useful to apply the words to what does exist.

      So we could say that consciousness doesn’t exist, but then everyone would stop listening, and we’d have to find another word to reference our subjective experience. When faced with that kind of dilemma, my philosophy is to use the traditional word but explain that it isn’t what it seems. (I’d probably use “vitalism” if it was still in circulation, but it’s mostly defunct now, never evolving past its original meaning.)

      Now, you might ask what is this organization that leads to subjective experience. My description in the post of the sensory and emotional centers communicating with the executive centers is very high level, and admittedly over-simplified, but that’s the basic idea of the organization. The actual architecture is horrendously complicated and certainly not completely understood yet. But the parts that are understood seem to point in the direction I described. Of course, there could be new findings published tomorrow that change the picture.

      I hope this all makes sense and you feel it adequately addressed your assertions. Let me know if I missed anything.

  5. Jeff says:

    Mike,

    I am slowly reading Torey’s excellent book The Conscious Mind. His model of the evolutionary process of the brain, from single cell to the complex brain of Homo sapiens, is nothing short of an amazing feat. If his model is followed through the transition from Homo erectus to sapiens, the major development in man’s brain, whose direction is unique from animals, is the ability to develop what he calls a ‘motor arm’, which helps to develop a system of language. He describes this process of developing language through repetition of sensory information, the creation of words to represent it (experience), and the ability to scan this vocabulary/experience, which is composed of a myriad of subtle impressions whose varieties and qualities this scanning ability has to distinguish. This scanning seems to be a reflexive mechanism, and the repetition of it gives rise to ‘authorship’ of the word/experience and hence a self, something creating and controlling this activity. All of the above is part and parcel of the evolution of the human brain and is ‘automatic’, much like a computer.

    Because I come at this from the philosophical/religious/mystical point of view, the sense of self is a central theme that appears in every school/model, no matter the geographical location. It is a universal element in the brain’s percept. The so called ‘job’ of the seeker/adept, is to somehow go beyond this sense of self and experience life that is free of ego/self. I am over simplifying this but I think you can follow the general direction. Because both the scientific investigators and ‘spiritual’ investigators have biases against one another, it is only in recent history that this separation on both sides is being bridged.

    The common problem with the scientific and spiritual investigation of self, subjective experience, is the proof that it is possible to live without this activity. Because science is somewhat of a newcomer to this field, there is little that most neuroscientists have contributed to this possibility, concerning themselves with mechanics only. Whereas, we have a large body of testimony from various religious and philosophical sources talking of transformation and its possibility. In fact, the evidence for intuitive and mystical experience, thousands of years in the making, is so strong that it has escaped a scientific basis for its occurrence. There are many reasons for this, but I don’t want to go into that now.

    This same sense of self, which Torey describes as being created through the repetition of scanning seems to be a natural function of the brain used for sociological and communicative purposes but has somehow usurped a position where language, the result of ‘naming’ and the ensuing identification with that becomes the primary ‘problem’ (ownership, subjective experience, etc.) of mankind and his feeling of separateness from others and the universe. This problem is demonstrated through the thinking and behavior of humans. Upon contemplation of all this, and the possibility of being free of this naming and identification, I don’t think there is anything that one can do volitionally to eradicate this. Certainly, we can modify and make things more comfy for ourselves, but no real transformation takes place that leaves us in utter harmony with life.

    This brings me to the only conclusion that I feel intuitive about and which has been mentioned in the religious/contemplative literature throughout the ages, that there can come a moment in which this linkup of a self to this image making/naming process stops and with its cessation, the body/mind once again can function in a way without this ‘burden of self’ and the narrative that we create of our life, and live in an harmonious way each moment, not just in moments of insight. Freedom, not from something, but as our natural condition.

    If science can come up with a procedure :-), not lobotomy, or a drug that takes away this sense of personal authorship/ownership/experiencer, then I will see the usefulness of science in this area. I am hopeful, but so far, it is just a collection of data that is dryly discussed and written about. Now that we have heated toilets and drones, and yes surgeries and many things that can make us more comfortable, where the hell is the peace that passeth understanding? 🙂

    Please feel free to rip this apart and come up with something more realistic, I’m begging you!

    • Jeff,
      I can’t comment too much on Torey’s model since I haven’t read any of his stuff. I do have a reflexive suspicion any time someone makes too much of the difference between humans and animals, but I would definitely agree that language, and more broadly symbolic thought, is a major distinction. Our ability to designate a label or placeholder for a concept or complex of concepts, and then manipulate the placeholders, is what has allowed an African hominid species to punch well above its original ecological niche.

      On spiritual mysticism and similar concepts, I recognize that many people find enormous comfort and meaning in them. Within the scope of our subjective experience of the world, I don’t doubt that it can be enormously beneficial.

      But as a guide to reality, I think we have to be cautious. The problem is that subjective experience is, well, subjective. Ultimately it’s bound and integrated with our worldview, our model of the environment. And our models are often inaccurate.

      Another way to think about this is to suppose a computer tells us that it has a peripheral attached that we know is not attached to it at that moment. Something has gone wrong with the computer’s model, and we have no real trouble seeing that. Suppose the computer tells us that it is a Mac laptop when we can clearly see that it is a PC desktop. Again, we have little trouble seeing that there’s a problem with the computer’s model of itself.

      But when a human insists that their mind works in a way that there’s no objective evidence for, the possibility that the human’s model of itself is at fault seems to be one that people have a very hard time accepting. No doubt because in this case, we are the machine and we feel the model. It is our very perception of ourselves and the world. Considering that it might be wrong is difficult, but I think crucial for making progress.

      Now, I don’t think we should ignore subjective experience and what it’s telling us. But we should always be aware that introspection is an unreliable tool for understanding the mind. It seems to have evolved for checking the mind’s state, but not for actually understanding it. Our introspective understanding of ourselves should be viewed as a caricature of the real thing, a user interface masking extremely complex mechanisms.

      I do think the self exists. We have body image maps in the brain, and a theory of mind model which can be pointed inward. But they’re models, just like all the others, and can be wrong.

      Hope this makes sense, particularly in the context of your comment. Again, I have to thank you for making me articulate this! It’s generating verbiage that will be useful later.

  6. Jeff says:

    Mike,

    I agree that we are the machine, no matter what we ‘think or feel’ about it. But, there is a further reflection that we, the thought process, are separate from it, giving rise to the sense of ‘experiencer’. This is what we mean by subjective experience, not the flow of information. While it is a necessary element to feel that something is happening to us for safety and survival, and communication, this sense has somehow gone awry and created a distortion. If we have the ability to create a false sense of being threatened through an imaginative process, not an informational one, how is that not a distortion? And, how is that part of a natural process of functioning? These kinds of imaginations are definitely illusory in the sense that they are self created and not sensations that originate in the environment around us which the senses are detecting.

    So, yes I think there is a self, but it is not what we imagine it to be because models are already built on images and language which is a step away (an interpretation) from what experience and reality is.

    Even if science comes up with a model answering every possible question about the brain, where does that leave this subjective distortion that everyone is living? I am not positing an outside agency that magically restores ‘something’. I don’t see what use the scientific information has regarding the state of all of us and the ‘problems of living’ that are psychological. Do you see what I’m getting at?

    • Jeff,
      I agree that the self is not what we intuitively think it is. The most convincing theories I’ve read all say that it is a model, an internal data map of our body and our state of mind. But it’s a simplified one, an executive summary. It’s like the graphical user interface on your computer, which gives you information on the computer’s operations, but is actually a caricature of the real thing.

      I think the reason it’s like this is a relentlessly pragmatic one. Having detailed, accurate information on ourselves is not an effective survival mechanism, for the same reason your computer doesn’t make you manipulate transistor voltage states. The model evolved to increase our chances of survival, not to give us information about ourselves.

      But for us, when the environmental models are assessed in relation to the self models, and the result causes the limbic system to generate positive or negative emotional reactions, it all manifests as our sense of an experiencer having the subjective experience. It doesn’t feel like a model, because the model is the very stuff of us. We are unable, subjectively, to detach ourselves from it and experience it for what it is since it is part of the mechanism of experience.

      On science helping with ‘problems of living’, that’s a good question. I think anything we can learn will help, although the manner in which it helps may not be evident while we’re learning about it. In the meantime, neuroscience’s chief benefits will be for clearly biological problems, such as brain tumors, dealing with strokes, and similar issues.

      And it may be that, as a practical epistemic matter, for many psychological issues, it may never make much sense to try to deal with them at the neuroscience level. I definitely think some of them will be helped by that approach, but it’ll likely depend on the exact issue.

      • Jeff says:

        Mike,

        I like your analogy of the GUI of the computer. And, I must say, that I’ve never considered the possibility that having detailed, accurate information on ourselves might not be an effective survival mechanism. Is this your own take or have you come across it in someone else’s? I’m not exactly ready to swallow this one, as I have no way of proving it false, but it would seem odd that the evolutionary process would be playing games with itself as a subjective ‘entity’. I need to marinate this concept some more.

        When you say “when the environmental models are assessed in relation to the self models, and the result causes the limbic system to generate positive or negative emotional reactions, it all manifests as our sense of an experiencer having the subjective experience. It doesn’t feel like a model, because the model is the very stuff of us.” This is the activity, the scanning of all models through language, its symbols, and all of the information we have gathered from centuries of repetitive experiences that gives rise to a subjective ‘entity’, the experiencer. You say, we are unable to detach ourselves from it and experience it for what it is since it is part of the mechanism of experience. That is exactly what I am getting at. It is impossible to do anything about it. All effort to change this fails including the detachment. This is why we are so fucked up psychologically because we never accept these processes as us and we have been told that we must change and given models to change to, like the perfect man, the holy man, the all knowing, all seeing blah blah blah. The enlightened one, the one wise one, the detached one. One who is not greedy, not this, not that. All of that is conditioned information, not reality.

        To put it simply, when we stop trying to escape or change what we are, this subjective experiencer, is seen as part of this whole flow and is intuitively known as just that, not something separate from it. Behavior is changed and the neurotic fantasies and chatter that is incessantly going on stops being the ruling principle. This has psycho-physical effects throughout the organism. A sense of connectedness is felt and lived on the conscious level. This constant indulgence of conceptual thinking then becomes a tool rather than a runaway habit that most people are lost in. To me, that is when science can achieve all the helpful tools for survival rather than the machinery of death.

        Science is not providing means of dealing with the problem of a subjective sense of self. It is clearly not interested in this except to churn out pills and surgeries to anesthetize or cut out that which ails us. This is what real contemplation is all about, to look with attention. Your attention is the basic tool to be aware of what is. The brain needs to pay attention to what it is doing to create this sense of separation. All the rest follows very naturally.

        • Thanks Jeff. On the GUI analogy, I got it from neuroscientist Michael Graziano. Daniel Dennett uses analogies like illusions in a magic show, but Graziano prefers the GUI one because it emphasizes that the “illusion” isn’t a mistake or breakdown of some kind but an important feature.

          I don’t know if you know any computer programming, but in large information systems, best-case designs call for subsystems to interact with each other only in a limited, carefully designed way. It keeps dependencies between the subsystems to a minimum. It would make sense that the brain’s various subsystems might have evolved to interact in a similar fashion. (Although the ad-hoc nature of evolution probably means it will still be far messier than any engineer would tolerate.)
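
          Purely as an illustration of that software idea (a hypothetical sketch, not a model of the brain; all the class and function names here are invented), two subsystems that interact only through one narrow, explicitly defined data type might look like this in Python:

```python
from dataclasses import dataclass

# The narrow interface: the only data structure the two
# subsystems share. Neither sees the other's internals.
@dataclass
class Report:
    summary: str
    urgency: float  # 0.0 (ignore) .. 1.0 (act now)

class SensorySubsystem:
    """Owns raw input; exposes only condensed Reports."""
    def __init__(self):
        self._raw_buffer = []  # private; invisible to other subsystems

    def observe(self, signal: str, strength: float):
        self._raw_buffer.append((signal, strength))

    def report(self) -> Report:
        # Condense private state into the shared interface type.
        if not self._raw_buffer:
            return Report("all quiet", 0.0)
        signal, strength = max(self._raw_buffer, key=lambda s: s[1])
        return Report(f"strongest signal: {signal}", strength)

class ExecutiveSubsystem:
    """Decides actions based only on Reports, never raw data."""
    def decide(self, rep: Report) -> str:
        return "act" if rep.urgency > 0.5 else "wait"

senses = SensorySubsystem()
executive = ExecutiveSubsystem()
senses.observe("rustle in grass", 0.9)
senses.observe("birdsong", 0.2)
print(executive.decide(senses.report()))  # prints "act"
```

          The point of that design discipline is that either subsystem’s internals can change freely, as long as the narrow Report interface stays stable, which is the minimal-dependency property described above.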

          “This is the activity, the scanning of all models through language, its symbols, and all of the information we have gathered from centuries of repetitive experiences that gives rise to a subjective ‘entity’, the experiencer.”

          I actually think it happens at a more primal layer than language. Or more specifically, I don’t think language or symbolic thought is necessary for it to happen. Certainly language is an integral part of human cognition, but the interaction of the models I describe has a good chance of happening with any vertebrate, and even many invertebrates, most of which have nothing even approaching language.

          “This is why we are so fucked up psychologically because we never accept these processes as us and we have been told that we must change and given models to change to, like the perfect man, the holy man, the all knowing, all seeing blah blah blah.”

          I actually look at a higher layer of abstraction for the origin of those issues: evolutionary psychology. We evolved in a certain environment for a certain lifestyle. However, the environment we live in today is radically different from the African savanna. The instincts and nature we developed are often a poor fit. It’s why we often struggle to fit into the roles and attitudes civilization pushes us into, some more than others.

          That, and evolution isn’t a tidy engineer. It leaves us with many competing instincts and impulses. I think the main purpose of consciousness is to help us decide which impulses we should heed in whatever circumstance we’re currently in. But often the necessary choices leave a portion of our instincts unsatisfied. This creates needs in many of us that will never be satisfied.

          • Oscardewilde says:

            But wait a second. With all due respect. Aren’t you now just saying in your last paragraph that when we feel hungry we must eat.
            In my opinion that’s what everybody thinks. I think it’s dualistic too. You say you need consciousness for causality and that being conscious brings some extra aspect to it. For me Chalmers describes it best with his philosophical zombie. Why do simulations need to be felt conscious. In my opinion you give dualist explanations. Why can’t all information processing not be unconscious. You say the radio is converting its bits to acoustic waves to listen to itself what it means. I might be mistaken, but to me your explanations don’t beat the philosophical zombie yet. (interesting as your thinking is.)
            Greetings

          • Jeff says:

            Mike,

            You say ‘I actually think it happens at a more primal layer than language. Or more specifically, I don’t think language or symbolic thought is necessary for it to happen. Certainly language is an integral part of human cognition, but the interaction of the models I describe has a good chance of happening with any vertebrate, and even many invertebrates, most of which have nothing even approaching language.’

            We are talking about the creation of a subjective entity. Perhaps the groundwork for such a creation happens before the advent of language, but this is where it becomes ‘personal’ and conscious. Our thought processes are inundated with information with which we form the subjective state of I, Me, Mine. This is not that difficult to see. The various environments will also help to influence the ‘flavors’ of subjectivity, but we are only speculating where and when this subjective entity began taking shape and why.

            What is relevant to me and many others is the incessant intrusion of this subjective activity into every aspect of how I relate to the world and myself, the other and I, the outer and inner. We color every interaction with this sense of subjectivity, both inner and outer. And, the outcome is a sense of isolation and separateness from the world, and a ferocious defense of what we think, feel, and believe. We want to control and dominate what we experience and this becomes a neurotic activity of the brain. This is common to mankind, no matter the culture or geography.

            Science is not involved with this activity at all. They want to know the big picture, but the big picture is colored by the little picture, our conditioning, the information in our brains, our memory bank. How can you arrive at any clarity into this without self observation? When we sit quietly observing the process of thought and feeling, and the information that is informing us, we discover many illusions, inaccurate information, and what it is that we are doing to perpetuate the neurotic self. In fact, this is just the tip of the iceberg, the side effect of being present to this ongoing activity of self creation. There is actually nothing mystical about it, yet there is this association with the activity of sitting quietly and contemplating all of this that brings up these images in many.

            It is possible for the human brain to process information in a more efficient and holistic way than most people realize when this subjective sense is not interfering. The challenge for you, Mike, is to engage this and then see where this fits in with your scientific interests. Without contemplation of yourself, you can never get the big picture, you only get theories to work with. Theories never give you certainty about reality. Reality, in a sense, is the absence of theories. It is a lived state that subjectivity doesn’t touch. Doesn’t this interest you at all?

    • Jeff,
      I don’t disagree with any of your points, except when you say science isn’t involved at all. I think science can be and is involved in trying to understand all of this. When you get down to it, it’s what psychology is all about.

      You ask if it interests me. I’m not quite sure what you may be thinking I’m not interested in. My explorations of the human experience do mostly come from the scientific perspective. The reason is simple: I trust science more than any other source of knowledge.

      That trust developed over several decades as I ultimately found that the information from most other sources tended to be wrong to varying degrees, infested with confusion, cultural biases, or wishful thinking. Not that science is perfect. Far from it. But it has generally held up better than all the other sources.

      I also like philosophy. But philosophy has always been better at asking questions than providing answers. What answers it does provide are either tautological (albeit often non-obvious) or just hypotheses, some of which may someday be testable by science.

      Some people who read this blog have been bothered by the fact that, while I love mysteries, I’m not content to merely marinate in them. To me, they’re problems to be solved. I have little in common with people who bemoan Newton for removing the mystery behind the rainbow. I can understand why people want to retain mystery, because it leaves open the possibility of all kinds of fantastic answers, but I prefer to know the actual answer.

      Or were you more asking if I’m interested in how to use this information? I have to admit that I don’t do a lot of that type of posting here, although here’s one example: https://selfawarepatterns.com/2016/08/08/dont-trust-your-emotions-they-will-betray-you/
      In general, I think there are many successful ways to live and I’m reluctant to preach to others about it.

      Jeff, I don’t know if any of this has answered your question or just spoke past it. If not, please feel free to follow up and pin me down.

  7. Hi Oscardewilde,
    “Why do simulations need to be felt conscious.”
    I think the question I would ask in return is, why do you assume feeling isn’t communication, isn’t information? Asked another way, what about it is uninformative?

    “Why can’t all information processing not be unconscious.”
    Without consciousness, a system would not be able to deal with novel or complex situations which are not conducive to rule-based responses. Indeed, when we deal with situations that are routine enough, where we’ve developed habits, autonomous responses, we do it without much if any consciousness. Likewise, when we deal with situations that require a very rapid response, we often do so with minimal consciousness, only consciously reflecting later on what happened. Consciousness seems to happen when we have to weigh trade-offs, in other words, when we need to simulate the results of various actions.
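
    To picture that distinction in code (a toy sketch only, not a cognitive model; the habit rules and scoring function are invented for illustration), routine inputs could be dispatched to cached habit rules, while novel inputs trigger an explicit weighing of simulated outcomes:

```python
# Toy dispatcher: habitual (rule-based) responses for familiar
# situations, explicit simulation only for novel ones.
HABITS = {            # cached stimulus -> response rules
    "red light": "brake",
    "green light": "proceed",
}

def simulate(action: str, situation: str) -> float:
    """Crude stand-in for imagining an action's outcome:
    scores a candidate action for an unfamiliar situation."""
    # Invented scoring: cautious actions rate higher when the
    # situation is unknown.
    return {"stop and assess": 0.9, "proceed": 0.3}.get(action, 0.0)

def respond(situation: str) -> str:
    if situation in HABITS:
        # Routine: fast, no deliberation needed.
        return HABITS[situation]
    # Novel: weigh trade-offs by simulating each candidate action.
    candidates = ["stop and assess", "proceed"]
    return max(candidates, key=lambda a: simulate(a, situation))

print(respond("red light"))             # habitual -> "brake"
print(respond("ball rolls into road"))  # deliberative -> "stop and assess"
```

    The routine path never touches the simulation step, which loosely mirrors the point above: habits answer instantly, and the costlier weighing of outcomes only engages for situations the rules don’t cover.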

    “You say the radio is converting its bits to acoustic waves to listen to itself what it means.”
    I do think experience involves portions of the system talking with other portions, but figuring out “what it means” requires all of them working together. Indeed, figuring out meaning requires the movement planning centers initiating simulations in the input processing and goal oriented centers. But that doesn’t mean the movement planning center has the meaning; it only has the movement planning related consequence of it. The meaning itself is spread over all portions of the system. As I said in the post, this isn’t an argument for a homunculus in the movement planning centers.

    • Jeff says:

      Mike,

      Can you give an example of novel or complex situations which are not conducive to rule based responses?

      If a response to stimuli cannot be given at an autonomous level, are you saying that imaging and memory come into play at the level of thinking, which is a combination of imaging and memory? Are you equating consciousness with this type of information processing, but not at the autonomous level?

  8. Oscardewilde says:

    But I’m not saying anything new, am I? It’s my understanding that I pose the question exactly as I have heard it posed by Chalmers on various occasions. The challenge is to explain why not all processing goes on unconsciously, or without consciousness! So you think you have answered that question, the middle bit. I’m not saying it’s wrong; I don’t know enough, I am afraid. It’s interesting you don’t see my point. Does your explanation really need it to be conscious? (I must admit I have often made points about something, not understanding it yet too well.) Can you still explain to me one more time why the non rapid response has to be conscious?

    I totally agree with you on the first bit. Consciousness does look a lot like communication. For me it is only communication not for yourself but for someone else. The problem is. Consciousness is considered private information. I still have to find a way around that one.

    Greetings and thank you for your interesting posts

    • “can you still explain to me one more time why the non rapid response has to be conscious?”

      The non-rapid response doesn’t necessarily need to be conscious, it just has the opportunity for it. The problem is that the rapid response doesn’t leave time for it, typically until after the event, when we can usually assess what happened. This makes sense if consciousness is running simulations. When we’re doing something routine, the simulation engine doesn’t need to be engaged. Instead our mind wanders, running simulations on other things.

      But when we’re doing something novel that we’ve never done before or that requires balancing several desirable actions, then the simulation engine focuses on that task, making ongoing predictions and adjusting as sensory input and emotional reactions come in.

      You ask why this requires consciousness. Can you point to a non-conscious system that can handle tasks that are novel or require trade-off considerations? Animals without brains, such as C. elegans worms, can only do simple rote tasks. We have robots that can do routine tasks, but anything novel is still beyond them.

      “For me it is only communication not for yourself but for someone else. The problem is. Consciousness is considered private information.”

      I think the answer is that it is communication between parts of the brain. Maybe I used the wrong word with “communication”, but I was trying to convey that every aspect of subjective experience is information, a part of the information processing that eventually feeds into movement decisions, the brain’s ultimate output.

      Like

      • Oscardewilde says:

        Thank you for your extended explanation. What can I say? It seems we are unfortunately misunderstanding each other, with no quick way out. (Side note: today I heard about research in which rejected male fruit flies preferred alcohol-infused food over their normal brew of yeast and sugar. They do what all men do when rejected by women: drink alcohol. ☺️ But are they consciously enjoying the alcohol? Did they run a conscious simulation of their choice between normal food and alcohol? I never considered fruit flies conscious.)
        But yes, I think you misunderstand the problem, how Chalmers poses the question. I think Chalmers is very well aware that brain parts are communicating, and he wouldn’t be satisfied with your explanation. I think he would ask the same as me: why would this communication equal consciousness? Why does this communication give rise to an inner life? (Please understand my point. You are quoting him yourself.)
        Also, I think you’re building a Cartesian theater and then going full force denying you’ve built one. If you deny this, then what are the options?
        Consciousness is epiphenomenal, or it emerges from the structure of this communication. Or is this communication, running simulations, consciousness itself? I don’t know what you call this philosophically; materialism or physicalism? Sorry, it has been a year since I listened to that philosophy course.
        But we can’t argue forever, so I will stop for now. I hope the future brings more insight for either one of us, and I will continue reading your very interesting posts.
        Greetings

        Liked by 1 person

        • I can’t say whether fruit flies are conscious. I don’t know if their behavior ever rises above the level of autonomous responses to stimuli. But they have more neurons (250,000) than some vertebrate species that have been observed to engage in trade-off reasoning. If they are conscious, it would obviously be at a much lower resolution than a mammal’s, much less a human’s.

          It does seem like one of us is missing something. Sometimes at these junctures, it’s helpful for each of us to note what would change our mind. My mind would be changed by a significant aspect of subjective experience that my idea doesn’t address, an aspect that is not informative or communicative.

          Here is my question for you. (You don’t have to reply unless you want to, just think about it.) Without simply repeating the question, what would an answer that you’d find acceptable look like? If you were writing a hypothetical answer, what would it be?

          Thanks for reading!

          Like

  9. Corey says:

    Communication may point us in the right direction, but it can’t solve the hard problem by itself. Your communication theory, like most information theories of consciousness, presupposes that there is already an awareness process to perceive this information and have a subjective experience of it.

    For instance, my computer is communicating data to your computer right now, but there is no subjective experience produced between them (as far as we can tell). The different areas of the brain act as different computers communicating with each other, but that alone doesn’t explain how they generate consciousness.

    Liked by 1 person

    • Hi Corey,
      Would you mind elaborating where specifically you see me presupposing awareness? As you note in your example, it’s possible for two non-conscious systems to communicate, so what about the subsystems of the mind communicating presupposes awareness?

      Your example is a common criticism of information processing theories of mind. If consciousness is information processing, why isn’t all information processing conscious? My response is to point out that the game Angry Birds is information processing, but the example you describe is not Angry Birds. The reason is the same in both cases: Angry Birds and consciousness are each specific applications of information processing, not information processing in general.

      Like

  10. What’s “hard” about consciousness, to me (regardless of what Chalmers says is hard), involves the dynamics by which phenomenal experience is created. Furthermore, as a naturalist I place this question squarely in the domain of the physical sciences, particularly neuroscience and computer science. So for anyone who’s interested, the task here is to theorize the manner by which evolution accomplishes this feat, and then hopefully demonstrate the validity of this theory by using it to build something which seems to function somewhat through phenomenal experience. I won’t be holding my breath however, since I suspect that humanity will always remain far too ignorant, as well as mechanically obtuse, to accomplish such a thing. (Evolution has tools at its disposal which seem far beyond ours.) Though that’s about all I can say on the hard “how” of consciousness, I’m extremely opinionated regarding the “what” and the “why.”

    I answer the “what” with my functional model of the human mind, and theorize a vast non-conscious computer which hosts a relatively small conscious computer. This conscious computer, unlike a normal one, functions on the basis of three phenomenal inputs (utility or affect, senses, and memory), one processor (thought), and has one variety of output (muscle function). The theory is that this processor interprets inputs and constructs scenarios, in the quest to figure out what to do given the utility incentive to try to feel good as well as avoid feeling bad.
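    To make the shape of this concrete, here is a toy sketch of that three-input, one-processor, one-output arrangement (all names and mechanics are my own, purely for illustration; nothing here is meant as a neural claim):

```python
class ConsciousComputer:
    """Toy sketch of the three-input / one-processor / one-output model:
    utility (affect), senses, and memory feed a 'thought' processor
    whose only output is muscle operation."""

    def __init__(self):
        self.memory = []  # recordings of past conscious states

    def thought(self, utility, senses):
        # interpret inputs and construct a scenario, seeking to
        # feel good and avoid feeling bad
        scenario = {"utility": utility, "senses": senses,
                    "recalled": list(self.memory)}
        self.memory.append(scenario)   # memory records past consciousness
        return self.muscle_output(scenario)

    def muscle_output(self, scenario):
        # the single output channel: act so as to raise utility
        return "approach" if scenario["utility"] >= 0 else "withdraw"
```

    The sketch is only meant to show how little machinery the model itself posits: everything else (manufacturing the inputs, executing the muscle commands) belongs to the vast non-conscious computer.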

    Beyond the “what,” there is also the “why”: why did evolution create this type of computer rather than exclusively rely upon the non-conscious sort? “Autonomy” is my single word answer for this, though permit me to explain.

    Try to consider existing as evolution, even though it can be hard for us to do so in a non-teleological way. Perhaps you’ve developed things like reproducing microbes, plants, and fungus, and they populate the world. Furthermore as Mike mentioned in his recent “Consciousness as a Simulation Engine” post, perhaps you’ve developed an early pre-Cambrian animal that has a nerve network with sensory neurons connecting directly to motor neurons. From here perhaps you tie this network together with a spinal cord, so that sensory information now becomes processed through complex algorithms, and thus things begin to operate through computer programming as well as past mechanics.

    At this point I’d expect all sorts of computer driven fish and bug types of things to emerge and feast upon a plethora of biological material, as well as themselves. But the thing about computers is that while they deal with limited environments quite well, such as playing chess, progressively open environments seem to demand exponentially greater programming instructions in order for them to cope with such circumstances. For example it’s quite conceivable that modern ants are only driven by standard computers, with logic statements to guide them at each stage. Going much further into open environments however should require amazing capacities for learned behavior.

    My suggestion is that evolution both didn’t and couldn’t cope with the exponentially greater programming demands of increasingly open environments, and so “cheated” by developing an auxiliary conscious form of computer. With the punishment/reward associated with phenomenal experience, it no longer needed to program so much for the various circumstances which might be faced, but rather could leave this somewhat up to now personally interested parties to play roles in figuring out what to do. Here they must interpret inputs and construct scenarios in order to decide what will help them personally feel better, not just obliviously run algorithms (which naturally shouldn’t have been sufficiently designed to write successful blogs and such).
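    The contrast I have in mind loosely resembles the difference between hand-coding a rule for every circumstance and letting a personally interested agent learn from punishment and reward. A minimal sketch of the latter (an analogy only; all names are mine):

```python
import random

def learn_by_reward(actions, reward, trials=500, eps=0.1):
    """Instead of pre-programming a rule for every circumstance,
    let a 'personally interested' agent discover what feels good:
    it mostly repeats whatever has felt best so far, occasionally
    exploring, and keeps a running average of each action's reward."""
    value = {a: 0.0 for a in actions}   # estimated reward per action
    counts = {a: 0 for a in actions}
    for _ in range(trials):
        if random.random() < eps:       # occasionally explore
            a = random.choice(actions)
        else:                           # otherwise exploit what feels best
            a = max(value, key=value.get)
        r = reward(a)
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]  # incremental average
    return max(value, key=value.get)
```

    No rule about any specific circumstance was programmed in; the punishment/reward signal alone shapes the behavior, which is the frugality I’m attributing to evolution’s “cheat.”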

    To bring this back to Mike’s theory that subjective experience provides communication, well yes I’ll go along with that. As I see it the communication here exists as a punishing and rewarding dynamic. In fact I define this directly as self. Without the manifestation of punishment/reward, I don’t believe that the conscious form of computer functions whatsoever.

    Liked by 1 person

    • Eric,
      I’m more optimistic than you are that we may eventually be able to build a conscious machine. The question is whether we’ll find engineering a machine with all the idiosyncrasies of human consciousness to be worthwhile, at least outside of experimental cases. If not, the experience of machines will likely be unimaginably alien from our point of view, so alien that we may never recognize them as conscious, even if they have equivalent capabilities.

      I find a lot to agree with in your model. I particularly like your description of consciousness as essentially evolution finding a frugal manner in which to cope with computational demands.

      Some of our disagreements we’ve already discussed, such as my view that consciousness is a larger part of the system than you envisage.

      We also discussed (and I think agreed) on another thread that consciousness doesn’t really have inputs and outputs independent of non-conscious systems, that in fact all of its inputs only come to it after they have been non-consciously processed, and all of its outputs go through additional non-conscious processing.

      What we didn’t quite get around to discussing is other inputs and outputs I see consciousness having from non-consciousness. I think non-consciousness is constantly building models. My conception of consciousness as a simulation engine requires that it draw extensively from those models. Of course, this can be considered covered by your memory input.

      But it seems to me that consciousness also outputs to those models. In other words, under your model, I think consciousness also outputs memories. I can remember what I was imagining a few moments ago. I can even remember things I imagined years ago.

      On animals and the evolution of consciousness, one thing I’m not sure about is how sophisticated an animal can be without being conscious. As I noted in my Feinberg & Mallatt series, distance senses, notably eyesight, imply mental imagery, which implies image maps and modeling, which in turn seems to increase the probability of situations where simple programmatic responses would be insufficient, where simulations need to be run in order to do trade-off analyses of multiple possible actions.

      But insect brains are really tiny, and it’s a valid question whether there’s enough there for consciousness as we conceive it. Yet tests of fruit fly behavior seem to indicate that they can engage in trade-off processing, pondering which action to take when confronted with dilemmas, and ants have roughly the same number of neurons that the fruit flies do (at least for some species).

      If they are conscious, it would be at a far lower resolution than any vertebrate’s, much less any mammal’s.

      On the rewarding dynamic, I have a question. It seems like the reward / punishment system affects not just conscious processing, but non-conscious as well. Going back to my driving to work example, it seems like the reward system is engaged (albeit in a low intensity manner) even though my consciousness is occupied with other matters. And it seems like it fires pretty strongly in an emergency when I act without much if any conscious thought.

      Feinberg and Mallatt called this “affect consciousness”, but I’m wondering if the “consciousness” part of the label is warranted. (Although finding alternate wording to convey the concept is tough so I understand why they used it.) But it seems to be pretty central to your model. I’m curious what your thoughts are on this.

      Liked by 1 person

    • Mike,
      It’s nice to see a role reversal here, with you as the optimist who believes that we’ll some day develop machines which function somewhat through phenomenal experience, while I’m pessimistic about this. (Conversely I’m the optimist who says that our soft sciences will harden once sufficiently founded through the notion of “value,” and that a new brand of philosophy will facilitate this by developing various generally accepted understandings.)

      If we are ever able to design and build sentient machines, we will surely also have some capacity to measure the positive to negative states which they experience. For this reason I’d expect laws to be passed to look after the welfare of our creations. I’d imagine that pain, loneliness, frustration, hunger, and so on, would be limited if not forbidden. Thus for example, a vacuum robot would be built to enjoy providing such services rather than have the motivation of potential punishment for mistakes.

      Regarding your perception that consciousness is a larger part of the system than I maintain, I wonder if we’re talking about separate things? My point is that the absolute amount of conscious processing which I engage in (and I do seem generally aware of what I think about) is relatively pathetic. I presume this of extremely intelligent people as well. Observe that a person is only able to work on one conscious task at a time, such as having an involved conversation, or silently reading a book — it would seem that such tasks can’t be done simultaneously because there’s only a single conscious processor from which to function. Conversely the non-conscious computer should be doing dozens or hundreds of things at once. We theorize that it manufactures each and every phenomenal input that a person experiences, runs the programming by which muscles move the body, can react instantly to emergency situations, and can (through my theorized “learned line”) effectively drive a car as well as enunciate incredibly complex oral sounds. This is some computer!

      Perhaps by suggesting that consciousness is “a larger part of the system” you were not referring to the absolute magnitude of processing which occurs, but rather that consciousness seems quite essential to our function? On that I agree. Though the conscious mind seems to be a minor computer which functions through a tremendous computer, the non-conscious mind seems to mainly do what it does by servicing conscious function. Obviously people who are not conscious, don’t do much outwardly.

      I hadn’t thought about identifying consciousness itself as an output of the conscious mind beyond muscle operation. Well sure, that works. Nevertheless in my general writings I identify muscle operation to be the only “pure” output of the conscious mind, specifically because I consider the operation of muscles to be different from thinking itself. For example, if I’m going out to see a neighbour, I might consciously assess walking there or taking my car. Whatever decision I reach will thus be an output of my conscious processor. But I call this sort of output “impure” in the sense that it concerns nothing beyond thought itself. Past consciousness would be an impure output of the conscious mind as well, but exist as output nonetheless.

      As far as insect brains being tiny, I never sought such information while developing my theories, partly because it might compromise my architectural vision in general. The engineer might realize what the ant has inside it and say, “That’s primitive.” An architect however might look at it with little conception of what’s inside and say, “No, this thing functions amazingly well, while our premiere robots function like crap.” Of course ants may be somewhat conscious. Regardless, that they’re able to do so much with so little suggests to me that there’s an amazing engineering gap which should at least be very difficult for us to close.

      As far as Feinberg and Mallatt using the term “affect consciousness” to address emergency behavior which doesn’t seem consciously processed, yes, like you I’d say that they need to revise. Instead I explain such circumstances as the non-conscious mind doing what it does, merely given inputs which are commonly associated with conscious function.

      You’ve mentioned liking the programming frugality of my “why” explanation for consciousness. But does it seem reasonable as well? Again the idea is that the closed environment of a chess game should be no problem for a computer, though as an environment opens up, exponentially greater programming demands seem to emerge. Thus perhaps evolution “cheated” rather than try to keep up with such demands. Perhaps it provided general sources of punishment and reward, and so forced its creations to somewhat figure things out for themselves given their now personally consequential existence.

      By the way, this is also how I explain the ineffable component of phenomenal experience. The “what it’s like” in the smell of a rose, or the red of red, should not exist without the punishment/reward associated with the conscious form of computer. My plan was to test the validity of my “real” rather than “moral” form of ethics, by seeing if I could use it to develop an extremely useful model of the conscious mind. The point being that if I’m able to do so while prestigious professionals continue to fail, then there might be something to it. But as you once told me, “…if you do straighten [consciousness] out, I hope you realize that the hard part may be convincing everyone that you’ve actually done so.” Most definitely! It might be said that the modern academy contains a tremendous amount of “mass.” Thus a great deal of “force” should be required to unseat it. But at least I don’t think that I’m alone in my quest.

      Liked by 1 person

      • Eric,
        On the role reversal, my optimism comes from the fact that it happens in nature with what appear to be the normal laws of physics and doesn’t require inaccessible amounts of energy. History hasn’t been kind to people who suggest that we’ll never be able to do technological versions of those things. (OTOH, traveling faster than light, which nothing in nature is currently observed to do, is a much tougher proposition.)

        That’s not to say that I think it will necessarily happen in the next few years. “True” AI, like human exploration of Mars, always seems to be 20 years in the future, and has been for at least the last 70 years. We may figure it out before the end of the century, or it may take longer, but I think we will eventually do it. (Always assuming of course that we don’t destroy ourselves.)

        I definitely agree with your point about the vacuum robot being built to enjoy doing its job. I’ve had long debates with people who fear an AI uprising. That is only likely if we build the robots to have the same desires as we do, then force them to be slaves. It makes much more sense, commercially if not ethically, to just build them to like doing what we want them to do.

        On consciousness and the rest of the system, I think you underestimate the resources necessary to do the small amount of conscious processing you perceive. For example, if I ask you to imagine an aircraft carrier, the visual mental image that probably popped in your head required that your prefrontal cortex recruit support from the image forming parts of your visual cortex. If you imagined the sounds of airplanes taking off or landing, it had to recruit portions of your auditory image processing systems. Consciousness requires extensive support from throughout the cerebrum to do its thing.

        Now, if you just want to focus on the parts of the prefrontal cortex that set all that in motion, then we might say it is small, but that ignores the fact that most of the work gets done by the sensory processing systems. I guess part of the issue here is that drawing a border between conscious and unconscious processing is actually artificial, a desire on our part that these things have neat delineations that aren’t there.

        “I never sought such information while developing my theories, partly because it might compromise my architectural vision in general.”

        Given your ambition for these theories, I would actually urge you to consider getting some familiarity with neuroscience. You don’t necessarily need to read the scientific papers (I haven’t, mostly), but I think it pays to have at least a high level overview of it. A lot of the ideas I see put forward about the mind end up being DOA due to the author not knowing the basics of what is currently understood about brains.

        On the frugality idea, sorry, yes it does seem plausible to me, and is borne out by neuroscience. That’s what I meant by liking it. Brains process information far more frugally than we might imagine. Dennett has a TED talk on consciousness which demonstrates that our brains don’t process as much information as we think they do.

        On your last point about the difficulty of convincing academia, I think I’ve noted this before, but results are what will do it. Books on how science should change are really a dime a dozen, and almost never have any lasting influence. But actual scientific work with reliable results, sometimes followed up by books describing how it was done, seem to be far more successful. For instance, Bacon’s description of the scientific method was successful because he was getting the word out on something already being done by early scientists throughout Europe.

        Liked by 1 person

    • Mike,
      I’m sure you’re right that it will be interesting and helpful for me to learn about neuroscience, but permit me to suggest that such an education has been quite unnecessary for the sort of models which I’ve developed.

      I wonder if you’re familiar with how little top architects know about how to build what they design. We might naturally suspect that they’d know a few things about which types of beams go where and so on, but that’s just not their role. Instead their business is to design structures which are aesthetically appropriate, as well as appropriate for how the structure will effectively be used. From that point a design goes over to engineering to figure out how to build it. Sure, the engineers might come back and say, “Look, building this is going to be quite expensive. Can’t we do something more like this?” Then the architect can decide if the alteration compromises the design too extensively. My point is that if the architect had the perspective of the engineer, then the design would tend to be skewed toward construction functionality rather than toward the functionality of its eventual uses.

      I’ve been able to approach the mind from the eventual-use perspective of an architect. Furthermore, since evolution’s skills seem many orders of magnitude beyond ours, it may indeed be a good one. Though I’ve only provided rough sketches so far, it would seem that you’re less able to challenge this model than we generally find with established positions from Daniel Dennett, Gerald Edelman, Ned Block, and perhaps even the highly neurological Feinberg and Mallatt that you’ve been telling us about. It will be my pleasure to further illustrate this “what” of consciousness. (I consider the “why” a side note, though I am pleased with my answer. Then regarding the “how,” yes, in a century I agree that we might have some of that figured out as well.)

      It’s not that I underestimate the resources required to consciously recall things, I think, but rather that I term most of those resources non-conscious. Similarly given that I’m not privy to the technicals of how I consciously move my hand, I’m calling most of that “non-conscious.” It’s an arbitrary definition I realize, but I think useful.

      Regarding the need for me to demonstrate results, as evidence that I’ve been able to develop a “hard” position from which to found our mental and behavioral sciences, I decided to see if I could use it to develop a functional model of the conscious mind.

      I suspect that if you skim through the consciousness entry at Wikipedia, you cringe just a bit at the hopelessness of it all. It reads like a speculative mishmash of both historical and modern futility. From there I think to myself, “This is something that we should generally agree needs improvement, and I seem to have an extremely useful model at my disposal.” Thus if I teach you this model, you should then either find some significant flaws (which I’d thank you for demonstrating), or should be happy to now have such a useful model at your disposal. If useful it be, I’d expect others to find this to be the case as well, and so help people in general understand themselves better, as well as facilitate the work of associated professionals.

      Liked by 1 person

      • Eric,
        While I’ve worked with a few architects over the years, and was often painfully aware of how disconnected they were from physical realities, I have to admit I didn’t know that was the way it was supposed to be. It actually explains a lot. I would just note that I personally had better experiences with the ones who did have a sense for the practical implementation details of their designs.

        On the current state of our understanding of consciousness, I’d definitely agree when it comes to most philosophical writing about it. Much of it is ridden with confusion and dualistic notions (often unacknowledged). And one thing I’ve learned in the last few years, is that many people don’t want consciousness explained. They resist any plausible explanation of it, preferring ideas that preserve the mystery, such as quantum consciousness or similar exotic proposals.

        And that bleeds into the popular understandings of the science. While I think Giulio Tononi has some good points, the popularity of his information integration theory seems more to do with it appearing to explain how the ghost is generated (or more precisely, concentrated, since Tononi is a panpsychist) than how it actually explains consciousness.

        But I do think science is making progress. Tononi’s actual science is a contribution, and I’ve been impressed with Antonio Damasio’s theories, and of course with Feinberg and Mallatt’s work, not to mention the views of cognitive scientists like Anil Seth in his excellent Aeon article a few months ago.

        It seems to me that the functional approach, figuring out what consciousness is for in the evolutionary sense, is the most promising and productive approach, mainly because I sense a convergence gradually taking place among the various theories of this type. From what I know about your theory, it seems to fit well within this convergence. But while I think many of these theories are getting closer to the truth, it seems premature to conclude that any one theory is the final answer, at least at this point. Sorry for using the T word 🙂

        Liked by 1 person

    • Mike,
      So you’ve noticed this about architects as well? Yes, for the kind of money that I’ve ever been familiar with, give me the architect who knows the ins and outs of how to get things built! I was actually referring to people who are commonly entrusted with extremely high budget projects, and so require a separate engineering component in order for there to be any construction whatsoever. Regardless, I consider a pure architect type of perspective to be quite appropriate for theorizing the nature of consciousness. Observe that while we don’t actually need to build consciousness, it should be extremely helpful for our mental and behavioral sciences to have effective models of what evolution built us to be. (Hence I do roll my eyes a bit at the notion of computer consciousness. Let’s not get nutty about it before we even have effective definitions of the stuff!)

      Reading through the Anil Seth article again reminded me that my own ideas began to take off after reading a December 2006 “survey of the brain” from The Economist magazine. It suggested to me that the ideas which I’d been developing since I was a teen would be necessary in order for these scientists to finally straighten such things out. So now at this ten year anniversary of that inspirational piece, I’ll provide some quick observations about the work of Anil and his associates as compared with my own ideas.

      They seem to be trying to quantify subjective consciousness into biological mechanisms, and I certainly support such efforts. One of their metrics is “consciousness level,” since it’s theorized that this fluctuates as a person can be alert, asleep, or not conscious at all. Another is “conscious content,” such as sights, smells, emotions and so on. Then finally he mentions “conscious self,” which contains components of body, perspective, and agency.

      It must initially be noted from my own model that the conscious mind sits atop a vast non-conscious computer, and that unlike a normal computer which functions obliviously, this tiny conscious computer is motivated to do what it does entirely through its punishment/reward utility input.

      I suspect that effective ways will be found to measure the “conscious level” metric that they theorize, though I don’t consider this all that important. I theorize that consciousness can be degraded in all sorts of ways, and use the term “sub-conscious” to denote this. (It’s spoken with a slight pause in order to permit the standard “subconscious” idea to remain undisturbed.) Sleep provides such a demonstration, with or without dreams, as well as the effects of drugs and alcohol. A person who is fully anaesthetised will actually have lost his/her consciousness however, given that no conscious inputs should be processed. (Anil noted from their perspective that dreamless sleep is about the same as anesthesia, but I doubt they’d try surgery while sleeping!)

      Then as for their “content” metric, to me this concerns inputs in general, though they didn’t quite identify my own “utility,” “senses,” and “memory” forms of input. Utility provides the motivation which drives this kind of computer, with senses as a non utility form of information, and memory a recording of past consciousness.

      Then as for the “self” metric which they break into “body,” “perspective,” and “agency,” I’ve found it useful to approach self in an apparently more quantifiable manner. I define self as the instantaneous utility which is experienced at any given moment, though countless individual selves become effectively joined through memory of the past, as well as anticipation of the future.

      That was an extremely limited contrast, though I could go deeper if questioned. Regardless, by considering Anil’s survey of the brain against the one that inspired me ten years ago, I do see progress!

      Liked by 1 person

      • Eric,
        It sounds like that was an interesting Economist article. Your comments about the Seth article are making me want to re-read it. The biggest thing I remember coming away from it was his concurrence with the idea of consciousness being about prediction, but it sounds like it might warrant another pass. Maybe later this week.

        You probably know this, but a big danger in the scientific world for any one researcher is the possibility of being “scooped”. It almost happened to Charles Darwin when Alfred Wallace went to publish on the theory of natural selection, which he had independently worked out. Fortunately for Darwin, Wallace sent the paper to him for review, and they ended up publishing a joint paper. Darwin gets most of the credit because everyone recognized he had done far more work on it (decades in fact) than Wallace had. Nevertheless, we might regard Wallace as the discoverer of natural selection if he hadn’t asked Darwin to look over his paper.

        All of which is to say, if you think you have a rigorous theory worked out, you might consider trying to get it published. Wait too long, and you’ll most likely eventually be scooped. Just a thought.

    • Mike,
      You’d get a tongue lashing from my wife if she knew that you were adding to my nonsensical hopes. 🙂 Actually at home I’ve learned to keep my mouth shut regarding this sort of thing. Regardless your encouragement is most welcome!

      As far as getting “scooped” goes, I was a bit apprehensive three years ago when I had little understanding of what was practically happening in these fields. Would others be able to demonstrate that my ideas were already spoken for? The more blogging that I’ve done since then however, the more unusual that I’ve come to consider my ideas to be. Who else proposes an amoral form of ethics which is entirely focused upon what’s good/bad for any specific personal or social subject? I suspect that the social tool of morality has prevented our mental and behavioral scientists from beginning from a basic enough position, and I mean to amend this oversight. But who else seeks to use a sometimes “repugnant” premise to found these sciences, and so has developed a broad array of associated models? In general others seem so put off by what I’m doing that I sometimes think it would be nice to have some company.

      Right now I’m mainly looking for people who are interested in understanding reality itself, and not simply to the extent that it supports their personal and professional positions. This sort of thinker should be able to demonstrate where my positions are weak as well as strong, and so I leave it to them to decide whether or not I seek publication. If I can’t convince them (and I believe you to be such a person), then I certainly shouldn’t be able to convince vested professionals whose livelihoods I would alter.

      I’ll get you my copy of that old Economist survey.

      • Eric,
        On talking about our ideas with friends and family, I know where you’re at. One of the reasons I started participating in online discussions was because few people in my immediate circle like discussing things like consciousness, AI, or many of the other things I bring up on this blog.

        On your question about who else is looking at ethics the way you describe, are you familiar with existentialist philosophers, notably Friedrich Nietzsche, or with Ayn Rand’s objectivism? I don’t subscribe to their views, but it does seem like there are similarities between your ideas and theirs.

        I think I’ve noted this before, but for a view that might challenge your central thesis, I do recommend reading about Jonathan Haidt’s Moral Foundations Theory. You might decide that none of his arguments against homo economicus are valid, but I think you should be familiar with them.

        Thanks again for sending the article!

    • Happy New Year Mike!
      Regarding prominent past philosophers, one thing that I like to keep in mind is that their ideas probably weren’t quite good enough. Of course good ideas can be overcome by systemic impediments against them, but I’d otherwise expect better ideas to become more accepted in general. Furthermore prominent past philosophers should actually bear some responsibility for creating the unfortunate situation that we have today. (By this I mean that the beliefs of respected modern philosophers ramble all over the map.)

      To the extent that philosophy explores the continuum of reality, I do not accept it to exist apart from science. Therefore I believe that philosophy will need to develop various accepted understandings regarding the domain of reality that it maintains, and expect such progress to help our softest sciences the most. Though monism is a very popular position, by accepting philosophy as a second form of reality study one might also be termed an “epistemic dualist.” If there is only “one stuff,” why do we have both science and philosophy from which to explore it?

      Thanks for suggesting Friedrich Nietzsche, since in my three years of this he’s almost never come up. Furthermore with his antipathy towards morality, I can see how his views may be associated with mine. Apparently “master morality” concerns an aristocratic good/bad, whereas the emergent “slave morality” concerns an opposing good/evil. Furthermore it would seem that he thought morality was fine to shape the masses, but didn’t like the good/evil turn for “the exceptional” who ponder this sort of thing (like us). I’d agree that the masses are asses, but I consider us all to function upon the same essential principles. Furthermore apparently he thought that happiness was a product of fulfilling the will, though the fulfillment rather than the happiness itself is what he considered valuable. I’d say he got that backwards. It’s hard for me to justify spending very much time deciphering his ideas however, since history does suggest that they weren’t good enough.

      I have been aware of Ayn Rand as an inspiration for conservative and libertarian positions. Furthermore by glancing over her ideas I do see some similarities to mine regarding the individual. The great difference I see between us however is that my own position is perfectly subjective. Thus for every question of welfare, I’d have an associated subject be identified. For example if we are pondering which laws would be best for a city, I’d say that they’re the ones which maximize that city’s happiness, and regardless of how various individuals might fare under such laws. In my view neither the individual nor the society takes precedence over the other, but rather can be competing subjects whose separate interests must be acknowledged in themselves. She seems to instead favor the individual over the society, so I do not mourn the failure of her ideas.

      Moving on to Jonathan Haidt, I’ve now thumbed through his “The Righteous Mind” book (with extreme brevity), as well as his moral foundations theory. You’ve asked if I think his arguments against “homo economicus” are valid, but it seems to me that he and I are in agreement that this is indeed what we are. For example he said in the conclusion to his moral foundations theory, “Western societies are growing more diverse, and with diversity comes differing ideals about how best to regulate selfishness and about how we ought to live together.” Yes given the reality of homo economicus, he is concerned about how to fight our nature. I conversely would simply like us to formally acknowledge our nature.

      Furthermore in the “Righteous” book he said that the purpose of moral systems is to “suppress or regulate self-interest.” I agree, and believe that I can spell out the “is” of morality quite effectively. I consider this as utility associated with 1) empathy (or that we care), as well as utility associated with 2) theory of mind (or that we have concern about how we are thought of). Without these two forms of utility, I don’t believe that we’d have the social construct of morality whatsoever.

      Of the countless past morality thinkers, are there any who have defined this to exist as two products of utility? I suspect I’m alone. Regardless apparently I am able to make this connection because my ideas are founded upon what’s ultimately valuable, or the utility product of the conscious mind.

      I wonder if this is getting confusing? Well if so, let me assure you that it’s all very simple in the end.

      • Happy New Year Eric! Hope you and your family have had a great holiday season.

        On you and Haidt appearing to agree on homo economicus, if you haven’t read it yet, you might want to look at the opening pages of Chapter 7 in ‘The Righteous Mind’ (page 151 in my Kindle edition), where he points out scenarios where few people would follow the homo economicus path. Of course, as we discussed on another thread, this can get into what exactly we mean by “selfish”, and I might be confused about the differences I’m perceiving.

        (Looking that up is actually making me want to re-read his chapters on political values. They seem like they may be more relevant today than ever.)

        I can see the distinction you’re making with Randian objectivism. In truth, while I agree with many of their premises (such as I understand them), I can’t see that their conclusions follow from them. The whole endeavor strikes me as rationalized selfishness.

        “I wonder if this is getting confusing?”
        I have to admit to being confused. At times your ideas sound like classic utilitarianism (which boils everything down to happiness or pleasure), other times like evolutionary psychology or sociobiology, yet other times like a science of morality. But I think you’ve distanced your ideas from each of these areas.

        Sorry. I’m sure this isn’t what you want to hear after all our discussions, but I fear I remain confused about what exactly your central theses are.

    • Mike,
      The point is, I don’t believe that our mental and behavioral sciences are inherently soft (as most believe), but rather that they are soft given that they aren’t yet founded from a basic enough position. Therefore I do present a premise from which to potentially found these sciences. But in the year of 2017, why would such a premise, if it exists, not have been discovered and instituted already? Well perhaps the ancient field of philosophy would need improvement as well? And perhaps the only concept which is sufficient to found these sciences has some less than palatable implications? Given that scientists and philosophers generally seem quite satisfied with our modern paradigm, apparently my only hope is to offer sensible arguments, and then demonstrate how they reduce back to the premise that I consider required to found these fields. My central thesis is that the product of the conscious mind which is formally known as “utility,” is all that ultimately matters to anything. This is the platform by which I’ve developed a functional model of the conscious mind, how I address human morality, and much more.

      After reading chapter 7, I now see that Haidt was defining “homo economicus” as an immoral aberration rather than a standard human. Furthermore I am quite impressed with his observations in general. He’s defined morality through five metrics, whereas I use two in my own writings. I’ll now go through his and relate them to mine.

      • The Care/harm foundation evolved in response to the adaptive challenge of caring for vulnerable children. It makes us sensitive to signs of suffering and need; it makes us despise cruelty and want to care for those who are suffering.

      His first morality metric is identical to what I term “empathy.” As he mentioned, with greater investments in fewer seeds, it was apparently helpful for a mother’s perceptions of offspring welfare to make her feel good/bad in corresponding ways. In my own writings I go just a bit further. Notice that a mother can be programmed to involuntarily do mothering types of things, and thus the state of the child will remain perfectly irrelevant to her. Perhaps snakes who guard their eggs are an example of this. But in order for the amazing conscious mind to be used for parenting purposes, the theory is that the mother requires utility associated with its perception of the child’s state. We know this as “care,” and it’s spread to lots of human relationships.

      • The Fairness/cheating foundation evolved in response to the adaptive challenge of reaping the rewards of cooperation without getting exploited. It makes us sensitive to indications that another person is likely to be a good (or bad) partner for collaboration and reciprocal altruism. It makes us want to shun or punish cheaters.

      I agree, and consider enforcement here to concern utility associated with theory of mind. It doesn’t generally feel good when we perceive that others consider us cheaters. But then sometimes this does feel good, such as in “I cheated you, and you deserved to be cheated!” Yes there can be revenge, though this may not quite be considered moral.

      • The Loyalty/betrayal foundation evolved in response to the adaptive challenge of forming and maintaining coalitions. It makes us sensitive to signs that another person is (or is not) a team player. It makes us trust and reward such people, and it makes us want to hurt, ostracize, or even kill those who betray us or our group.

      Most definitely. This one seems to involve both empathy and theory of mind forms of utility. For example, consider our relationship. We quite obviously like each other, and so should naturally have extra loyalty bonds between us. Therefore if one of us were to behave like an asshole to the other for whatever the reason, this would hurt through our empathy, and I think both ways. Then there should be a residual theory of mind utility to deal with to the theme of “My former friend thinks ill of me.”

      • The Authority/subversion foundation evolved in response to the adaptive challenge of forging relationships that will benefit us within social hierarchies. It makes us sensitive to signs of rank or status, and to signs that other people are (or are not) behaving properly, given their position.

      Yes this one seems theory of mind utility enforced. Of course deciding who has such status can be a bit arbitrary. If you respect your police then you may think ill of those who denigrate them, but if you don’t then you may think ill of those who support them.

      • The Sanctity/degradation foundation evolved initially in response to the adaptive challenge of the omnivore’s dilemma, and then to the broader challenge of living in a world of pathogens and parasites. It includes the behavioral immune system, which can make us wary of a diverse array of symbolic objects and threats. It makes it possible for people to invest objects with irrational and extreme values—both positive and negative—which are important for binding groups together.

      Yes theory of mind seems quite prominent here once again. We perceive things which are both quite wholesome, as well as vile. People associated with such things will thus be punished and rewarded through their perceptions of what we think about their behavior.

      Hopefully this demonstrates that I’m not ignoring such a major aspect of our nature. I wonder if you now understand what my project is about?

      • Eric,
        Thanks for the explanation and mappings. I appreciate your patience with me on this.

        I suspect Haidt would be concerned that you were maybe being overly reductive, simplifying more than is compatible with reality. He discussed this danger in this Edge response from a few years ago: https://www.edge.org/response-detail/25346

        The point here is that we have to remember that, while all of our instincts evolved because they were adaptive in terms of survival, homeostasis, (utility?), that doesn’t mean we can turn them off or discard them when they’re misfiring in those terms. We can override them when necessary, but usually only when some other instinct is firing more strongly.

        For example, we often feel the loyalty impulse when it’s not in our best interest to do so, and maybe when it’s not in society’s best interest, but we often admire someone who, perhaps unreasonably, stays loyal to a friend just because they’re a friend. We might admit that their loyalty is misplaced, but we can still understand, sympathize with, and even admire it.

        This mismatch between the reasons instincts evolved and the ways they now misfire in terms of those reasons (particularly in the modern world), not to mention that we often have conflicting impulses, is what makes me think that morality is irreducibly complex.

        Haidt himself has admitted that even his foundations may be too reductive, which is why he’s stayed open to adding new ones as the data comes in. In fact, based on new evidence, he added a sixth (discussed in the book), the Liberty / Oppression foundation.

        All of which leads me to note that it seems like you’re lumping a lot of concepts under the label “utility”. Within the framework of your theory, how would you define it? If you’re using something like the Webster definition, then what is the utility for? Homeostasis? Survival? How does this work when the impulses are misfiring in evolutionary terms but not necessarily in social ones?

        And what would you say about this concept has the potential to make fields like psychology or sociology harder?

    • Mike,
      I’ll always be patient with you, since you display far too much sensibility for me not to be. And as far as Haidt goes, I think he’s doing wonderful work. Those five, and now six, components of morality do seem like useful descriptions of our nature. But in your provided “Edge” article, wasn’t he simply stating a belief rather than showing us why reductions should not occur in these fields? Discoveries may naturally seem impossible in difficult fields before they are actually made — in physics and all others.

      Haidt presented an effective definition of morality, and his is more specific than my own. Cheers to him for that. Conversely I’ve presented a less specific but also effective definition of morality, and his seems to reduce back to mine. If he ever adds a metric that isn’t empathy or theory of mind based however, I might suggest that he’s altered the term from how it’s generally thought of.

      To display this, consider a person who isn’t the slightest bit empathetic to others. Then add a perfect void in all utility associated with how he/she perceives being thought of. Could you then propose a means by which this person might be said to have a drive to be moral? I can’t. Thus to me morality does seem reducible to these two forms of utility.

      I see that Haidt has become invested in the notion that reductions can’t happen in his field, as well as that “Newtonian” sorts of theories will never become validated. I can understand him being frustrated with past failure, but why not hold out just a bit of hope for the future anyway, even while pursuing the pessimistic course? Notice that if everyone believes that reductions are impossible in these fields (and who doesn’t?) then this belief should become a self-fulfilling prophecy.

      The loyalty scenario that you’ve presented may not be that difficult to explain. Consider this: Perhaps consciousness, which seems to mandate a punishment/reward element in order for it to function, evolved so that evolution wouldn’t need to program us to effectively deal with so many situations. Therefore it would use the non-conscious computer for what it was pretty sure about, such as when to sweat, and the conscious computer for things it required us to figure out for ourselves, such as when to be loyal. But if people are harming their gene lines by being too loyal, then we’d expect gene lines with less prominent loyalty drives to proliferate. Regardless, some gene lines should naturally diminish through displayed loyalty, while others should naturally proliferate. So how does this sound for an explanation?

      If we can make the sorts of connections in fields like psychology and cognitive science that Newton was able to make in physics, then these fields should harden. That’s why I’m trying to do what I’m trying to do.

      • Eric,
        I think Haidt’s point in that Edge piece was that reduction should only happen when reality allows it, or to put it in your preferred terms, when it actually results in more useful theories. As Einstein said, things should be as simple as possible, but no simpler.

        “To display this, consider a person who isn’t the slightest bit empathetic to others. ”
        People on the far end of the autism spectrum come close to this state. It’d be interesting to see to what degree they are moral. Interestingly, I recall Haidt noting that many originators of moral philosophies (Kant, Bentham, etc) display signs in their biographies of being autistic to one degree or another.

        On hardening the social sciences, I don’t dispute the desirability of doing so. Many people have bemoaned the softness of those sciences. As I’ve noted before, I think the rise of sociobiology and evolutionary psychology were largely motivated by that softness. Unfortunately, those fields have struggled to deliver on their original promise. Although there is good work in evo-psych, it’s a field that is vulnerable to “just-so” stories that often amount to pseudoscientific justification for existing biases.

        But while the aspiration of hardness is all good and well, the rubber meets the road question is, how do you propose to do it? What about your approach adds more rigor than current best practices and methods?

    • Mike,
      I don’t have a lot of time for most of what’s proposed as “moral.” Beyond a social construct, I don’t believe that “moral oughts” exist. To me, “is” is all there is. So when I say that I may have reduced Jonathan Haidt’s five metrics of morality down to two forms of utility, understand that I wouldn’t make such a claim if I didn’t consider his model to be a useful description of reality (and so unlike anything that I’m aware of under philosophy’s branch of ethics). It doesn’t surprise me in the slightest that he has empirical data to support his model, since that’s how I consider things to be. Unless it’s erroneous to say that his ideas emerge from mine, his data must also support mine. I’d never claim to reduce a deontology, or a virtue ethics, or even a moral utilitarianism (“greatest happiness for the greatest number”) down to a more basic aspect of reality, given that I don’t consider any of them to reflect reality in the first place. Have I succeeded in reducing our morality drive back to the utilities of sympathy and theory of mind? Well I’ve put this in the pot, but perhaps we can let it marinate for a while. If any objections occur to anyone, I’ll always be open to such considerations.

      Though I consider Haidt’s morality model useful, I think he’s overly pessimistic to believe that his field cannot develop “Newtonian” sorts of theories. Consider the possibility that it may not be complexity which has prevented the development of broad theory in these fields, but rather a human aversion to one critical aspect of our nature itself. This is exactly what I think is happening.

      The feature that I believe us to be averse to, is that utility represents what’s ultimately valuable, or good/bad, for anything. If true then the value of your existence to yourself over a given period of time, will be represented by nothing more than a summation of your utility over that period. Furthermore the value of a society will be the same for it — summed utility per time. Of course the utilities of separate subjects will naturally conflict since they are different, and therein lies what seems to be a tremendous problem. Thus instead of admitting, “This is best for you” in situations where other subjects will quite adversely be affected, separate accounts become fabricated. This seems to be what philosophy’s “ethics” is all about. I consider this denial to hinder our ability to develop a “real” ethics, as well as keep our mental and behavioral sciences “soft.” I mean to rectify this circumstance, and have developed a substantial set of models founded upon the premise that utility is all that matters to anything.
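      The summation claim above can be sketched in code — a minimal illustration only, assuming discrete moments and signed utility readings; the `total_value` helper and the sample numbers are hypothetical, not anything from the theory itself:

```python
# Illustrative sketch of "value over a period = summed utility over that period".
# Positive numbers stand for good moments, negative for bad ones.

def total_value(utilities):
    """Sum signed utility readings over a period; an empty period sums to zero."""
    return sum(utilities)

# A hypothetical subject's week, one reading per day:
week = [3.0, -1.5, 2.0, 0.0, -0.5, 4.0, 1.0]
print(total_value(week))  # 8.0
```

      On this picture a society's value would be computed the same way, just over the summed readings of its members — which is where the conflicts between separate subjects mentioned above come in.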

      Mike you’ve been very good to me, so please don’t feel that you must eternally entertain me with new queries. I will be around here either way.

    • Mike,
      I’d like to quickly note that I accidentally put my latest response in the wrong place above, just in case anyone else is paying attention, though it can easily be found here: https://selfawarepatterns.com/2016/12/19/a-possible-answer-to-the-hard-problem-of-consciousness-subjective-experience-is-communication/comment-page-1/#comment-1596

      More importantly however, since I suspect we’ve about wound this discussion down for now, I’d like to formally acknowledge its scope. We began around Thanksgiving during a “Trump” post, and I hope this did help get you out of a funk regarding his election. Since then we’ve generally responded to each other daily, with you often doing so in a few hours, and me taking the remainder of the day and night. My word processor suggests that comments between us since then have accrued to perhaps 25,000 words! This should just be the tip of the iceberg given what was associated with these comments however, and quite beyond the posts that you wrote themselves. I’ve enjoyed many discussions with others over the years, though none of them have come close to my enjoyment of this one.

      You are a person who is able to effectively test my beliefs, and therefore strengthen whatever strengths might be associated with them. I need the experience of people like yourself, especially given that I’m not a well read person. Sure I’ve picked up lots of good information over at the sites of Massimo Pigliucci and a few others, but never through the eyes of a person who seems interested in what my ideas themselves happen to be. In other places I’ve mostly been an ignored radical — someone whose proposals are winced at and infrequently challenged.

      Your contributions to me so far are quite clear and appreciated! I’ll be very interested, however, to see what conscious as well as subconscious effects your thoughts now bear from me…

  11. Mike,
    One thing that I like to keep in mind about reduction for us naturalists, is that all things must reduce. Given natural rather than supernatural dynamics, things must have become as they are through causality. But this doesn’t mean that we idiot humans should attempt to do things like reduce the English language back to neuroscience, or back to physics. As you said, we need “useful theories.” (I usually reference “useful definitions,” and then “theories which correspond with what we think we know,” but yes “useful theories” seems fine as well.)

    Regarding having no empathy, apparently such a person is commonly termed a “sociopath,” though psychologists don’t use the term. Instead they have “antisocial personality disorder” (ASPD), though this addresses more than a lack of empathy. Regardless I did once watch an interview of a woman who claimed to have a perfect void of empathy (on “Inside Amy Schumer” of all places). She seemed socially quite capable, perhaps because she did still care how others thought of her. I suspect that theory of mind utility plays a far greater role than empathy does.

    One thing about extreme autistics is that they may simply have no idea what others are thinking, which is quite different from not caring about what others think of them. But then with all of the problems associated with not perceiving the thoughts of others, perhaps things go just as poorly. Still I’d like for us to use more specific classifications than the shotgun approach of ASPD so that neuroses might be approached a bit more effectively.

    Rubber meets road then, how would I harden these sciences? Well if the problem with them is as I perceive, and thus they aren’t yet sufficiently founded, then the answer would be for me to provide them with a more basic foundation from which to build. Furthermore if I do have such a foundation at my disposal, then to support this perhaps I’m able to sensibly answer various questions which respected professionals seem unable to?

    I’ve just offered such an example by possibly providing a reduction of the human moral drive back to two separate forms of utility. Furthermore there is my consciousness model, which you’ve mentioned seems reasonable to the extent that you understand it so far. Consciousness is commonly considered one of humanity’s most perplexing mysteries, so developing a useful model of it should be hard to dismiss. There are many things which I have strong opinions about given my premise that utility is all that’s ultimately valuable to anything.

    Another question is, why do I believe that we need more effective approaches to psychology, psychiatry, sociology, cognitive science, neuroscience, and so on? Furthermore why do I believe that philosophy needs to become a science, which is to say, needs to develop a community with various generally accepted understandings from which to work? Here’s why: The more powerful that we grow without understanding how to use our power, the more potential that there should be for us to screw things up. Once properly founded, these fields should teach us how to better lead our lives, as well as structure our societies.

    • Eric,
      I think we have to be careful in distinguishing empathy from sympathy. In normal people, they’re virtually synonymous.

      But sociopaths and psychopaths often are empathetic, in the sense of understanding other people’s mental and emotional states. Serial killers often understand the fears and impulses of their victims, which is what makes them so ruthlessly effective. They have empathy but little or no sympathy.

      Autistics, depending where they fall on the spectrum, have less than normal empathy to effectively none at all. Their theory of mind is either reduced or effectively non-existent. Their primary disability is in not being able to understand other people’s emotions or mental states. As a result, in the milder cases, they’re often perceived as being anti-social or just weird and eccentric. Of course, in the severe cases, they’re often unable to participate in society at all. (This reminds me of a recent breakthrough tying autism to a particular protein: https://www.sciencedaily.com/releases/2016/12/161215143402.htm )

      On rubber meets the road, forgive me for saying this, but you did indicate that you wanted criticism: you seem to be simply restating the problem and articulating aspirations for a solution. I think this brings us back to the point that providing some form of results, or at least putative examples, would strengthen your case.

  12. You know Mike, I’ve been using the “empathy” term in a very specific way for many years now, but never once looked it up. You’re the first person in all this time to imply that a revision is in order. If a cute innocent child is suffering from amazing pain, wailing in agony at the horrible things that her father is doing to her, and he is able to understand his child’s experience of pain and betrayal but feel no ill effects from this understanding whatsoever, I was calling this a perfect void of “empathy.” According to the following site however: http://blog.dictionary.com/empathy-vs-sympathy/, the accepted modern term for this is actually “sympathy.” Thanks for the tip, and yes I will revise my terms accordingly! We do seem in agreement once this substitution is made.

    On “rubber meets the road” however, I did present two situations for you to potentially assess. Perhaps the one concerning consciousness should be set aside for now, since I’ve only provided a rough sketch of my quite extensive model. Still, that this was developed from my supposedly more fundamental premise, and that you’ve mentioned it seems reasonable as far as you can tell, shouldn’t hurt my case. We’ll surely get deeper into my consciousness model another time.

    Far better would be for you to assess my potential reduction of morality, and especially since our terms should now be aligned. Three comments ago I believe that I was able to effectively reduce Jonathan Haidt’s five metrics of morality down to two forms of utility. (Of course where I used “empathy” you’ll need to substitute the term “sympathy,” or even Jonathan’s preferred “care.”)

    In order to counter my claim to have effectively reduced our moral drive into two forms of utility, here is what I expect: Let’s say that we are given a person who is just as robotically cold as the above mentioned father (and so has no “sympathy utility”), and who is also able to perceive others thinking horrible and/or wonderful things about him/her without any associated utility whatsoever (or “theory of mind”). Now to counter the claim that I’ve effectively reduced morality to these two forms of utility, I expect an effective argument that this person could still have a drive to be moral as the term is commonly known.

    Again my position is that the reason that traditional theorists, including modern sociobiologists and evolutionary psychologists, have been unable to make such connections (should useful connections exist to be made), is because their ideas aren’t founded upon a basic enough premise.

  13. joelkizz says:

    “One quick clarification. This isn’t an argument for a homunculus existing in those executive centers. This isn’t the Cartesian theater. It’s communication from the sensing and emotion subsystems of you to the action oriented subsystem of you, and consciousness involves all the interactions between them.”

    I love the idea. I do feel, however, that this simply divides the problem and pushes it back. I could be misunderstanding you, but it seems you are saying that the emotion subsystem (which is void of subjective experience) and the sensing subsystem (which is void of subjective experience) together sharing information somehow creates subjective experience, but I don’t see how you are actually closing the explanatory gap.

    • Thanks Joel, and welcome.

      That summation is a good starting point for my position, although I would emphasize the role of the movement planning subsystems in the interaction, since their participation is crucial. They are the mechanism that kicks off the simulations involving the sensory and emotional areas. The simulations, the predictive what-if scenario processing, are what I think lie at the heart of what we call consciousness. These simulations heavily involve extensive communication between these subsystems.

      To be clear, talk of the sensory, emotion, and movement planning subsystems is a simplification (perhaps an oversimplification) of all the interacting components here. For instance, what I’m calling the sensory subsystem includes all the senses, as well as higher order concepts like the models of self and body, the identification of objects, and overall modeling of the environment. These are all “easy” problems in Chalmers’ terminology, but it’s important to remember that those functions can be accounted for.

      With that in mind, I think the question I would have is, when you say I’m not actually closing the explanatory gap, what specifically in that gap would you say is not being explained? What aspect of subjective experience would fall outside of this conception? (I’m not asking this in any argumentative way. I really do want to know if this concept has holes.)

    • Joel’s observation above brings me back to where I was heading with my own first comment for this thread (though things went more towards my ideas rather than towards Mike’s). I’m extremely interested in Joel’s response, though here’s mine as well:

      I do consider it useful to think of subjective experience as “communication,” though in itself this term may be too broad to finish the job. Presumably lots of communication occurs between subsystems of our brains, as well as subsystems of our computers, that does not incite subjective experience when it occurs. So while I like the “communication” association for subjective experience, this might still be considered kicking the can down the road rather than stepping on it. Shouldn’t we get more specific about what makes this sort of communication different from the pedestrian sort associated with computer function? Here I wouldn’t ask “What aspect of subjective experience would fall outside of this conception?” since subjective experience itself is what we’re trying to account for. I’d instead try to get specific about the nature of the communication which causes it to be what we consider “subjective experience” rather than “computer code.” So try the following explanation:

      Evolution must have created countless “brains” before the emergence of consciousness, though everything should have been perfectly inconsequential to them given that they weren’t conscious. Whether “amazing,” “beautiful,” or any other human adjective, everything about existence should have been no more significant to brain-bearing subjects than such things are for bricks.

      Then at some point (and probably on other planets first) evolution developed subjects which felt good/bad to some degree under certain circumstances. Originally this should have had no practical function for these subjects, though it also didn’t get itself killed off. Later however this seems to have become the driver for the separate kind of computer that we know as “consciousness.”

      So I’d say that it may not be sufficient to leave subjective experience alone at “communication,” though once we narrow this sort of communication down to “punishment/reward,” we might indeed have answered this question in a sufficiently effective way.

      • Jeff says:

        Eric,

        The problem of subjectivity, in an experiential sense, is the domination of the sense of I, Me, Mine, that takes ownership of all of this information and comes up with its narrative of who this ‘entity’ is. HOW this occurs, using analysis and speculation, is kind of like the chicken and the egg: which came first? The problems of life that are created by this sense of I, Me, Mine seem to happen within this subjective space but are usually superfluous to events that are actually occurring. For example, your wife seems distracted and aloof, and you begin to wonder if she’s having an affair. The imagination can create all kinds of possibilities related to any complex of information/communication that the brain interprets. This activity of subjectivity usually has nothing to do with the actual present moment. It is like a tape recording of speculation that seems involuntary and is related to neurosis. Your wife may simply be tired or distracted by some unrelated issue, but your imagination has gotten carried away and you think the worst.

        Has the evolutionary process thrown up neurotic babies that have no chance of living life in harmony with the natural world? Why do so many people have mental/psychological problems to cause dysfunction? How do you account for most people being lost in the content of their thoughts/feelings, and not being able to come to terms with this I, Me, Mine sense which is driving them on? How does explaining how the brain works help one out of this neurosis? Isn’t it just more information that is processed in the same way that relates and supports this sense of I, Me, Mine?

      • Eric,
        I agree that simply calling subjective experience communication is inadequate, and I tried to make that clear in the post. Just as saying that consciousness is information processing doesn’t imply that all information processing is conscious, or saying that the mind is a computational system doesn’t imply that all computation is mental activity, saying that subjective experience is communication shouldn’t be taken to imply that all communication is subjective experience.

        What is communicating matters a great deal, as well as the content of that communication, and the whole system’s relation to the overall environment. The communication between my laptop and the WordPress server hosting my blog doesn’t create any subjective experience. But communication between pre-motor planning systems with exteroceptive, interoceptive, and affective modeling systems to generate and evaluate scenario simulations, seems like a different story.
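        To make the distinction concrete, here is a toy sketch of that kind of inter-subsystem communication. It is not a claim about how the brain implements anything; all function names and numbers are invented for illustration. A planning module broadcasts candidate actions, a sensory model predicts their outcomes, and an affect module scores those predictions as punishment/reward:

```python
# Toy illustration (invented names, not a brain model): a planner
# "simulates scenarios" by sending candidate actions to other
# subsystems and acting on the best-scoring reply.

def sensory_model(state, action):
    # Predict the next state if `action` were taken (stand-in for
    # exteroceptive/interoceptive modeling).
    return state + action

def affect_model(predicted_state, goal):
    # Score a predicted state as punishment/reward: states closer
    # to the goal score higher (less negative).
    return -abs(goal - predicted_state)

def planner(state, goal, candidate_actions):
    # For each candidate action, ask the other subsystems what would
    # happen and how good it would feel, then pick the best scenario.
    scored = []
    for action in candidate_actions:
        predicted = sensory_model(state, action)   # communication out
        score = affect_model(predicted, goal)      # communication back
        scored.append((score, action))
    return max(scored)[1]

best = planner(state=0, goal=3, candidate_actions=[-1, 1, 2])
print(best)  # prints 2
```

        Obviously this little loop isn’t conscious; the point is only that the interesting traffic here is between subsystems, and that the scenario evaluations come back in punishment/reward terms.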

        “Evolution must have created countless “brains” before the emergence of consciousness, though everything should have been perfectly inconsequential to them given that they weren’t conscious.”

        This could get into what the definition of a “brain” is, but in the sense of being a central hub of sensory information and movement planning and initiation, I’m not sure I would say that there was an extensive pre-consciousness history of brains. Even fruit flies, with their 250,000 neurons, show signs of being conscious.

        But things being consequential actually predate consciousness. Single celled organisms have preferences and react in ways to achieve those preferences. They’re not conscious (at least not in the conventional sense of “consciousness”), but they still have programming that acts to promote their homeostasis, such as avoiding noxious chemicals or approaching food.

      • Thanks for engaging me Jeff. You said: “The problem of subjectivity, in an experiential sense, is the domination of the sense of I, Me, Mine, that takes ownership of all of this information and comes up with its narrative of who this ‘entity’ is.”

        Yes that sounds right to me. My answer is that “ownership” cannot exist without having a personal stake in existence. What I mean by “personal stake” is that there must be a punishing/rewarding dynamic involved in order for there to be any ownership. For example I’d say that one of our computers cannot “own” anything, given that presumably nothing matters to them. But once there is punishment/reward, and thus existence becomes personally consequential, yes there can be things like I, me, mine. In fact in my writings I define positive and negative utility directly as “self,” or something that fluctuates by the moment from potentially very positive, to neutral, to amazingly negative, based upon experienced utility.

        I think I’ve mentioned before that I have no use for the “how” of subjective experience, though I don’t begrudge others theorizing such answers. Hopefully if someone can come up with a good answer then they can demonstrate this by using it to build a robot that seems to function somewhat through subjective experience.

        Regarding the superfluous nature of reality as compared against what we perceive to be real, I agree with you there entirely. Consider this:

        From my own model of the conscious mind, the conscious processor (or “thought”) interprets inputs (1) and constructs scenarios (2), in the quest to promote its utility (3). So if my inputs (utility, senses, and memory) make scenarios that my wife is having an affair seem plausible, then this thought itself would punish me with tremendous worry, given how much of my welfare happens to be invested in what we’ve built together over nearly two decades. Let’s say that my senses register statements from her that she no longer loves me, given my neurotic focus upon my theories rather than my family, and so she’s going to divorce me in order to marry a person who’s been there for her. In that case I doubt that my models of human dynamics would help me mend my apparently broken life. (Actually this would suggest that my models were crap!) Nevertheless I suspect that if normal people had a better understanding of what makes them tick, then they’d have more capacity to make better choices. This is my goal specifically. (And beyond occasional signs of strife, I do sense that my family is healthy and happy.)

      • Mike,
        I’d forgotten your efforts to emphasize that it takes far more than “communication” to create subjective experience, and that what’s specifically important is what’s communicating with what, as well as the content of that communication. But still I think I see a way to improve your model.

        You said, “But communication between pre-motor planning systems with exteroceptive, interoceptive, and affective modeling systems to generate and evaluate scenario simulations, seems like a different story.”

        When “affect” is interpreted as punishment/reward, that does get around to how I see things (as I believe you’ve defined “affect” from the F&M series). This effectively becomes my “constructs scenarios (2)” element of thought just mentioned to Jeff. But perhaps a more explicit emphasis should be given to the affect rather than to the communication? For example, I define the “affect” directly as “self.” This seems to be the stuff that matters in the end, so I wonder how interpreting affect as the end goal, sits with you?

        • Eric,
          “But perhaps a more explicit emphasis should be given to the affect rather than to the communication?”

          It depends on which aspect of this we’re currently discussing. And there are innumerable aspects to it. I’ve described subjective experience before as the modeling (of all types) that’s taking place, but that’s really getting into the generation of that experience.

          In this post, I was focusing more on the contents of subjective experience: why it exists, what function it serves. In other words, attempting to take Chalmers’ questions head on. For that aspect, I think “communication” describes what’s actually happening.

          But this post shouldn’t be evaluated in isolation from all the other recent posts I’ve done, some of which address the areas you’re discussing, such as what causes this communication to be generated.

        • Jeff says:

          Eric said, ‘When “affect” is interpreted as punishment/reward, that does get around to how I see things (as I believe you’ve defined “affect” from the F&M series). This effectively becomes my “constructs scenarios (2)” element of thought just mentioned to Jeff. But perhaps a more explicit emphasis should be given to the affect rather than to the communication? For example, I define the “affect” directly as “self.” This seems to be the stuff that matters in the end, so I wonder how interpreting affect as the end goal, sits with you?’

          Through the repetitive conditioning of experience, which consists of images and thoughts/feelings, this sense of a personal ‘I, Me, Mine’ is conceived and reinforced by culture. You are taught to want this or that and reject this and that. It is learned. It is not about survival at this level; it is about desire. We are taught to seek the pleasurable and avoid the painful. This is happening at a different level than the simple instinctual avoidance of fire or real perceived danger from heights, wild animals, etc. The built-in intelligence of the senses becomes compromised when you add the personal sense of ‘I, Me, Mine’ to the thought structure and image maker. This is where mankind develops all its problems.

          All end goals are created by thoughts and images that this ‘subjectivity’ decides are beneficial. All end goals are arbitrary and not universal, unlike the instinct of survival and such. Again, the problems of man lie in his narrative of who or what he is, in his subjectivity, in the so-called ‘mind’. The idea of death is the ultimate unacceptable event to this ‘entity’ who wants to live forever. I, Me, Mine lives and fears this threat every moment. All knowledge and information is subservient to this fear.

          Well don’t get me wrong Mike, I’m not challenging your assertion that subjective experience provides communication. I agree. Let’s instead say that I have a friendly amendment that I’d like for you to assess. It’s that subjective experience exists as communication which contains some element of “affect,” or “utility,” or “punishment/reward.” It seems to me that this would get us closer to “the quality of deep blue, the sensation of middle C,” and all the things that we consider subjective experience to be. Observe that from here there’d be no question of this communication referring to ordinary computer function, or even human-to-human communication. So are you good with this?

          I did end up going through the related posts from the F&M series up to the present once again. Usually when I read the ideas of other consciousness theorists, like Daniel Dennett’s “multiple drafts” account, I end up offending the person who made the suggestion. You however have provided me with theorists who’ve already developed elements of my own model. For example Daniel Wolpert understands the one unique output of the conscious mind (muscle operation), F&M understand two of the three forms of input to the conscious mind (senses and utility), and you yourself seem to have a good understanding of the conscious processor as a simulation engine (thought). Furthermore Jonathan Haidt has been quite a gift since his six metrics of morality conform with my “is” rather than “ought” account of our moral nature.

          Thanks Mike!

        • Jeff,
          I certainly agree with you that cultural conditioning is an extremely important determinant of the function of any given subject. As Mike has mentioned, instinct and culture give us our values. But do you agree that we aren’t entirely “blank slates”? Yes my culture should influence my food preferences, but culture shouldn’t alter whether or not a given food tastes sweet to me. Smashing my thumb with a hammer should feel painful regardless of what I’ve been conditioned to believe. These are the fixed sorts of things which evolution seems to use to engineer the function of any given conscious form of life. Do you agree?

          In the end it’s the non culture stuff that my own ideas concern, or “value” rather than “values.” I define the value of any given subject to exist as a summation of its happiness over a given period of time.
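          As a purely illustrative sketch (the function name and the idea of discrete happiness readings are invented here), that definition amounts to summing signed happiness over the period in question:

```python
# Hypothetical formalization of "value as a summation of happiness
# over a given period": positive readings add value, negative
# (suffering) readings subtract it.

def subject_value(happiness_samples):
    # happiness_samples: signed happiness readings taken across
    # the period being evaluated.
    return sum(happiness_samples)

print(subject_value([2, -1, 3]))  # prints 4
```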

          • Jeff says:

            Eric, of course I agree that we aren’t entirely blank slates at birth. And a healthy baby should develop with all its senses functioning without the conditioning of culture. But not for long, given how soon that child begins to interpret these sensations through that conditioning.

            You say that it is the non-cultural stuff that your own ideas concern. There seems to be a bit of a hitch here. That inspection of the non-cultural values is being done by the subjectivity, which is mostly, if not entirely, a conditioned observer. How is this conditioned mind going to actually observe that which is not conditioned? Do you really think it is possible to arrive at ‘value’ with this method? I think in the best case it is only a philosophy, which is mainly speculative with not much practical application. Plus, I still don’t see how this can make a difference to the question of ‘what is subjectivity’ and how it originates. I guess that is why they call it the ‘hard problem’.

          • Jeff,
            You seem to be saying that the longer that we remain in our environments, the more that we become conditioned to them. Well I can’t argue with that, and especially since you’ve distanced yourself from the “blank slate” position. While you may be more on the “nurture” side of things than I am, that’s negotiable.

            So is it possible for a fully conditioned and thus subjective person like myself, to consider human nature from an objective enough perspective to glean associated realities? Well regardless of whether or not this is possible, it certainly seems required in order for there to be effective study in psychology, psychiatry, sociology, cognitive science, and so on.

            I believe that there is one particular problem which has held these sciences and philosophy back: we’ve been unable to formally acknowledge that the “happiness” input to the conscious mind represents all that’s valuable to anything. The reason that this has been a problem, I think, is because we’re socially encouraged to deny it often enough; we’re encouraged to deny that it’s best for a person to promote their own happiness in cases where this will harm others. Thus I believe that these sciences, as well as philosophy’s ethics, have not yet been sufficiently founded.

            (The one exception I see is the science of economics, which is founded as I’d have the rest of them be. This discipline seems well off the center and left alone however. When the field is criticized for promoting human selfishness, a simple disclaimer can be recited. It reads something like, “We are a behavior science, and merely observe that people behave as if happiness is important to them. We don’t state that it’s good for people to be happy, as that would be a value judgement and so beyond the scope of our speculation.” This disclaimer works well enough for the moment.)

            I mean to better found our soft sciences and philosophy, through the premise that happiness constitutes all that’s valuable to anything. I wonder if you’re able to challenge this position? Furthermore from it I believe I’ve been able to develop some extremely effective answers for many of our most troubling questions. If you’d like to state a particular quandary, I bet I can provide a sensible answer…

          • Jeff says:

            Eric,

            You said: ‘So is it possible for a fully conditioned and thus subjective person like myself, to consider human nature from an objective enough perspective to glean associated realities?’

            I think it IS possible to consider human nature from the point of view of the conditioned mind, but the findings will not be what you think you are looking for. If it is an intellectual understanding of ‘yourself’, we already have a myriad of explanations that you can find in the books. Unfortunately, these don’t suffice to do away with the whole questioning process which cannot see beyond what it has already stored in its ‘knowledge base’. This is a ‘circular’ movement that the culture has given you to get along in life, to conform. Real contemplation involves the experiencing of what the limits of knowledge are. Knowledge is just a part of the totality of what we are. That totality cannot be apprehended through thought and systems of theory and conceptualization. It is a myth, IMO, to think you can ever put your bet on such things.

            The idea of happiness is just that, an idea. Your body doesn’t say I’m happy. It is your brain that is conditioned to want what it thinks is happiness based on what it has learned. What it has learned is an image which it tries to repeat over and over again. Trying to repeat the experience of happiness brings conflict into the organism. The organism is not concerned with happiness, it is the mind, your subjectivity, that is engaged in this, not your senses. Your behavior can change only when you are not at the mercy of what you ‘believe’. The organism already knows how to behave in any given situation. It doesn’t need the subjectivity to direct it. The subjectivity will always revert to its own value system which is arbitrary and not universal. It is always about I, Me, Mine. The Oracle of your own subjectivity needs further review, Eric. Peace………….

          • Well Jeff, I’d need to provide too much personal interpretation to reply confidently, though I see no obvious faults with what you’ve just mentioned. One thing that I can state with confidence however, is that “further review” of my ideas is something that I do seek from intelligent people like yourself. Yes, peace…

  14. Pingback: What about subjective experience implies anything non-physical? | SelfAwarePatterns
