Two brain science podcasts worth checking out

As my longtime readers will know, I’m very interested in the mind, and my preferred way to explore it is through science, notably neuroscience and cognitive psychology, or through science-oriented philosophy. With that in mind, I want to call your attention to a couple of podcasts I’ve been following for a while.

The first is Dr. Ginger Campbell’s excellent Brain Science podcast. The posting frequency isn’t very high, but most episodes are packed with interesting information. The most common format is Campbell interviewing an author. One of the recent episodes was an interview with Jon Mallatt, one of the authors of the book that has informed many of my recent posts on consciousness. Older episodes feature neuroscientists whose work I’ve highlighted before, such as Michael Graziano and Michael Gazzaniga.

Some of the people and books that Campbell discusses do get pretty technical, but most of it seems oriented toward a science-literate layperson. Unfortunately, the older episodes are paywalled, but I’ve been impressed enough by the recent episodes to get a subscription to work my way through the archives. I feel comfortable recommending this podcast to anyone with an interest in the brain and mind.

The other podcast is Brain Matters. This is a much more hardcore “inside baseball” show that often gets very technical, to the extent that I have trouble following many of the episodes. It’s done by a group of neuroscience graduate students who most often are interviewing working neuroscientists. As a result, the subjects can get somewhat arcane, with topics such as cortical columns, aphasia, mitochondria in neurons, or the tracing of particular neural circuits. I don’t try to listen to every episode of this one, instead focusing on the ones where the title or description catches my interest.

Both of these podcasts can be subscribed to through the standard services. (My subscriptions are through iTunes and the iOS Podcast app.) Or they can just be listened to on their websites.

If you know of any similar sources, I’d love to hear about them in the comments.

59 thoughts on “Two brain science podcasts worth checking out”

  1. Thanks for the link to the Brain Matters podcast – hadn’t seen that one.

    I’ve been following Ginger Campbell’s podcast from the early days, and my bookshelves contain a few books that have been reviewed on there. The episodes are sometimes a book review, sometimes followed up later with an interview with the author. The content is great for those not working in the field. When I’ve recommended it to others, they’ve found it takes a while to get used to Ginger’s format. For example, if you like to re-listen to podcasts on an mp3 player you’ll want to trim them down to the meaty bits. But overall I have a lot to thank Ginger for. My favourite interviewee and book – still Eric Kandel & In Search of Memory.

  2. Thank you Mike, and I shall certainly explore Ginger’s site – as she so alluringly puts it in her introductory video: “The show for everyone who has a brain”. After mere seconds of pondering that necessary qualifier, I decided it was for me! 🙂

  3. Mike,
    Keeping up with Ginger Campbell’s exploits will surely help me better understand modern thought regarding the mind, so thanks! Given ongoing discussion here I did go straight for the September Jon Mallatt interview. It was nice to get a sense of his personality as a very pleasant and ambitious theorist. I found this “NPR” sort of discussion both surreal and delicious! It was good to finally get a more direct account of what he and his partner have developed. While I’m very pleased with what they’ve done, it is my hope that they’ll do it again some day through the perspective of my own models.

    One challenge I have concerns their position that distance senses, particularly vision, required the development of a conscious mind. In my view there should be no reason that completely non-conscious organisms couldn’t use light-bearing information as input for their function. Observe that we humans build robots that do exactly this: images are processed, and if the proper information occurs then further instructions are provided about what the machine is to do.
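
    To make that concrete, here’s a purely hypothetical toy sketch in Python (my own illustration, not anything F&M propose) of a machine using light-bearing input with nothing that anyone would call consciousness:

    ```python
    # Toy illustration (hypothetical): light comes in, a fixed rule fires,
    # an instruction goes out. Nothing is rewarded or punished anywhere.

    def process_image(pixels):
        # Reduce the incoming image to a single piece of information:
        # its average brightness.
        return sum(pixels) / len(pixels)

    def choose_action(brightness, threshold=0.5):
        # "If the proper information occurs then further instructions
        # are provided": a bare conditional, nothing more.
        if brightness > threshold:
            return "move toward the light"
        return "stay put"

    camera_frame = [0.2, 0.9, 0.7, 0.6]  # made-up pixel intensities
    print(choose_action(process_image(camera_frame)))  # -> "move toward the light"
    ```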

    To me the only hard problem of consciousness is figuring out how to truly punish one of our machines so that existence is horrible for it, as well as to truly reward it so that existence is wonderful for it. This may be considered the “how” of consciousness. The “what” of consciousness, as well as the “why” of it, are questions that I believe I’ve already worked out pretty well.

    1. Eric,
      I agree that Campbell’s interview of Mallatt was excellent. It was what alerted me to their book. Even before I had finished listening to it, I had already purchased the ebook edition.

      On distance senses, I think we have to make a distinction between eyesight and light sensors. F&M didn’t see light sensors as necessarily indicating consciousness. From what I recall in the book, their point is that actual high resolution, focus-able, direct-able eyesight is an evolutionarily expensive adaptation, but it’s useless without mental imagery, which is itself useless without modeling of the environment, and all of that is useless if it isn’t used as a guide to action. If they’re right, it all comes as a package and likely co-evolved together.

      I might disagree on the hard problem issue, although it may depend on what you mean by “truly punish” and “truly reward”, or more precisely what you consider the scope of those phrases to be.

      To me, the system goals, while probably crucial to triggering our intuitions of consciousness, are relatively straightforward. (Note the emphasis on “relatively” because nothing here is actually easy.) It’s all the modeling and simulations that trigger the various affect reactions, as well as the modeling of those reactions themselves, that are hard. (If they weren’t, we likely would have had self driving cars and other fully autonomous robots some time ago.)

    2. Mike,
      So this was the interview that started you down the F&M path nearly five months ago? Well I don’t blame you for getting excited about these guys, though caution is always warranted in the early days of any potential revolution. I know that you’ve rolled your eyes at at least as many invested believers as I have over the years, given that such investments tend to become the source of a subsequent inability to think straight. And how does one not become invested in exciting ideas? I know that the models which I’ve developed over the years have become quite integral to my own life. Given my understanding of this investment, I do force myself to remain as objective as I possibly can regarding potential flaws. Thus epistemology itself has become quite an obsession for me.

      The question at hand is, can something have distance input information from which to function, and not be “conscious”? Furthermore can it do so through computer imagery for modeling from which to guide action?

      If we can say that flies have eyes which are more than just light sensors (that is, eyes that bring “high resolution, focus-able, direct-able eyesight”), and so can model their surroundings for action, then according to F&M’s thesis they must be conscious. I suspect that the associated professionals would say exactly this about flies, so I presume that that’s that given their thesis.

      Nevertheless I can also imagine flies functioning in a way that we may not generally consider conscious, even though this would seem to have the F&M “consciousness” stamp of approval. It could be that the light which hits their eyes is successively run through algorithms which force them to fly away before they are smacked, and indeed, which force them to do all that they do. Here they might better be termed “biological robots” rather than “conscious life.” Existence should be no more personally consequential to such a creature than it is for my computer.

      It would seem prudent to establish a useful definition for the term “conscious” before we decide what does and doesn’t have it. (From my own such definition I actually suspect that flies are conscious, though that’s beside the point.)

      1. Eric,
        In truth, Antonio Damasio’s book sort of primed me for F&M’s ideas. (F&M reference Damasio approvingly.) But Damasio’s book focused on humans. It was F&M’s broader scope that I think made the difference, at least for me.

        F&M actually hedged somewhat on the idea of insect consciousness. They weren’t sure there was enough neural substrate there, enough breadth and depth. (According to Wikipedia, flies and ants only have about 250,000 neurons in their entire nervous system.) F&M were only confident that all vertebrates were conscious.

        That said, there is some compelling research on fruit flies that demonstrates behavior which seems to imply consciousness: weighing possible courses of action (simulations?), detectable changes in brain processing when they know they have control versus when they don’t, etc. I’m inclined to think they are conscious, albeit at a far lower resolution than human or mammalian consciousness, but perhaps not much lower than the simplest conscious vertebrates.

        The problem I see with positing them as robots is that, as far as I know, we still don’t have a robot with the range of abilities of even a simple fruit fly, one with the fly’s ability to navigate its environment, avoid dangers, find food and mates, all autonomously. Could we build such a robot with those capabilities without it having to do simulations on various courses of action, evaluating the consequences of each scenario in relation to its goals?

        Possibly, but I suspect it would require far more computational substrate than the fly uses. I think it was you who posited that consciousness may simply be a computational hack, an evolutionary shortcut to reduce the necessary processing power for the brains of organisms that would otherwise require a vast collection of instinctive algorithms to succeed.

      2. Well okay Mike, I realize that you’re quite able to figure this stuff out for yourself. I can help with a slightly different viewpoint, but no more. If you ever do find that the ideas of F&M don’t quite explain all that we need them to explain, then I think that I’ve got some pretty coherent models to consider as well. And I will keep bringing them up!

        1. Absolutely Eric!

          I actually don’t see your views and F&M’s as predominantly exclusive of each other (although specific points here or there do appear to be). There’s a lot of resonance between them. As you noted before, we agree on a lot more than we disagree.

          My inclination is to look at any theory with empirical support and see what it adds to the overall picture. Each neuroscientist, psychologist, or science oriented philosopher of mind that I’ve read has added to that picture. They all have disagreements, but each has aspects of their theory which seem right when contemplating a particular portion or aspect of the overall system. The intersections of all these theories, I think, point to a plausible reality worth learning about.

          Panksepp is spending a lot of time, at least early in the book, pointing out his disagreements with other neuroscientists (not always accurately in my view), but based on what I’ve read so far, I can see slotting his framework into the overall model painted by Damasio, F&M, and others.

        2. Mike,
          I agree that my ideas are not predominantly exclusive of F&M’s ideas, as well as other theorists that you’ve been known to admire. Actually I often seem able to recognize elements of their models functioning within my own highly comprehensive models. My last comment was actually incited somewhat out of worry that I was taking the role of “F&M attacker,” which would naturally pit you against me as their defender. I don’t want that at all.

          Though I should perhaps be a bit more coy, as well as private, I’ll lay my cards on the table right now anyway. I’ve mentioned before that I suspect that my theory could be incredibly important for humanity. Furthermore to win its success against all sorts of opposing circumstances, I believe that I’ll need a partner who happens to have the strengths that I see in you. Therefore my plan is to demonstrate the nature of my ideas to you, since I suspect that you’d join me if you had a glimpse of the potential that I see. I am happy with our progress so far, though there’s a great deal more to explain. At some point I consider the whole thing to weave together into a very coherent package.

          Regarding F&M’s theory that high resolution distance senses require consciousness, I have a specific question to ask. I don’t need a full lab study to tell me that common house flies have high resolution distance senses — otherwise swatting them should be quite easy. But I also recall Mallatt emphasizing to Campbell that he and his partner were not saying that insects were conscious. Wouldn’t house flies have to be conscious in order for their thesis to remain consistent in this regard? Otherwise these creatures would seem to be an example of non-conscious life that has high resolution distance senses.

          There’s one other question that I’ve been meaning to ask, since the interview didn’t quite answer it for me in an explicit enough way. They claim to have an answer for the hard problem of consciousness. Is their answer something like “The modeling associated with distance senses is what gives us consciousness”? Or did I miss their answer entirely?

          1. Eric,
            I enjoy our discussions, but not sure if I’m the right person for what you envisage. I’m an old IT guy who likes reading about and discussing this stuff, but whose aspirations are to be a science fiction author in retirement. You might be better off finding a credentialed scientist or philosopher to partner with.

            On flies, I actually agree. There have been recent behavioral studies of flies which seem to show them weighing options, which certainly seems like conscious deliberation. It’s possible Mallatt hadn’t seen those results yet, or was just being epistemically cautious.

            On the other hand, he and Feinberg posit a minimum number of neural layers for consciousness, and his concern may be that there isn’t enough substrate in the fly brain for that. I personally think the answer is that the fly’s consciousness is at a far lower resolution than ours. Once we make that adjustment, it seems like the substrate problem subsides, but there may be complications I’m not remembering, or aware of.

            F&M do address the hard problem in the book. They note that subjective experience is irreducible. A subject cannot experience the workings of their own neurons. They call this “auto-ontological irreducibility”. Which means that objectively reduced explanations won’t feel like the answer. They also note that an outside observer can never access the subjective experience of another system, calling this “allo-ontological irreducibility”. They admit that the subjective-objective divide can’t be closed, but they express hope that they’ve “bridged” it. IMO, I fear all they did was clarify why the hard problem may never be solved to the satisfaction of those troubled by it.

        3. Mike,
          I was expecting you to present far greater obstacles against being included in my plan than a desire to write science fiction. Involvement with me shouldn’t get in the way of that, except perhaps if we were to be amazingly successful and you chose to let it. So I’ll continue on as before.

          Also I question your claim that certified professionals would be better for me. I’m rewriting many of the rules for our weakest sciences and philosophy, and so it should be quite difficult for credentialed people to openly challenge convention to the extent that you seem able to. Philosophers tend to get angry with me when they find it difficult to challenge my assertion that humanity is in desperate need of generally accepted philosophical understandings. Mental and behavioral scientists tend to get angry with me when they find it difficult to challenge my assertion that they’ve not yet developed a basic enough platform from which to build. Jaak Panksepp may be a hopeful rebel, though in practice I haven’t noticed enough of his kind, and can’t say how much further even he’d venture from the mainstream.

          I got the sense from the Mallatt interview that F&M felt quite persecuted for theorizing that so much of life was conscious, and thus they might have not included insects for political reasons. Nevertheless this does seem to have blown a hole in their premise. They can’t just say “Yes insects have high resolution distance senses, which we theorize can only exist consciously, though they also don’t have enough neural substrate layers to be conscious.” And while insects may actually be conscious, thus preserving their theory, I presume that our robots can have high resolution distance senses without being conscious.

          Regarding the F&M work on the hard problem, thanks for the reminder. In your original post I now remember saying something like “Whatever” to it. To me the only hard problem of consciousness is answering what it is that the non-conscious mind does in order for phenomenal experience to be created. I’m not optimistic that humanity is going to answer this question soon or ever, though no answer should be believed until after something is built that seems to have it. You and I suspect that flies have phenomenal experience, which would make it pretty mundane for evolution to create. Excluding the “how” of consciousness, I believe I have pretty good answers for the “what” and the “why” of it however.

          1. Eric,
            On credentialed academics, I understand what you’re saying, but it’s a matter of being taken seriously. We live in an age where anyone can publish, but in the realm of non-fiction, being taken seriously is a far more daunting challenge. The fact is that most writing by non-credentialed people is ignored. Since most of it is by people who really don’t know what they’re talking about, I’m afraid that’s generally a rational move for the public. Even credentialed people struggle to get their views out. Your chance of making an impact goes up substantially if you can find a partner with a PhD.

            “I got the sense from the Mallatt interview that F&M felt quite persecuted for theorizing that so much of life was conscious, and thus they might have not included insects for political reasons.”

            I think something like that is right. Remember that they’re pushing against a common sentiment that only mammals, and possibly birds, are conscious, or even more skeptical sentiments that only the most intelligent mammals are conscious. I think the insect thing is them just being epistemically cautious in view of those sentiments.

            On the hard problem, I’m still attracted to the answer I floated a few posts back, that subjective experience is communication from various areas of the brain to the action planner in the prefrontal cortex. That two-way interacting communication includes sensory information from the various sensory processing regions, but also emotions from the brainstem and limbic system, and a sense of self from the brainstem and insular cortex.

            Ultimately though, it depends on what we mean by “experience”. Panksepp floats the idea of noetic and anoetic feelings (with or without “knowledge”), arguing that the upper brainstem generates anoetic feelings while the noetic variety requires the cerebrum. I’m not sure if anoetic experience is what most people mean by experience, but I’ll admit intuitions here vary.

        4. Mike,
          In the “publish or perish” academy, I actually suspect that the vast majority of what PhDs publish, is crap. Nevertheless I have sought their help, and still do. For example there is a professor in forensic psychology who happens to be a good friend. About thirteen years ago she actually inspired me to craft my theory into its modern form. While she’s privately been supportive of my radical positions, her job demands that she educate students through the standard paradigm. For the sake of her livelihood and the family that it supports, she’s quite unable to venture where I do.

          Furthermore I’ve had quite extensive private discussions with an east coast child psychologist who shares our blogging passion. We made very little headway with it however. In general I would observe apparent systemic problems, and explain them by means of a void in founding theory, while he’d deny that there was such a void (as a disciple of B. F. Skinner) and explain disarray as a result of unfortunate political nonsense. I did find our discussions quite educational and entertaining however.

          It’s not PhDs that I’m looking for specifically, but rather people of all sorts who seem able to consider our nature in reasonably objective ways. If my ideas happen to be as good as I believe them to be (and I think you’ll get a sense of this soon enough) then don’t count me out! Three hundred years is really just a blink of human existence, but consider the effect that the institution of science has had over this speedy duration. If I’m right, then our softest sciences (including philosophy) should soon begin to harden up just like the rest of them, even given how entrenched opposing forces seem right now. I believe that my name will be plastered all over this coming revolution, even if obscurely so.

          I’m starting to think that it would be useful to ditch David Chalmers’ “hard/easy” distinction regarding consciousness, and switch to a distinction that I recall our good friend Hariod Brawn use. (This was from a discussion the two of you had in your recent “hard problem” post.) Since then I’ve been thinking of consciousness in terms of “what,” “why,” and “how.”

          The “what” seems to be the critical question, and that’s the way I’d classify the speculation that you’ve just presented. To me the “why” seems like an interesting curiosity, though not nearly as important. Regardless I have an answer for each of them. Then the “how” should be the really difficult one, though an answer here should mainly just help us build conscious computers of our own.

          Your just-presented “what” answer does seem to correspond with mine. Considering subjective experience as communication to the action planner is like the way that I consider the conscious processor to interpret inputs (affect, senses, and memory) and construct scenarios (“simulations” in your terminology, I think) in order to figure out what to do in the interest of affect welfare.

          I would have liked Panksepp to use the term “belief” rather than “knowledge” in the speculation that you’ve mentioned, since we do not “know” things. But this might be interpreted as acknowledging both knowledge and affect forms of input to the conscious computer, which roughly corresponds with my model (if he considers things to be known by means of “memory”). He probably classifies my remaining form of input, or “senses,” under both knowledge and affect.

          You’re right of course that we don’t think of experience as “something without knowledge,” but you’ve got to admit that “noetic” and “anoetic” sound pretty cool! The more primitive a science happens to be, the more reason there should be for esoteric terms to be fabricated. Otherwise the masses might figure out what’s going on, and then decide “The emperor has no clothes!”

          1. Eric,
            On the hard problem and what I described above, I’m not sure yet on the “planner” label. It may imply too much functionality to that region, that it’s doing the simulations. In truth, except perhaps for a small amount of working memory, it heavily depends on the rest of the brain for its results. Visual simulations take place in the visual cortices, auditory in its cortices, emotional reactions in the limbic system, etc.

            (I’m feeling an urge to go back to my Neuroscience for Dummies book and get a refresher on the prefrontal region.)

            I think Panksepp sincerely believes his argument, that primal affects (the feelings of emotions) actually arise in the brainstem (as opposed to pre-feeling emotional programming most neuroscientists think is there). His primary evidence seems to be the hydranencephalics, children born without a cerebrum, who demonstrate the outward appearances of joy, anger, etc.

            But given that our facial reactions, except when we consciously decide to control them, are largely automatic non-conscious reactions, I’m not sure I buy it. It seems like the hydranencephalic behavior can be explained with conditioned auto-responses. Of course, I might feel very differently if I knew a hydranencephalic child, but I’m not sure our intuitions are to be trusted here.

        5. Mike,
          It’s quite responsible of you to question your theories, though I wouldn’t be alarmed that your “planner” accepts input from other areas of the brain. Furthermore even without being sure where/how it might exist, wouldn’t it be great if you were right, and thus your model was effective? I think so. In fact I present no biological mechanisms whatsoever for my own models (abstaining even from using the term “brain”). I suppose that I’ll learn some neuroscience at some point, and so interpret my extensive architecture through modern understandings in the field. Who knows, perhaps you’ll achieve a good working understanding of my models before I’m versed in modern neuroscience, and so theorize this yourself? Anyway my point is that theorizing the nature of consciousness itself, regardless of where and how various areas of the brain display it, should give humanity some extremely effective models.

          I don’t know about the brainstem, but if hydranencephalic children smile in appropriate situations, and Panksepp says that this is because they are happy while the establishment says that such children have no capacity for happiness, my own models strongly support Panksepp.

          If something positive were to happen to you, such as an unexpected windfall, here we are considering two different means by which a smile might come to your face. It could be that your conscious computer assesses this circumstance to be personally positive, which somehow causes your non-conscious computer to put a smile on your face, and from here you also become happy through your higher order brain function. Thus something which lacked higher order brain functions could not feel happy, even if an appropriate smile were displayed.

          Conversely it seems far more plausible to me that the windfall would be assessed through associated conscious scenarios, and interpreting them would cause you to be happy (through non-conscious mechanics), and then your non-conscious computer would take this happiness itself as a sign to smile. Observe that this gives the non-conscious mind a clear parameter to use for smile criteria (“happiness”), while in the first case “pre-feeling emotional programming” would need to somehow decipher that the conscious computer considers the windfall to be a positive circumstance, and so smile. That seems to be asking the non-conscious computer to do the work of consciousness, or what the p-zombie is asked to do.

          More to the heart of my models however, if hydranencephalic children experience no “affect,” then nothing should drive the function of their conscious minds, and therefore they should effectively be “vegetables.”

          1. Eric,
            I wasn’t exactly questioning my theory (although the reason I blog is often to see if others can point out convincing problems with pet theories), so much as questioning the way I was describing it.

            I think I’ve mentioned this before, but just in case, I think anyone who wants to theorize about the mind should learn basic neuroscience. I’ve found that learning the basics allows me to recognize how facile many theories of mind and/or consciousness are. I see psychologists, philosophers, technologists, and other scientists say things all the time that, if they’d had a basic knowledge of the brain, they would have known was nonsense or irrelevant.

            I’m still trying to work out my thoughts on the hydranencephalics. I wish there were more data on them. The accounts I’ve read don’t give much information on their actual behavior. They’re obviously profoundly impaired, to the extent that most neurologists seem to think they are vegetative and only give an illusional veneer of seeming consciousness.

            What seems certain is that if they do have any consciousness, it’s far more limited than any healthy mammal’s. Even adult pre-mammalian vertebrates have the ability to navigate their environment, something the hydranencephalics lack. The hydranencephalics never go beyond the capabilities of your typical newborn. (A newborn’s behavior reportedly mostly comes from their brainstem since their cerebral axons are largely still unmyelinated.)

        6. Mike,
          It’s good to hear that you were more concerned about the description of your theory than your theory itself, since I seem to incorporate it into a larger scheme of my own. Regardless I offer an associated description as well. I consider your “planner” to be part of what I call “thought,” or the processing element of the conscious mind.

          Consciousness is defined by me to exist as what thought does, or interpret inputs (senses, affects, and memories) and construct scenarios (like, “If I do this, then that should happen”) in order to determine output that will promote personal affect welfare. Thus a demonstration of any of these will be an example of consciousness from this model. It’s important to note that I theorize “affect” to drive the function of this particular kind of computer. Thus without any punishment/reward, I theorize no motivation from which to interpret, for example, visual images.
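
          Since you’re the IT guy here, perhaps a toy sketch conveys the shape of that loop better than my prose. It’s entirely hypothetical Python of my own; the actions and numbers are made up, and it claims nothing about how a brain implements any of it:

          ```python
          # Hypothetical sketch of the "conscious processor" as defined above:
          # interpret inputs (senses, affect, memory), construct scenarios,
          # and output whatever best promotes personal affect welfare.

          def construct_scenarios(available_actions, memories):
              # "If I do this, then that should happen": pair each candidate
              # action with the affect outcome remembered from similar situations.
              return {action: memories.get(action, 0) for action in available_actions}

          def conscious_step(senses, current_affect, memories):
              scenarios = construct_scenarios(senses["available_actions"], memories)
              # Choose the imagined outcome that most improves on how things
              # feel right now; with no affect at stake, nothing gets chosen.
              best = max(scenarios, key=scenarios.get)
              return best if scenarios[best] > current_affect else "do nothing"

          senses = {"available_actions": ["flee", "approach", "call out"]}
          memories = {"flee": 2, "approach": -3, "call out": 1}  # remembered outcomes
          print(conscious_step(senses, current_affect=-1, memories=memories))  # -> "flee"
          ```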

          Consider a person who is perfectly sedated. I presume that a damaged finger would incite the non-conscious computer to signal “pain” to the conscious computer, though if the thought processor doesn’t interpret this signal, there will be no pain. Similarly, transmitted input information regarding things like light, sound, and chemicals of the air/mouth, will all go for naught without the function of the conscious processor.

          If “thought” were turned back on for this person, but without any affect whatsoever, then I’d be forced to say that this person was conscious to the extent that associated images and such were so interpreted. This reasoning is tautological however. I find it most useful to define consciousness as something which is motivated through affect, and therefore without it, there will be no conscious function.

          I’m sure you’re right that neuroscience has helped you see flaws in the mind theory of various philosophers and such who display this sort of ignorance. But then I come to you plainly admitting that my models were developed with absolutely no practical understandings of “brain,” and yet you’ve not only been unable to so challenge my accounts, but they seem consistent with the work of some of the theorists that you most admire. The solution to this riddle may be that the still primitive science of neurology, will need an outside perspective to straighten it out. It’s as if neuroscientists are “engineers” in desperate need of “architecture” from which to put their talents to effective use.

          I hope that you do keep thinking about hydranencephalics, as well as do so with a clear definition of the humanly fabricated term “consciousness.” It just kills me how accepted it is that consciousness requires discovering rather than defining. I fear that the discipline of epistemology will need to be straightened out in this regard (effectively through my first such principle) in order to halt this hopeless quest and many others.

          It’s pretty clear from my own definition of consciousness that hydranencephalics, like normal human babies, display conscious behavior. If this mandates that some or all affect occurs through the brainstem, then so be it.

          1. Eric,
            I generally agree with what you say about “thought”, although I usually avoid that word as an explanation because it’s one of those words, like “experience”, that sound like something fundamental but mask underlying details. I like “simulation” or “modeling” because they evoke a possible reducible idea of what “thinking” is.

            It’s interesting that you mention pain, because pain itself requires both interoception to be aware of the signal, and affective processing to categorize it as pain. That fits with my understanding of either interoception or exteroception being necessary for there to be anything to have affects about, but it seems like you had reservations about anything other than affects being necessary for consciousness.

            On the hydranencephalics, since my last reply, I found additional information at http://hydranencephaly.com/about_hydranencephaly.htm. Apparently most hydranencephalics have an intact thalamus, the central core of the cerebrum, and many have small amounts of the lower frontal and temporal lobes. To me, this clouds any conclusions we can make from their behavior about brainstem consciousness.

            Panksepp cites decorticated animals demonstrating outward emotional reactions, but F&M note that decerebrated animals lose any capacity for operant learning or other well accepted signs of consciousness. Not sure if “decorticated” and “decerebrated” are necessarily the same thing, as the first could conceivably refer to only the cortex itself being disabled but retaining the thalamus and other sub-cortical structures. If so, then we’re back to not being able to use any outward behavior for conclusions about brainstem consciousness.

            In any case, outward emotional reactions may trigger our intuitions, but I can’t see how we can conclude there’s any consciousness there in the absence of behavior that definitely requires it. My knee jerks if hit with a rubber hammer, an outward reaction, but there’s no consciousness between my knee and spinal cord. A healthy person is conscious of it after the fact, but a paralyzed person might not be, and the reflex happens even for someone who is brain dead.

            I’m currently in chapter 3 of Panksepp’s book, although I’ve taken a break to read other stuff. He’s covering what he calls the SEEKING system, but what is commonly called the reward system. Panksepp dislikes the “reward” label. If he’s right, it’s more like a drive-to-action system. Which makes the famous cases of animals self-stimulating for this until dropping even more striking. They weren’t pleasuring themselves so much as being stuck in a kind of action-inducing loop.

        7. Mike,
          Accounting for all the details can be challenging given our separate approaches, but that should be expected. I actually do reduce my “thought” term back to your “simulations” or “modeling.” This is what I mean when I say that this conscious processor “constructs scenarios.” But in addition I define thought to “interpret inputs,” and they come in the forms of “senses,” “affects,” and “memories.”

          I appreciate your statement that “…pain itself requires both interoception to be aware of the signal, and affective processing to categorize it as pain.” This suggests an openness to something that seems difficult for me to make clear. Quantified conscious experiences, such as a given example of pain, tend to provide two separate varieties of input to the conscious processor as I define them. One of these can be referred to as “utility” or “affects,” which exclusively represents punishment/reward. This leaves the other as simple information, and I call this “senses.” Observe that a pain in your toe provides both “affect” and “sense,” since the pain suggests the location of the issue. Smells and tastes provide conscious information that commonly isn’t affect-neutral, though visual images can sometimes be very bland. There may not be much affect provided by a blank movie screen, though a pornographic scene could have a significant affect element. Regardless most every quantified experience should have both affect and sense components.

          One issue I have with using the “thought” term to represent the conscious processor, is that I expand its definition beyond how it’s generally used. We don’t generally consider the experience of pain to be an act of “thought,” though from my definition that’s the way it is. I’m not aware of anyone else who uses a conscious processor term, but I’d certainly consider adopting another.

          It’s not that I have “…reservations about anything other than affects being necessary for consciousness.” Notice that I’ve just defined senses to have no affect component to them whatsoever, yet they are still interpreted. Nevertheless I theorize that consciousness (unlike the vast majority of stuff in our heads) happens to be driven by affect.

          Perfect sedation was considered in my last comment, or a means by which the conscious processor is switched off. But now consider being perfectly “numb.” Here we can theorize visual information, sound information, chemical analysis of the air/mouth, touch — though each of them would be manifestations of thought without affect — perfect apathy. Though we may be able to conceptualize sense without affect, I theorize that the conscious processor would effectively cease its function. This should actually be somewhat testable.

          I appreciate your detective work on hydranencephalics! Thus it would seem that “brainstem consciousness” remains inconclusive for us. But this does seem to be a question that deserves earnest exploration. Apparently Jaak Panksepp considers the whole of his profession to have no use for it (though Ginger Campbell appears to be a strong supporter).

          I have a separate way of considering this however. Two comments ago I asked, if you came into an unexpected windfall, how might the smile reach your face? It could be that your non-conscious mind assesses this situation and so gives you a smile (which would suggest that it has the proficiency of a philosophical zombie). Here you’d only be happy after you consciously assess this through higher brain function, and so might amplify your smile. Or conversely it could be that you consciously assess the situation, it makes you happy given associated implications, and then given that you are happy, your non-conscious computer is activated to put a smile on your face. Thus the non-conscious computer wouldn’t need to figure “conscious things” out.

          Here the establishment might say that I’m whacking a strawman, since they would never claim that such a smile would occur through the non-conscious computer assessing this situation. But if they instead agree with the account that I’ve provided, then why not let parsimony carry further? Why insist that such a full complement of the human brain be so critical to feeling good/bad? Why argue that an animal which chooses to do some unnatural thing into exhaustion, might be caught in a feedback loop instead of doing it because it enjoys associated electrical signals? Why would F&M decide that it was so necessary for them to explicitly state that their work does not suggest that insects are conscious, when this counters their “distance senses” premise itself? To me this stuff smells of confirmation bias, as well as a lack of epistemic curiosity.

          I must emphasize that if we are to determine whether something is conscious, the first job should be to present a definition for “consciousness.” From my own, if a subject demonstrates any interpretation of the five senses, such as a visual image, then it will be conscious. Is there any reason to believe that pain exists for it? How does it act when “injured”? If the hydranencephalic actually “does” much of anything, then it would seem to be conscious, given that the human didn’t evolve to do much of anything when it isn’t conscious (or at least from my definition of “consciousness”).

          Regarding Panksepp’s various systems of function that begin with SEEKING and work down to lesser ones, I hate to admit that this does seem arbitrary. Perhaps a better example of something which is “classification worthy” would be my three varieties of input to the conscious mind? I think he should instead have left this open to say that consciousness functions on the basis of feeling good and not feeling bad. And given that most of his animals must have evolved to function in open environments rather than cages, it’s no wonder that he noticed an urge to “seek.” They were probably bored as hell! (Yes I agree that a “REWARD” title would have been more appropriate.) I still consider him to be fighting on my side however, and I think yours.

          1. Eric,
            I stand corrected on your understanding of the relationship between interoception and affects. Thanks.

            Yes, Jaak Panksepp seems to be contrarian by nature, at times emphasizing disagreements, even when they are, as I suspect, largely definitional. Whether or not anoetic feeling is conscious feeling seems like it comes down to what we require for the “conscious” label. I cut my arm the other day, but it was a while before I was aware of the pain from it, although I’m pretty sure my brain was feeling it from the beginning. Was that feeling before I was aware of it conscious feeling?

            On your smile analogy, it seems like a windfall is a high order concept, which we likely couldn’t have without a functioning cerebrum. And a smile in and of itself may not be the best indicator, since it’s often more social signalling than genuine emotion, or perhaps more accurately, a culturally created emotion.

            A better, more primal example might be getting tickled and the laughing reaction. This reaction seems like a mental reflex. We consciously feel it, but don’t invoke the reaction. We can inhibit it but only with great effort. It seems like a largely autonomous reaction independent of consciousness, albeit more complex than the knee jerk reaction, and unlike the knee jerk, somewhat inhibitable.

            Hydranencephalics reportedly laugh when tickled, but most of them have thalami. I wonder if decerebrated animals emit their version of laughter when tickled. According to F&M, those animals do struggle with the insertion of feeding tubes, pushing them away with their paws, biting at syringes and the experimenters’ hands when receiving injections, and licking the injection site. They also learn to withdraw their limbs from electrical shocks, although so do rats with their entire brain cut from the spinal cord.

            “if a subject demonstrates any interpretation of the five senses, such as a visual image, then it will be conscious.”

            The issue I would have with that definition is it seems to include things done by habit, which can usually be done with our consciousness focused elsewhere, or rapid reactions to emergency situations, which we normally only process consciously after the fact. Of course, those habits and reactions are usually only possible because of prior work by our consciousness, and although we’re usually not conscious of the act itself, if something goes wrong in the habit repetition, it usually summons our conscious attention.

            That definition also seems like it would include robots that can take in visual information and make programmatic decisions based on it. Is a face recognition system conscious? It interprets visual images and varies its output based on what it finds.

            Panksepp gives some pretty compelling reasons for preferring the “SEEKING” label to the “REWARD” one. He notes that animals which can stimulate other pleasure centers usually pause after each stimulation, perhaps savoring it. But when stimulating SEEKING, they invoke it far more frantically. All that said, I’m trying to keep conscious of the fact that I’m getting his viewpoint and that the alternate ones might have their own compelling points.

        8. Mike,
          So your assessment of Panksepp is that he doesn’t seem to work with the definitions of others and thus inflates disagreements beyond what they need to be? Well this doesn’t entirely surprise me. He did seem to have quite a chip on his shoulder in those two interviews, though Campbell’s continual agreement might have led me to not be quite as suspicious as I should have been. If true, it’s a shame. The only way forward, I think, is through earnest exploration, regardless of expected definitional inconsistencies.

          If you cut yourself but feel no pain, as I see it, no pain exists for you at that moment. This isn’t to say that the wheels aren’t in motion regarding that injury however. I’ve noticed that cuts can have a pain lag, for whatever the reason, especially from a sharp blade. I’ve never observed a solid blow to the thumb, however, bring anything but instant pain.

          On smiles, it’s my impression that even though we can control our faces mostly when we try, there is also a vast array of expressions, including frowns, sadness, curiosity, anger, and on and on, that are programmed into the human well beyond its culture. It’s not important that you believe this, though the presumption was fairly important to my point itself. Furthermore it was also important that a higher order understanding, such as the implications from an unexpected windfall, be considered rather than something like being tickled.

          My point was first about normal people such as yourself. How does a smile come to your face with such a windfall? Does the non-conscious mind figure out the implications and so put a smile on your face? Of course it doesn’t. Instead you consciously figure out the implications yourself, and then because this makes you happy, your non-conscious mind (I suspect) uses the fact of how you feel to give you a universal sign of human happiness.

          If the non-conscious mind does have such control over our facial expressions, and this seems pretty clear to me, should we lightly dismiss the appropriate expressions of subjects born with missing areas of the brain, simply because modern neuroscience suggests that they shouldn’t have the capacity to feel the way that they look like they feel? Furthermore should we also accept such understandings to doubt the potential for rich feeling complexity in various non-human subjects, when they display behavior that’s quite appropriate to having such feelings? I’m not entirely sure what modern neuroscientists believe, but in a genetic sense I don’t consider the human to be all that exceptional. I consider us just animals in the end, even given our culture.

          Regarding non-conscious function, since I consider the non-conscious computer to be perhaps over 99% of the whole, your observations here happen to fall quite in line with my own models of our function. I’ve presented the following diagram once before a good while ago, but it may be helpful here. (I still haven’t updated it however. The top box that says “The Human Mind” should read “The Human Computer,” and the conscious “Sensations” input should read “Affect.”)

          The “learned line” here exists as a conduit through which a tremendous amount of what we do, becomes taken care of through the non-conscious computer. We couldn’t drive a car, speak effective words, and countless things more if the non-conscious mind were not conditioned in this manner to serve conscious function. Furthermore note that the non-conscious senses must take in all the information that the conscious senses do, as well as many other forms of senses that the conscious computer doesn’t. The non-conscious computer can be considered a vast supercomputer upon which the conscious computer that we perceive, exists. It functions behind the scenes.

          The reason that my consciousness model isn’t open to the inclusion of the computers that we build, is because they presumably cannot be punished or rewarded, and so can only function non-consciously. If existence is ever good or bad for them rather than inconsequential however, then this would be because they process affect, or something that could potentially drive a conscious computer that’s functional. Functional or not however, any example of affect by anything, would be an example of consciousness from my definition of the term.

          Regarding Panksepp’s modes of function, I suspect that he realizes that they all reduce back to affect, and this is given that he talks so much about punishment/reward in general. If that’s the case then he and I are certainly on the same page. I’m not challenging him with the claim that these traits don’t exist in the human and certain other forms of conscious life, since they clearly do. I’m challenging him to the extent that he claims that he’s found relatively unique classifications. For example, different conscious species display different levels of “CARE,” and some seem to not have this at all. Furthermore different individuals within a given species should have different such dispositions as well. Thus I think it was ill conceived to identify these classifications in the first place. Again, for an example of what I consider “classification worthy,” I submit my three varieties of input to the conscious computer — senses, affect, and memory.

          I certainly appreciate your patience Mike! If this stuff were simple then it would have all been solved a long time ago. In that case I’m sure that each of us would have other mysteries to explore.

          1. Eric,
            I wouldn’t put too much stock in Campbell’s stance. She has a tendency to be polite with her guests and to present their arguments in the best light. That isn’t to say she might not be on board with Panksepp’s views, but I’d be cautious in assuming it just from her demeanor in the discussion.

            The problem with facial expressions is that the control of what we eventually see is influenced from multiple levels. The lower reflexive layers will send their reaction if not inhibited by the higher cerebral layers. But if the higher layers do inhibit, then we say they are in control. In reality, in healthy people, it’s a complex collaboration between the layers. But if only the lower layer is there, then its output will be the only thing visible.

            Panksepp notes that decorticated animals are more emotional than those with their cortex intact, indicating that the cortex, presumably the prefrontal cortex in particular, has an inhibitory role when it comes to the expression of emotion. The question is whether those emotional displays are possible without consciousness, and I still tend to think it’s all in what we want to call “conscious”.

            “The reason that my consciousness model isn’t open to the inclusion of the computers that we build, is because they presumably cannot be punished or rewarded,”

            What would you say the difference is between a biological punish / reward and a robot’s movement planning functionality receiving directives from its goals functionality? Specifically, what is meant by “punishment” and “reward”?

            I don’t know if I’ve listed this anywhere else in the thread, but Panksepp posits these primal emotions: SEEKING, FEAR, RAGE, LUST, CARE, PANIC/GRIEF, and PLAY. He’s resistant to reducing everything to just positive and negative valences (which F&M present as the more common stance). He does admit that animals like and will stimulate themselves to receive SEEKING, LUST, CARE, and PLAY, but dislike FEAR, RAGE, and PANIC/GRIEF. But he argues that each of these have their own circuits and neurochemical mechanisms. This may be a case of Einstein’s maxim that things should be as simple as possible, but no simpler.

            No patience required! Again, this is useful for thinking it all through.

        9. Mike,
          Regarding Campbell, anyone in her position would be expected to be polite, and if her show happens to be more about letting notable people explain their positions rather than about debate, it would be appropriate to present a guest’s views in an optimistic light. She seemed to go further to me however, and might even have stuck her neck out to support the ideas of a radical. I was interpreting this as “integrity” rather than “pandering.” I don’t know her very well yet, but I’ll be surprised and disappointed if I don’t come to further respect her integrity in the future.

          Regarding facial expressions, I certainly agree that they can be influenced on multiple levels, and that if only the lower layer becomes expressed, then this output will be all that’s visible. In the mentioned high level assessment of an unexpected windfall, I was depending upon a lower level reaction to be displayed without distortion. If happiness translates to smiles for normal people, it would seem quite a coincidence that subjects which lack higher brain components display apparent happiness in appropriate situations without feeling happy. (I’ll add one thing more. Our facial expressions seem to exist as communication to others. Therefore we should expect the lower level to be less compromised in private situations, but far more so in public where it may be helpful to not let others know how we feel.)

          “The question is whether those emotional displays are possible without consciousness, and I still tend to think it’s all in what we want to call “conscious”.”

          I agree. But then if that’s the case, “whether emotional displays are possible without consciousness” is not a question, but rather a definition. Anyone can define the term “consciousness” one way or the other. Thus the imperative question would seem to be, “Have useful definitions been developed?”

          How do I define punishment and reward? If my thumb is slammed with a hammer so that I feel associated pain, I can say that from this experience, I personally understand what I’m referring to as “punishing existence.” It’s stuff like that. Furthermore if my family is occupied with their own pursuits on a beautiful weekend day, and so I’m able to get out to my lawn chair with a cocktail and write to a good friend about this exact sort of thing, then I also know rewarding existence. One feels really bad, and the other feels really good. I’m quite sure that you’re able to relate.

          So now given this conception of punishment and reward, how does it come to exist? Unfortunately I can’t provide blueprints for manufacturing something that experiences this sort of thing. My theory however is that a normal computer functioning through logic statements, will not be sufficient. This kind of computer presumably functions through personally irrelevant dynamics, while we’re talking about personally relevant dynamics. But apparently evolution took normal computers, things that feel no punishment/reward, and figured out a way to produce punishing/rewarding existence through those computers. Given this achievement, it would seem that it created a second kind of computer that has a personal interest in promoting its own happiness. Apparently this kind of computer was so successful that today the most advanced forms of life incorporate “consciousness,” perhaps going back to insects (as you and I suspect). From the above definition, if a fly experiences punishing/rewarding existence, (a concept that I think we both understand) then it has a conscious element to it.

          How does this explanation sound to you Mike? Does it fit together given my associated definitions?

          Liked by 1 person

          1. Eric,
            On the facial expressions, one dynamic I'm still wrapping my head around is how a high order concept, like a windfall, gets translated into something the upper brainstem can react to. Based on Panksepp's writing, the translation layer seems to be the limbic system, the sub-cortical structures above and around the thalamus, such as the amygdala (which is often misidentified as the source of emotions such as fear), that connect and "translate" between the cerebral layers and the upper brainstem. It's probably also why we feel emotions when we're imagining (simulating) scenarios.

            “If my thumb is slammed with a hammer so that I feel associated pain, I can say that from this experience, I personally understand what I’m referring to as “punishing existence.””

            Okay, let me attempt to reduce this using standard neuroscience. A hammer impacts your thumb, which excites the sensory neurons in the thumb, causing an electrochemical signal to your spinal cord, which computes a response and sends back a reflexive withdrawal reaction to your bicep and shoulder muscles. Do we have punishment yet?

            In an intact CNS, the spinal cord signals up to the brainstem, which is also receiving primitive versions of sight and hearing, but not smell. The influx of sensory signals causes certain chemical releases that change heart rate and other physiological states, and maybe further primal physical reactions such as, perhaps, crying out and/or facial grimacing. Have we reached punishment at this point?

            The brainstem signals up through the thalamus to the cingulate cortex and neocortex, which model the incoming information from the brainstem, perhaps comparing it to more ideal circumstances and registering a significant discrepancy. I think we'd agree that there is definitely punishment, or a negative affect, by this point. But is this too belated a stage to make that statement?

            It seems to me, that for there to be an experience, there must be an experienced and an experiencer. For there to be a feeling, there must be a felt, and a feeler. And for there to be a punishment, there must be a punished and a punisher. Where does the punished exist? Where does the punisher exist?

            It seems unlikely that either one exists in the spinal cord. I could see the punisher existing in the brainstem, maybe. But the punished? It seems like, to be the punished, requires some ability to model the punishment, the feeling, the experience.

            Now, it could be that there is both a punisher and a primitive punished in the brainstem, and a more sophisticated version of both in the cerebrum. Or maybe the whole punisher is in the brainstem with a primitive version of the punished, and a more sophisticated version of the punished higher up. I don’t know.

            But now, imagine a robot. It has a prehensile limb, which sustains damage. Its sensors relay information to its CPU, which determines that this is a problem for its programmed functionality. Maybe it immediately sends signals to its motors to withdraw the limb to minimize further damage. Then it models scenarios. How compromised are its goals? Should it withdraw for maintenance? Each scenario is evaluated to determine which action should be undertaken.

            Is the robot experiencing punishment? Why or why not? Does it make a difference if the robot’s goals are to continue functioning, finding energy to preserve that functioning, and building new robots, in essence reproducing?
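
            To make the thought experiment a bit more concrete, here's a toy sketch in Python; the sensor reading, damage threshold, and option scores are all made up, and the only question is whether anything in this little loop amounts to punishment:

            ```python
            # A toy sketch of the robot scenario above. The sensor values, damage
            # threshold, and option scores are invented; the point is only to ask
            # at which line, if any, "punishment" would enter the picture.

            def sense_limb_damage():
                # Pretend sensor reading: fraction of the limb that is damaged.
                return 0.4

            def withdraw_limb():
                print("Withdrawing limb to minimize further damage")

            def evaluate_scenarios(damage):
                # Crude "scenario modeling": score each option by how well it
                # preserves the robot's programmed functionality.
                options = {
                    "continue_task": 1.0 - damage,      # damage compromises the task
                    "withdraw_for_maintenance": 0.8,    # safe, but the task is delayed
                }
                return max(options, key=options.get)

            damage = sense_limb_damage()
            if damage > 0.2:                # reflex-like immediate reaction
                withdraw_limb()
            print("Chosen action:", evaluate_scenarios(damage))
            ```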

            Liked by 1 person

        10. Mike,
          On the upper level to lower level translation, that sounds fine to me. I still don't know much about our neurology, though I am happy that you're getting this worked out. Here you did mention something that I'd like to expand upon however. You said, "It's probably also why we feel emotions when we're imagining (simulating) scenarios." I don't know the neural mechanics behind those specific emotions, but I do consider them to be extremely important. In fact in my basic theory (well before consciousness) I reduced them back into two forms.

          I consider "self" to only function on the basis of the present rather than the past or future. Still, in practice we do carry some representations of the past to the present by means of our memories. Furthermore the present self also anticipates what might happen in the future (simulated scenarios), and as you've mentioned, this tends to bring emotions that presumably help us decide what to do. I've reduced these emotions to hope and worry. If you think about your emotions when you simulate what might happen in the future, can you come up with any other than that which is hopeful or worrying? We love it when we're driven by our hopes, of course, and hate it when we're driven by our worries.

          Regarding your neurological reduction of my thumb getting smashed scenario, let me begin by presenting my own "architectural" model of what happens. Unlike a cut, which may have a delay, the hammered thumb seems to bring instant pain. As I define it, the punishment here exists as the pain itself: a very strong input, or incitement, for my conscious mind to figure out how to deal with this circumstance. Beyond an injection of novocaine, I probably couldn't do much about it other than try to console myself that it will probably be over soon.

          Given this pain I’d expect to automatically remove my hand, yell, jump around, have a quickened heart rate, and so on. And why do I suspect that it’s the pain that would cause my non-conscious computer to automatically make me do those kinds of things? Because if this thumb had been struck while it was quite numb, then those non-conscious functions would not occur. In the human, the non-conscious computer seems to mainly service the conscious computer. Conversely for our robots the non-conscious computer must take care of most everything directly.

          Regarding your model, removing my hand, yelling out, a faster heart rate, and all such non-conscious reactions, are not what I'm referring to as "punishment." Instead this punishment refers to existence which feels negative to me. Furthermore procreation has nothing to do with it; that's a function of evolution rather than of me.

          "Where does the punished exist? Where does the punisher exist?"

          I don’t know the location of the punished any more than René Descartes did, or perhaps more accurately, I do know where the punished is, because it exists as me, but don’t know the location of anything else. I think therefore I am, and punishment is defined by me as thought. When it exists, I can’t be convinced that it doesn’t exist. Everything else may be an illusion, but not the punishment itself.

          As for the punisher, I consider this to be the non-conscious computer, or whatever it is that creates the pain for my conscious mind to experience. I theorize that punishment and reward effectively create me — otherwise existence should be just as inconsequential as it seems to be for a standard computer or a rock.

          If that's all too confusing, try this: I exist as a robot, but one that can feel good and bad, and thus I have "self." The more positive that I feel, the better existence will be for me, with the opposite being the opposite. While nothing seems to feel good/bad for anything that we build, existence does seem personally relevant to many of the things that evolution builds (or to me at least). In practice this kind of stuff seems to function somewhat on the basis of a conscious form of computer.

          Liked by 1 person

          1. Eric,
            ” If you think about your emotions when you simulate what might happen in the future, can you come up with any other than that which is hopeful or worrying? ”

            Sometimes I experience anger when simulating the future, which I’ve thought could be a type of worry, but it feels different. Mapping to Panksepp’s primal emotions, “hopeful” might map to simulations that trigger SEEKING, LUST, CARE, or PLAY, “worrying” to SEEKING, FEAR or PANIC/GRIEF, and “anger” to SEEKING, or RAGE. (The fact that SEEKING can be in all of these is making me nervous about calling it an emotion. It might be more of an intermediary mechanism triggered by the primal emotions.)
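
            Laid out as a crude mapping (these are my own tentative groupings, not anything from Panksepp), the overlap jumps out:

            ```python
            # My own tentative mapping, not Panksepp's; note SEEKING shows up everywhere.
            simulated_feeling_to_primal = {
                "hopeful":  ["SEEKING", "LUST", "CARE", "PLAY"],
                "worrying": ["SEEKING", "FEAR", "PANIC/GRIEF"],
                "anger":    ["SEEKING", "RAGE"],
            }

            common = set.intersection(*(set(v) for v in simulated_feeling_to_primal.values()))
            print(common)   # {'SEEKING'} -- the one system triggered in every case
            ```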

            I think I’ve noted it before, but your description of self seems similar to Damasio’s “core self” which he posits as existing in pulses of present. The core self is built on the proto-self, which exists in the brainstem and insular cortex. He posits a broader autobiographical self which is the core self with all the memories and future plans.

            “And why do I suspect that it’s the pain that would cause my non-conscious computer to automatically make me do those kinds of things? Because if this thumb had been struck while it was quite numb, then those non-conscious functions would not occur. ”

            It depends on the numbing drug's mechanism. A local anesthetic might certainly have that effect, since the local sensory neurons couldn't signal to the spinal cord. A typical general anesthetic would block the reflex also, mainly because most general anesthetics affect the whole CNS, including polysynaptic reflexes in the spinal cord. But if your spinal cord were severed just below the brain, you could still have the withdrawal reflex, even though your consciousness would be totally uninvolved. (Many spinal cord injury patients apparently suffer from flexor spasms.)

            “If that’s all too confusing, try this: I exist as a robot, but one that can feel good and bad, and thus I have “self.””

            I’m still interested in what “feel good” or “feel bad” mean. What do they reduce to? In other words, how would we know that a robot was “feeling” something? What does “feeling” mean? My working understanding is that emotional feeling is a model of an emotional reaction. If so, is a robot that models its own programmatic reactions “feeling”?

            Liked by 1 person

        11. Mike,
          Good point about simulations that make you angry. So no, I clearly didn’t reduce such emotions back to hope and worry. For one oddball example, the right simulations could make a person “jealous.” Furthermore the dreams we have as we sleep might be considered simulations, and all sorts of emotions seem to exist in them. Now that I think about it, the point of my theory here was not reduction, but rather that the present self is only concerned about its own instant welfare rather than its future or its past welfare, though “hope” and “worry” exist as punishment and reward which translate perceptions about the future to present self interests. (Yes I realize that you’re still not clear about what I mean by “punishment” and “reward,” so that’s still left to be worked out.)

          I need to take a harder look at Damasio, though I did go through your June 2016 post again and his provided TED talk. Unlike Jaak Panksepp he doesn't seem to discuss punishing and rewarding existence, so I suppose that he'd also question me in the way that you now are. I don't quite understand his models yet, but would you say that from them there is no sharp distinction between our computers, and the conscious computers which evolution seems to have developed? Regardless, I present the two to function in fundamentally different ways. I consider the computers that we build, as well as the non-conscious computer which makes up the vast majority of the human brain, to function such that nothing matters to them whatsoever. Conversely existence does matter for conscious computers, and to the magnitude that each moment feels good/bad to it. I suspect that conscious computers evolved so that the most challenging bits of subject programming become transferred over to the interests of the "agents" thus created.

          Good point about the numbing. If it simply blocks local sensory neurons or disrupts the cognitive network system in general, then perhaps it's as if the damaged thumb didn't happen at all. So let's say that the numbing agent wears off and the thumb starts to throb massively. I wouldn't expect an automatic jerk away here, though I suppose that's because these specific signals wouldn't now call for such a reaction. Perhaps the heart rate would go up because of the pain, or perhaps it would go up not because of the pain but because of the now sensed damage. I can't say, but I am open to evidence either way.

          So now then… how might I effectively illustrate the concept that I’m referring to with “punishment” and “reward”? One thing that I should first clarify is that I don’t know the “how” of it. It could be that a robot that models its own programmatic reactions will thus “feel” to some degree, and so attain some sense of personal punishment/reward regarding its existence. All I really know is that existence can be horrible and wonderful for me personally, so I base my terms purely upon my own memory of associated feelings. I can’t give you that for an example, though if existence affects you in similar ways as it affects me (and I suppose that I still haven’t effectively described to you what I’m talking about), that’s what I’m referring to.

          Furthermore it seems to me that horrible existence might potentially be 1,000 times stronger than wonderful existence. Imagine a human born into an experiment designed to maximize the total magnitude of disutility that could be achieved through this subject (perhaps the lab has quantified utility measurement devices), with long-term survival and the maximization of pain as the objective. I'm saying that it might take 1,000 people with the most wonderful lives possible to offset such a tragedy of existence, thus causing the combined welfare of the total society of 1,001 subjects to come out neutral.
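
          To put crude numbers on the bookkeeping I have in mind (the figures themselves are completely arbitrary):

          ```python
          # Arbitrary numbers, purely to illustrate the bookkeeping:
          # one maximally tortured subject at -1000 units of welfare,
          # offset by 1000 subjects at +1 unit each.
          worst_life = -1000
          best_lives = [1] * 1000
          total_welfare = worst_life + sum(best_lives)
          print(total_welfare)   # 0, i.e. the society of 1,001 comes out neutral
          ```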

          Liked by 1 person

          1. Eric,
            I actually now regret that I only did one post for Damasio's theories. They deserved at least the treatment F&M got. Convergence divergence zones by themselves deserved a series of posts. If he ever publishes a refresher book, I may rectify that lack of coverage.

            In regards to rewards and punishment, he does discuss it, but using different vocabulary. He talks about biological value, something that all life strives to maximize, and sees consciousness as basically an evolved mechanism for animal life to pursue it, with our most primal impulses evolved to meet that pursuit. Since our initial discussions, I've always seen some convergence between your ideas and his.

            On the two computers, I think Damasio would respond that it's more complicated than that. Indeed, my own responses along those lines were largely informed by Amthor's, Damasio's and F&M's writing. Panksepp's writings are only deepening that sentiment in me.

            “I don’t quite understand his models yet, but would you say that from them there is no sharp distinction between our computers, and the conscious computers which evolution seems to have developed?”

            Damasio doesn’t really talk about the brain in terms of computation. Somewhere, I recall him saying that he saw the comparison between brains and digital computers as unfortunate. Along those lines, he doesn’t use words like “models” or “simulations”, preferring “images” and “image maps”.

            I can understand his sentiment, since the brain's architecture is radically different from modern digital computers, something that too many people who call themselves "computationalists" don't seem to understand. Still, it feels like his stance may be hardened by a lack of knowledge about computation more broadly.

            On punishment and reward, I think I understand the concept you're discussing. I just want to figure out its internals. But you've said many times that's not in your scope of interest, so I need to stop pestering you about it. My bad.

            Incidentally, my break from reading Panksepp might be extended. I'm finishing a fictional book, then I'll probably read Dennett's new book, though given his usual writing style, I don't anticipate it will take that long. I may return to Panksepp's book at various points. His writing requires a good amount of mental effort to process, and I've learned not to rush through such books.

            Liked by 1 person

        12. Mike,
          This one makes me apprehensive, so I've taken some time to mull things over. Of course I'm happy with your admission that you think you understand my punishment/reward concept, and were simply pumping me to get to something that I admittedly do not understand (nor do I see much reason for us "idiot" humans to try figuring out before the rest). But then you've contradicted this understanding by associating my punishment/reward with Antonio Damasio's "biological value, something that all life strives to maximize…" I suspect that you'd like for us to be in agreement (as would I), and so you've subliminally related my ideas to theorists that you've found insightful over the years. This has been great for me when I've been able to find parts that work, thus giving me a more established platform for my ideas in general. But where fundamental divergence exists, this tendency may also help prevent you from understanding the actual nature of my ideas.

          The "punishment/reward" associated with my ideas simply does not concern genetic proliferation. A person who is born into a lab setting so that (evil) scientists can demonstrate how horrible human existence can potentially be, should not suffer by means of Damasio's account (genetic proliferation), but rather by means of my account (affect). Regardless, to the extent that my ideas present generally useful descriptions of our function, his ideas should fail, and vice versa. Conventional wisdom sends me packing here rather than him given his prominence, but not so fast. A quick perusal of the wiki consciousness section gives us a great demonstration of how ineffective he and prominent theorists in general have been.

          I don't talk about conscious and non-conscious "computers" in my own writings, but rather "minds." In our discussions I abandoned the "mind" term early however, since apparently this term has become accepted to concern consciousness. Conversely I use it as a contrast with the function of "mechanics." Instead of happening more like a mechanical typewriter, the mind/computer takes in an assortment of inputs, processes them algorithmically, and provides associated output. Living cells, for example, function this way by means of their genetic material. I've actually adopted a scenario of yours as the world's first mind/computer for an entire organism. This was where a Precambrian creature with a full nerve system (though each nerve went to a single source of output) had its nerves come together in one place, so that multiple inputs could provide algorithmically processed outputs. If I had my choice I'd prefer to use the "mind" rather than "computer" term, but I'd much rather be understood.

          Regarding Panksepp, don’t worry about how many other books you read first (as if I’m beholden to you for my research!). In truth those two interviews gave me what I wanted most. I do seem to find that the more I learn about the ideas of others, the more that I find to criticise. I’ll have far more fun demonstrating the failures of Daniel Dennett, an entitled icon of the system, than my new hero!

          Liked by 1 person

    3. Mike,
      I was ready to bitch out F&M pretty hard for theorizing that distance senses were the “killer app” which brought the need for conscious function, since they went soft on insects (who clearly do have distance senses). But then I watched Jon’s video once again and softened up. I really do enjoy the guy. In the end I must say that their premise does seem flawed, though there’s so much more that seems right to me that I’d hope for them to not take offense at my assessment itself. Insects obviously do have very effective distance senses, so if they aren’t conscious, then the F&M conclusion can only be incorrect. But then what if insects do happen to be conscious as you and I suspect, thus permitting F&M to potentially be right that distance senses are what incited consciousness through the need for “mapped images”? Well that does work, though I believe that I have an explanation that’s quite a bit more coherent. Yes it’s the “computational hack” that you’ve attributed to me just above, though let me now provide a far more detailed “image map” of this supposed hack.

      Before there was any life on Earth, I suspect that all things functioned “mechanically,” which is to say that information wasn’t “processed.” By this I mean that all things functioned essentially as mechanical typewriters function rather than as digital devices do. When you press a key on a mechanical typewriter, notice that an arm is directly forced to rise up to strike the page. Conversely a digital device instead takes information from a pressed key, processes it through associated algorithms, and can then provide an associated output on the basis of such processing.

      So in what manner do I believe that "life" brought something that processed information? Well presumably way back in time there were chemical dynamics which replicated themselves somehow at a very small scale, and this evolved into what we today observe as cells which function on the basis of their genetic material. Cells "process information" in the sense that they don't just mechanically do things on the basis of what is done to them, but also run inputs through algorithms (their genetic material) that force them to produce specified proteins and such. Thus I'd say that the first "computers" emerged!

      While life has this computer nature in a cellular respect, I'd say that multicellular life as a whole continued to function "mechanically" for the most part for quite some time. For example there is the Precambrian organism that you've mentioned with a full nerve network, though a given input will lead it to make just one specific output (perhaps an associated motor neuron will be fired). I'd say that modern plants and fungi function purely mechanically, even though their cells should have central processors from which to function. But then once a Precambrian nerve network came together into a spinal cord, there should have been the potential for whole organism inputs to be processed through algorithms so that more involved outputs could be fabricated. If "2+2=" is entered into a calculator, for example, we expect an algorithm to produce an output of "4." In this manner a whole organism should have been able to develop complex output on the basis of various inputs.
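
      Perhaps a crude sketch makes the distinction I'm drawing clearer; both little functions are invented purely for illustration:

      ```python
      # Invented illustration of the distinction I'm drawing.

      # "Mechanical" function: the input directly forces one specific output,
      # like a typewriter key driving its arm, or one nerve firing one motor neuron.
      def typewriter_key(key):
          return "strike letter " + key

      # "Computational" function: inputs are run through an algorithm,
      # and the output is fabricated from the processing itself.
      def calculator(expression):
          left, right = expression.rstrip("=").split("+")
          return int(left) + int(right)

      print(typewriter_key("a"))     # strike letter a
      print(calculator("2+2="))      # 4
      ```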

      So now we have cells that function both mechanically and computationally, as well as whole organisms that also do so given their own central information processors. Yes, during the Cambrian explosion I would expect consciousness to have evolved, just as F&M expected, though not specifically to facilitate distance senses. I'd expect senses such as sight to have developed so that predators could eat more effectively, as well as so that prey could avoid predators and so on, though through central computer algorithms rather than through consciousness itself.

      Now we come to the "computational hack" that you've referenced. Under more open environments computers seem to run up against natural limits, in the sense that organisms must be programmed to effectively deal with most every situation that they face. While specific programming may be fine in more closed environments, such as for playing chess, how might evolution program for everything that a human might come up against, or even a fly? Apparently it couldn't effectively do so, and so it "cheated" by creating a small conscious type of computer that functions through a relatively massive non-conscious computer.

      For the conscious form of computer, an instantaneous "subject" seems to exist to the magnitude that existence is personally consequential for it. For example if nothing matters to you because you are comatose, then your conscious mind should not function. But the more pleasure/pain that you experience at any given moment, the more "self" that should exist from which to incite your conscious function at that moment. This is the first of three forms of conscious input, and F&M call it "affect." The second is "senses," and like Jon Mallatt I see no reason to divide them into inner and outer varieties. Then the third form of input to the conscious mind is "memory," or "past consciousness that remains." F&M did acknowledge memory to be important, though didn't quite give it the full input status of affect and senses. I think they should.

      Note that unlike F&M and many modern theorists (such as Ned Block), I don’t present each of them as different forms of consciousness. No, I consider each variety of input to be quite essential for effective conscious function. Without some level of affect, as well as sense, as well as memory, the conscious mind seems to effectively become useless.

      Beyond these three forms of input there is also the conscious processor itself which I call “thought,” and consider it to “interpret inputs” and “construct scenarios” in order to promote its self interests. (By “self interest” I mean its “affect” input. Of course this is often called “utility” in a formal sense, as well as “happiness” informally.)

      Then the only non-thought output of the conscious computer that I know of is "muscle function." You've informed me that this is also the position of the neuroscientist Daniel Wolpert.
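
      If it helps to see it laid out, here's a bare-bones sketch of that architecture; every internal detail is a placeholder of my own invention:

      ```python
      # Bare-bones placeholder for the architecture described above:
      # three inputs (affect, senses, memory), one processor (thought),
      # one output (muscle operation). All internals are invented.

      def thought(affect, senses, memory):
          # "Interprets inputs" and "constructs scenarios" in order to promote
          # self interest, i.e. to improve the affect score.
          scenarios = {
              "stay put": affect,
              "move toward " + senses: affect + memory.get(senses, 0),
          }
          best = max(scenarios, key=scenarios.get)
          return "muscle command: " + best

      # One conscious "moment":
      affect = -2                      # current punishment/reward
      senses = "food smell"            # sense input
      memory = {"food smell": 5}       # past consciousness that remains

      print(thought(affect, senses, memory))   # muscle command: move toward food smell
      ```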

      There’s a good deal more that I could say about the manner by which this tiny conscious mind functions through a far greater non-conscious computer, such as the “subconscious,” the “sub-conscious,” and my “learned line.” Details… In truth I only ventured into consciousness in order to test a model that I consider a good deal more important. I believe that I’ve been able to develop a “real” rather than “moral” form of ethics by which to help the ancient discipline of philosophy become an actual science, as well as to help better found our perpetually soft mental and behavioral sciences.

      Liked by 1 person

      1. Eric,
        Jon Mallatt has a video? I haven’t seen that yet. You wouldn’t happen to have the URL handy? A quick Google search didn’t really get me anything.

        This might come down to a lack of understanding on my part, but I don’t know that there’s really that much daylight between your views and F&M’s. I didn’t get around to it in my posts, but in their book, one of the defining characteristics they list for consciousness is its unified nature. Referring to interoception, exteroception, and affect perception as “types of consciousness” isn’t meant to say that they think they’re separate consciousnesses or anything. They’re components of the subjectively unified experience. (Of course, subjective unification doesn’t mean objective unification.)

        In the same manner, in talking about why consciousness evolved in the Cambrian, it was for distance senses, but distance senses evolved because of the benefit they provided in the arms race between predators and prey. So both reasons are right, just at different levels of the reason chain.

        I remain unsure about viewing consciousness as a separate computer. So much of its processing is tangled with unconscious processing that I don't know if that conception really clarifies. Of course, this depends on where we draw the border between the system's inputs and the system itself. Consciousness seems driven by the pre-motor planning functionality of the prefrontal cortex, but the PFC by itself is not conscious, not without the contributions of the sensory cortices, limbic system, thalamus, and many other regions. For example, the PFC has no sense of self, which appears to live in the brainstem and insular cortex.

        For your non-moral ethics, what would you say the difference is between “ethics” and “moral philosophy”?

        Liked by 1 person

      2. Eric,

        Jumping in here on a specific point.

        With what I'm about to say, I mean that the small-scale physics describes the detail of the chemical processes I'm describing.

        I see the pre-life world as chemical. All interactions that don't require breaking down or building elements are chemical, in that they involve the interaction of elements and their electrons. What we see as mechanical action, one macro object impacting on and moving another, is a result of the chemical bonding within the two objects and the chemical resistance to a 'merging' of the objects.

        A rock bouncing down a hill is resistant to merging with the rocks it hits … except bonds may be broken, fractures occur, and we see bits breaking off.

        But most chemistry is more complex than that, even the interactions of wind and water that disturb the rock.

        A mechanical view, like the one you described, may be a useful model in some respects, but I think the chemical model is the most significant for the non-life-to-life transition, and even for consciousness. I'm not saying any other models we use aren't useful in their contexts, just that the chemical model is crucial and most informative.

        The first replicators that began life are basically a novel chemical process. But cyclical chemical processes already existed. Any process can be cyclical if energy and materials are added.

        Even a materially closed cycle could occur just with changing temperature – rising and falling – to allow heat in to move the process one way, and out to move it the other. A daily solar cycle, for example.

        So, replicators are a vague boundary condition between non-life and life, and it may be quite arbitrary where one marks the boundary. As with many complex things, there might be no benefit to being too fussy about where we draw the line.

        With all that, I consider information processing to be a natural aspect of a dynamic universe. We humans tend to use the term 'information' in a sense that conveys useful information. But then our problem is defining 'useful' in various contexts. Rain is useful for the chemical reactions in rocks.

        This has a bearing on the notions of information and computation.

        There’s a problem with the term ‘information’, as commonly used, when we often think of it as something abstract and apart from the substrates it is carried on.

        My view is that information IS the material substance (I’m including fields here too) that carries it. As such, any dynamic aspect of the universe that causes some other part of the universe to change is a transfer of information, in that the destination takes on some form that is somewhat consistent with the form of the source, or the reaction with the source, or both.

        I don't see anything but arbitrary barriers, laid down as different models, that distinguish information processing from non-information processing.

        So, internally, cells are still chemical systems. That they combine their components to form fairly self-contained units does not alter the fact that they are larger scale and more complex versions of even the most basic aspects of the universe.

        That was all I wanted to say on that. It’s pertinent to consciousness, but I’ll comment elsewhere on that aspect.

        Liked by 2 people

        1. Ron,
          Yes I agree that chemical dynamics must have been quite central to the development of “life,” and tried to be as vague as possible above since I don’t know much about this at all. Like me, you sound like quite a physicalist! I’d love to hear your thoughts on consciousness as well.

          Like

    4. Mike,
      It looks like I let through quite a few grammar mistakes last time. Furthermore, no I didn’t dig up a video of Jon, but was simply referring to Ginger Campbell’s MP3 interview of him. I can sometimes be quite absent minded.

      The thesis of F&M seems vulnerable, since distance senses should not specifically be why consciousness evolved, given that non-conscious entities seem quite able to process such information. Notice that I'm able to speak to my phone, and this forces it to do various associated things. Similarly flies obviously have amazing distance senses, so if they aren't conscious then it would seem that consciousness isn't required for distance senses to be used for effective non-conscious function. Therefore we should expect those initial Cambrian predators and prey to have developed extensive distance senses without being conscious. Something else must have brought the need for consciousness, whether my own explanation or another.

      My own explanation is that computers seem to require relatively specific programming in order to be effective, and so evolution shouldn’t have been able to keep up with the necessary programming associated with more open environments. Therefore it seems to have created personally invested agents to figure things out for themselves, given that existence can be horrible and wonderful for them. We know this as “consciousness.” Of course this is starting you at the latter stages of my theory, so I can see why it can be confusing.

      When you ask what I consider to be the difference between ethics and moral philosophy, I guess I can say that my ideas go back exclusively to the aspect of the conscious mind which involves good/bad existence for any given subject. This should be a fixed scientific concept. Moral philosophers have instead delved into “oughts,” which I don’t believe exist beyond their social construction. My ideas aren’t actually “non-moral,” but rather “amoral.” Regardless they can have extremely repugnant implications, and most presume that this makes them false. But it seems to me that reality itself can be amazingly repugnant, and so an accurate model of it should naturally have various repugnant implications.

      Liked by 1 person

      1. Eric,
        No worries. Comments are casual things.

        On Cambrian predators with distance senses not being conscious, again, I would ask if there are any autonomous robots that can do what those creatures could do. It seems like the closest thing might be self-driving cars (or self driving Mars rovers), but even there their range of navigation (a 2D road system versus a 3D sea) is smaller than what the simplest vertebrate can do. I don’t know if self driving cars do scenario simulations, but they definitely build models of the environment.

        The idea that oughts, hence morality, aren't objective has been around since David Hume. A lot of moral philosophers are moral realists, but not all of them. And nihilism (descriptive or normative) has also been around for a while. I've been called a nihilist, and I'd own up to being a descriptive one.

        Have you heard of Jaak Panksepp? I'm currently listening to the BSP interview of him, and his focus, at least in terms of consciousness, has a lot in common with yours. He studies affects and emotions, punishments and rewards: http://brainsciencepodcast.com/bsp/the-origin-of-emotions-with-jaak-panksepp-bsp-91.html
        Unfortunately that episode is pay walled. And his book appears to be a bit pricey. But I thought you might find his research interesting. https://www.amazon.com/Archaeology-Mind-Neuroevolutionary-Interpersonal-Neurobiology-ebook/dp/B007HXFCIS

        Liked by 1 person

    5. Mike,
      Apparently we’re having a bit of a communication gap regarding those non-conscious Cambrian predators and prey with distance senses that I theorize. No, we don’t have autonomous robots that are able to do what they should be able to. But even though we haven’t built anything like them, this shouldn’t imply that evolution couldn’t build such things. If my phone can have “distance senses,” then one of evolution’s non-conscious organisms should certainly be able to have such senses as well. We don’t say that my phone has “vision” or “hearing,” since we reserve these terms for conscious function, but my phone is still able to use image and sound information from distant sources. Thus evolution must have been able to develop non-conscious predators and prey that were able to use such information in order to eat and to not be eaten. I personally don’t mind describing such life as “biological robots,” given that existence should be perfectly inconsequential to such things, but that’s me. So are we agreed that predators and prey should have developed “distance senses” before the rise of consciousness? (It occurs to me now that perhaps you’re defining consciousness as anything which runs simulations? If so that’s fine, though I prefer a separate definition for consciousness.)

      Wikipedia doesn't quite use the descriptive and normative classifications for nihilism, though if by "normative" one means "moral," then this does seem similar to my position. They even mentioned that some moral nihilists consider morality to be "a human construction," which is something that I might say. Still I do find references here to "right and wrong" difficult to take. I consider it far more appropriate to use the "good and bad" terms, which I use to reference punishing and rewarding existence. Furthermore I have no problem acknowledging what I consider to be a "real" morality, such as what you've shown me from Jonathan Haidt. I'm sure you recall my claim to have reduced his six components of morality back to sympathy and empathy forms of utility.

      The classification that I like to put myself under, by the way, is "amoral subjective total utilitarian," or ASTU. At the moment I don't know of any others.

      Your “descriptive nihilism” position sounds interesting. Here I doubt you mean that you’re a nihilist regarding descriptive ethics, since it’s claimed that this simply catalogs ethical beliefs. Perhaps you mean what Wikipedia calls “epistemic nihilist”? I don’t mind this one, except that I’m perfectly certain that I think.

      Thanks for the pointer to the Jaak Panksepp BSP interview! I'll soon go through his earlier interview with Ginger as well. Sensible people like him who stand up to established interests bring me hope. I consider him sensible because, given the evolution timeline, it just doesn't make sense for humans to be all that biologically different from other animals. Furthermore I love how he talked so explicitly about punishing and rewarding existence, as well as Ginger's enthusiasm for this.

      I wonder what Jaak would say about my amoral subjective total utilitarianism? I’d love to tell him that affect constitutes all that’s valuable to anything in the end, and thus the value of something’s existence to itself over a given period of time, should be represented by the magnitude of its positive minus negative affect over that period. I’d tell him that without a formal understanding of value, I don’t believe that our mental and behavioral sciences will be able to harden up. Furthermore I believe that this will teach us how to more effectively lead our lives, as well as structure our societies.

      Liked by 1 person

      1. Eric,
        On robots and Cambrian creatures, I wouldn't argue that it's impossible that non-conscious creatures like you describe could have evolved, but I think we have to think about the limited neural substrates of these early creatures. They would only have had a few hundred thousand neurons, and maybe a few million synapses (similar to modern flies and ants), which doesn't seem like enough to produce the necessary behavior based only on mental reflexes. (If it is, it seems like we'd have those robots by now.) Of course, we can only study those creatures by their modern day analogues, so who knows.

        By "descriptive nihilist", I meant that I accept that there is no objective morality, no platonic determination of right and wrong. To me, a "normative" or prescriptive nihilist is someone who takes that position and extends it to saying that there shouldn't be any morality, that it's illegitimate to talk about moral rules at all, even at a social contract level. I personally don't buy that philosophy.

        I started reading Panksepp's book. Campbell is right that it requires some commitment, but so far (still in chapter 1) I'm not finding it any worse than F&M's book, and a bit easier going than Damasio's. Like those books, it looks like it will spend a good amount of time discussing neuroanatomy. I'll let you know if he discusses morality. Based on the table of contents, he might get into it in the last chapter.

        Liked by 1 person

      2. Oh, on defining consciousness as simulation, no, not solely. I think it matters what the simulations are about, why they’re being invoked, and what results they produce. I think the simulations are what separates our conscious experience from unconscious processing, but considering anything that runs simulations as conscious doesn’t strike me as productive.

        The reason I think self-driving cars might be approaching some paradigm worthy of the consciousness label is that they build models of the environment and make predictions based on those models in relation to their own existence and goals, all as a guide to action. Simulating various possible courses of action seems like it would bring it all the closer.

        Would that be enough to trigger our intuition of a fellow conscious being? Probably not. Its goals, its version of a reward and punishment framework, are too non-organic for that, too subservient to the needs of its users. In other words, it’s a tool with no “aspirations” to be anything other than a useful tool.
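
        Roughly, the loop I'm imagining looks something like this; the sensing and scoring are entirely made up:

        ```python
        # A loose sketch of the model -> predict -> simulate -> act loop I have in
        # mind for a self-driving car. The sensing and scoring are entirely made up;
        # a real model would also include the car's own goals and state.

        def build_model(sensor_data):
            return {"obstacle_distance_m": sensor_data["lidar_m"]}

        def predict(model, action):
            # Predicted distance to the obstacle after taking each action.
            change = {"brake": +5, "maintain_speed": -10, "change_lane": +2}
            return model["obstacle_distance_m"] + change[action]

        def choose_action(model):
            # "Simulate" each course of action and keep the one predicted to be safest.
            actions = ["brake", "maintain_speed", "change_lane"]
            return max(actions, key=lambda action: predict(model, action))

        model = build_model({"lidar_m": 12})
        print(choose_action(model))   # brake
        ```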

        Liked by 1 person

    6. Mike,
      First off, I'm thrilled you're reading Panksepp's latest book! After listening to his other BSP interview as well, I'm quite sure that you've put me on to someone who would naturally be sympathetic to my cause. (Furthermore there is also Campbell, who seems to share his views.) I'll be analysing the man's ideas in greater detail this weekend, or constructing simulations to assess how his work might help my own project. Hopefully his book will inspire you to give us another great series of posts at some point, just as F&M's did. But no pressure!

      In the past I've explained that I'm more "architect" than "engineer," and this perspective might be better suited to understanding how the Cambrian explosion occurred. Consider those early fish that non-consciously ate nothing but the microbial mat. Here we'd expect some of them to naturally evolve to eat their own kind, simply given the nutritional value of doing so. Thus jaw bearing predators should have evolved, mandating that the original fish were now prey, even though neither should have developed more advanced non-conscious minds yet. Note that the prey don't actually mind being eaten any more than waves mind when they crash on the beach. It's all perfectly inconsequential, since agents do not exist here.

      Given these dynamics we'd expect more predators to be selected for, as well as prey that doesn't remain to be eaten quite as passively as the microbial mat does. Thus we should have gotten more and more advanced predators and prey, eventually with non-conscious distance senses aiding them. Why? Well if my idiot phone can have distance senses, then evolution certainly should have been able to give them to these basic predators and prey. To take this step they shouldn't have needed to be anywhere near as advanced as modern ants and flies.

      Furthermore with enough millions of years under non-conscious sensory function, the need for “autonomy” might explain why they became conscious (that is if they did, and most scientists seem to think otherwise). My answer is that perhaps evolution couldn’t program them well enough given their open environments, and so it created personal agents to somewhat figure things out for “themselves.”

      I’ll certainly join you as a “descriptive nihilist” who doesn’t believe in objective right and wrong. (I do believe in objective good and bad, however, and consider this “affect” based.) Furthermore I’ll say that rules are quite necessary in order for societies and individuals to function productively. So in this regard I’ll say that we should have rules, and thus remove myself from the “normative nihilist” classification.

      There are no true definitions for consciousness, but rather only more and less useful ones in the context of a given argument. Therefore from your definition, yes a highly autonomous car would need to be conscious — it would need to run massive simulations in order to effectively do what it does. But would this mandate that there be something it’s like to be this car? I don’t think it would. Regardless, if there is something it is like to be this car, then I would call it conscious, and if not, then I wouldn’t. Thus I’d define something which is in tremendous pain, but runs no simulations whatsoever, to still be conscious.

      If a person is functioning as a television camera, and so isn’t consciously processing any information at all, then I’d say that he/she isn’t conscious. But once “pain” or any affect becomes interpreted, I consider consciousness to exist. Here I consider there to be incentive to run conscious simulations about what to do about the pain, regardless of whether or not any simulations are run.

      Liked by 1 person

      1. Eric,
        On posts, we’ll see. It definitely feels like it should be good for at least one post, but I’ll be disappointed if it isn’t more than that. I suspect it will take me a while to make my way through the book though.

        Interestingly, and perhaps counter-intuitively, jaws didn’t evolve until well after the Cambrian. The earliest predators had to get by without them.

        On your phone, I’d just note that its distance senses (the camera and mic) are built to record information for later use. It can fulfill its function without ever needing to actually model the contents. All it’s doing is moving data. (Of course, this is changing. My SurfaceBook uses its camera to authenticate me by recognizing my face.) But of what use would a high resolution lens eye be to a creature if it wasn’t using the information to make movement decisions?

        That said, I suspect arguing about how far distance senses got before consciousness arrived is probably unproductive. The changes were likely gradual, with a lot of proto-conscious creatures existing at various points. (Of course, many might insist that any pre-mammalian, or even pre-human, consciousness was proto-consciousness.) Even arguing about distance senses themselves is problematic, since the earliest organs we'd call "eyes" were probably far less sophisticated than later versions, with consequently very simple modeling. As I noted in my final F&M post, there was probably never a "first" conscious creature, just increasingly sophisticated processing that we would at some point recognize as conscious.

        My problem with using criteria like “something it is like” to be a certain system or the ability to feel pain, is that, like the word “experience”, we have to be willing to unpack what exactly we mean by those terms. Until we do, they’re just phrases that seem to invoke something fundamental but in reality mask a lot of complexity.

        Would a robot be “feeling pain” when its sensors indicate damage and its central systems identify that information as antithetical to its goals? Even if that robot is running simulations on courses of action to avoid, minimize, or repair the sensed damage? If not, then what does biological pain have that these damage sensor signals lack?

        Similarly, what exactly do we mean by “something it is like”? How similar to our own processing does it need to be before we categorize it as “like something”? Does this only apply to systems with goals similar to ours (survival, procreation, homeostasis, etc)? As I indicated above, I think most people would intuitively answer “yes”, but if you think about it, would any engineered system that didn’t have biological looking goals ever then qualify as conscious?

        Liked by 1 person

    7. Well Mike, I suppose that when I present “architectural” examples, I should at least try to get related “engineering” issues right. Arbitrarily proposing jaws for time periods where there weren’t any shouldn’t help, and I also might have chosen a standard robot rather than my phone to demonstrate non-conscious distance senses. These robots do make movement decisions on the basis of images, sounds, and so on. But regardless of my criticism of the Feinberg and Mallatt thesis, they do still impress me.

      While I can see that there would be a sliding scale of existence for the consciousness model that you're using, for my own such model there is a theoretically distinct "on and off." Neither yours nor mine happens to be "true," though I do consider mine extremely useful. Given my naturalism, and given my memories of feeling good and bad, I believe that there is a causal property of nature no less real than "mass," by which existence can feel positive to negative. Furthermore I theorize this to incite the function of the conscious mind, or a strange sort of computer that operates by means of a far larger non-conscious computer. Thus the first time that something felt good/bad would have been reality's first spark of consciousness, from my own definition of the term. The emergence of this physical aspect of reality shouldn't have had any functional use initially, but seems to have been put to very effective use by evolution at some point, in order for effective "conscious" computers to evolve. Perhaps Panksepp's book will help make this case for me, since I can see that he believes in the ontological existence of punishment and reward no less than I do.

      I understand the difficulties of unpacking “something that it’s like,” though that’s exactly what I’m trying to do by means of the terms “punishment” and “reward.” If there isn’t a physical property of nature associated with such existence, then I don’t believe that a “something that it’s like to be” can exist. I presume that none of our computers harbor it yet. So no, as I define it, a robot would not be “feeling pain” when its sensors indicate damage and its central systems identify that information as antithetical to its goals. And not even if that robot were to run simulations on courses of action to avoid, minimize, or repair the sensed damage.

      What would physically need to occur to create biological pain, or indeed, any affect through any substrate? I consider this to be the true hard problem of consciousness. Ironically however, I don't consider solving this "how" of consciousness all that important. It's the "what" that should finally get our primitive mental and behavioral sciences on firm ground. Isn't it silly to think that these professionals can do their jobs effectively without effective "what" understandings of consciousness? I can see that Dr. Panksepp agrees.

      I believe he mentioned hooking up electrodes to animal brains, theorizing that he's permitting an animal to control its pleasure by operating a switch. I believe he said that the animal will continue working this switch with every bit of energy it has left, into exhaustion. I'll need to hear far more about that to believe it. Regardless, if as a naturalist you want evidence that there is a material property of nature that constitutes good/bad existence, I'd have you look in the same place that I found this existence as a naturalist child. I'd have you consider your own existence. From this perspective, would you say that there is a material property of nature by which existence can be horrible/wonderful for you? Or do you feel that you're simply running computer simulations by which to function?

      Liked by 1 person

      1. Eric,
        Sorry if this is a bit choppy. I had to type it in a hurry.

        “Thus the first time that something felt good/bad, would have been reality’s first spark of consciousness from my own definition of the term. ”

        Sorry for sounding a bit like a broken record, but my response here would be that we need to unpack what is meant by “felt good/bad”. “Good/bad” implies instinctive emotional responses. “Felt” implies affective modeling of those emotional responses. But would there be any point to affective modeling without exteroceptive and/or interoceptive modeling? What would there be to have an affect about?

        I think these support structures would have emerged gradually. Instinctive programmatic reactions are very old, going back to single celled organisms. So is primitive conditioned learning. I don't doubt this got much more sophisticated as the other systems evolved, but its inception seems to be well before anything we'd call conscious.

        Distance senses without the modeling seems somewhat pointless. And the modeling without those senses doesn’t really seem possible. Which, to me, implies that they evolved in tandem.

        So we have creatures with distance senses and the associated modeling. I can see creatures perhaps existing in this state for a time, but such creatures seem vulnerable when the sensory information triggers multiple contradictory programmatic responses. How many generations does it take before this creature starts modeling its own instinctive responses and doing incipient scenario simulations to see which provides the best responses? I don’t know, but it seems like something that would have been selected for in its earliest and most primitive iterations.

        “Ironically however, I don’t consider solving this “how” of consciousness all that important. It’s the “what” that should finally get our primitive mental and behavioral sciences on firm ground.”

        I’m not sure it’s that easy to divorce how from what. I think both have to be considered in tandem with a feedback loop between them. Descartes’ substance dualism views formed because he couldn’t conceive of how a mind could exist from just brain stuff. Of course, he knew nothing about cells, much less neurons, or electricity or computational systems.

        On the animal pushing the button until it drops, I've actually read about that in several places. I don't think we should find it too surprising. Since all we do is seek rewards and avoid punishments, which in a healthy organism are aligned with its survival, a mechanism that simply gives that reward without that alignment seems like a very dangerous thing for that organism. Drug addicts that destroy their lives probably only get a glimmer of what those animals experienced.

        “From this perspective, would you say that there a material property of nature by which existence can be horrible/wonderful for you? Or do you feel that you’re simply running computer simulations by which to function?”

        I don’t think this is an either / or situation. I think it’s both. Me having a horrible/wonderful day is a physical computational state in my brain, hopefully caused by events outside of it.

        Liked by 1 person

    8. Mike,
      Was your quick response choppy? Well I’ll show you “choppy,” and I’m rarely “quick”! Your conscious processor must function at ten times the rate of my own.

      As far as "unpacking" goes, I am doing what I can, though fortunately we may have just stumbled upon a crucial point. You said that good/bad implies instinctive emotional responses, but in my own writings I attribute these responses to "self," the concept which I've set up to oppose "instinct." Here "good/bad" references personal units of value, and so there is personal incentive to develop associated conscious models. Conversely I define "instinctive function" to occur behind the scenes by means of anything else.

      From there you got teleological by asking about the point of affective modeling without exteroceptive and/or interoceptive modeling. Just as what we call "a fork" shouldn't exist as such inherently, affect shouldn't inherently have a point, but rather simply exist wherever it does exist. Nevertheless I'll teleologically say that this stuff seems to cause my own existence to be good/bad rather than personally inconsequential. There should have been millions of years where existence was positive/negative to various creatures somewhat, even though this aspect of reality was not what we would call effectively implemented into the function of these forms of life. Evolution occurs "randomly," of course. (I say this teleologically as a hard determinist.) Thus something could have had distance senses without any ability to model, or even the opposite, as weird as such things can seem to us purposeful humans. We agree that evolution did put this stuff together "well before anything we'd call conscious."

      I think I understand why it’s difficult to divorce the “how” from the “what.” We once wanted to build things to fly us from place to place, for example, but didn’t know much about the “how” and thus “what” of it. Then once we started building effective machines from which to do so, we did gain such understandings.

      Nevertheless in “consciousness” we’re talking about something that already does exist, and indeed, this is us! Surely we should be able to understand ourselves without having any capacity to build ourselves? And if we do want to build ourselves, a “what” understanding of ourselves would seem quite important. (This is a bit of a “strawman argument” however. I know you’re not saying that we must build humans in order to understand humans. Nevertheless it should be quite critical for us to develop better understandings of our nature long before we build computers with phenomenal experience, since this should help harden up our perpetually soft mental and behavioral sciences.)

      I’m very happy that you’ve heard multiple accounts of Dr Panskepp’s animals which continue working levers until they drop! To me this seems like incredible support for his life’s work, as well as for my own more broad architectural models. So the animals were free to go off and eat, drink, sleep, socialize or whatever, but continued working this “feel good lever” for as long as possible? Wow! I’ve heard that goldfish with enough food will eat themselves to death, so I suppose this shouldn’t surprise me. The human has evolved what economists call “diminishing marginal utility,” and thus each unit of a good or service becomes less valuable as more are attained. Wiring a human up to such a lever should be quite lucrative, so I presume that it’s difficult to do, since we don’t have them. Regardless I eagerly await your report, though until then I have plenty of things to do (not least of which is to straighten out my own writings).

      Beyond a few technical issues, it really does seem that we’re on the same page. The main difference seems to be that we use somewhat different consciousness models, and thus associated issues are to be expected.

      Liked by 1 person

      1. Eric,
        It’s probably more accurate to say that I ran out of time yesterday when typing that comment, and had to post it without a good chance to proofread it. So even my excuse for the roughness was itself rough 🙂

        When discussing evolution, it’s very hard to avoid talking about the “purpose” of attributes. As long as we all understand that we’re talking metaphorically about adaptations and their propensity to be selected for, it’s fine. It’s really more accurate to say that we’re talking in teleonomic terms rather than teleological ones.

        Certainly mutations in evolution have no purpose. They’re effectively random. But once they occur, they are either selected for, selected against, or neutral in terms of selection. If they are expensive in terms of energy or development, then for them to develop beyond their initial rough beginnings, they must be selected for. For instance, the human brain, which consumes 20% of the body’s calories, has to earn its keep or it would end up being selected away. So, when I say that affective modeling is pointless without the other types of modeling, my point is that it wouldn’t have been selected for unless it conveyed some survival advantage, some survival “purpose”.

        I think that was F&M’s point about the eyes, modeling, and overall consciousness. Eyes are an expensive adaptation, which means they wouldn’t have been selected for if the information they gleaned wasn’t used. In other words, eyes imply mental imagery. That imagery is itself expensive, requiring a good amount of neural substrate, so it has to be used, if not by modeling, then in some other way. But the modeling itself is expensive if it’s just going to leave the animal confused when its emotional reactions trigger multiple contradictory actions. Modeling those emotional reactions (affects) and assessing options seems like something that would have arisen (in its most incipient form) pretty early.

        Now, admittedly this is a longish line of reasoning, and it’s always possible that some of these adaptations initially had other functions. The question is what those other purposes might have been.

        It seems simpler, to me, to assume that distance senses, exteroceptive and affective modeling, and even simulations, all evolved in tandem, starting with primitive proto-versions of each. Of course, this is all speculative to some degree, since we can’t examine the Cambrian creatures directly or run experiments on them; we can only study what we take to be their modern analogues.

        I do agree that we’re mostly on the same page. This discussion really just helps me think “out loud”. I hope it does the same for you.

        BTW, I have to correct a mistake I started. I initially introduced Jaak Panksepp by misspelling his last name as “Panskepp”, transposing the ‘k’ and ‘s’, and unfortunately leading you to do the same. (Someone corrected me on another thread.) Jaak Panksepp. Sorry for the confusion.

        Liked by 1 person

    9. Mike,
      I’ve already admitted that I’m dealing with a far higher intellect than my own, and now you’ve rubbed this in by observing that you actually got the previous comment right without even proofreading! 🙂 Ah, but then apparently you were the villain who tricked me into misspelling the name of my new hero, Jaak Panksepp. Of course your correction that I should have been using the term “teleonomical” rather than “teleological” puts things straight once again. Like the empathy/sympathy distinction, I do appreciate such clarifications.

      “Energy” is certainly a useful parameter to consider regarding evolution. Permit me to present my own model in a more full way however. I think you’ll find that it doesn’t hog energy, and you might even decide from it that F&M should stop presuming that early non-conscious life lacked effective distance senses.

      If you are born, live, and die in the stuff that sustains you, then there probably isn’t a whole lot more required of you than that. But then again, perhaps the excrement of you and your kind effectively pollutes your environment? Thus it might be useful to have the ability to move. (Here I’m still impressed with your Precambrian organism with a full nerve network that isn’t tied together, but did become connected in the end. Initially a given stimulus to a nerve could only incite one thing, such as a motor neuron. Once the network came together however, there should have been the potential for more nuanced reactions to occur, algorithmic in fact, and thus a central computer should have emerged. Am I correct that this theory happens to be one of your own?) Regardless these organisms shouldn’t just have developed the ability to move, but to move methodically with inputs that are algorithmically processed for an associated output function.

      Apparently a great deal of varied life emerged in the Cambrian explosion, and not just eating the microbial mat, but also eating each other. I presume that a good living could be made by attaching to something more like one of your own kind, and then sucking out nutrition. Thus we’d expect such programming to be selected for, as well as programming to avoid other predators. In order to find prey as well as not be preyed upon, information beyond the chemical analysis of the immediate environment (smell and taste for us, though not for “computers”) should have been required. Sound waves might have been useful, since various organisms would be expected to leave signature energy patterns. Light waves might be as well, since its frequency (color for us) can suggest various things, and especially when contrasted against other frequencies (to imply shapes and such). Here I’m not talking about “vision” or “hearing,” but rather distance senses that our robots today use in various non-conscious ways. I don’t know how much energy would be required to make use of such information, but I presume it would be quite negligible against the prospect of eating prey, as well as not being eaten by predators.

      Now on to my own “hack.” Here we have all sorts of things with individualized programming from which to eat and not be eaten, and so more and more advanced computers from which to operate should have been developed. I don’t mean advanced as far as “use lots of energy,” but rather as far as developing effective behavior from which to promote genetic proliferation. Given the vast assortment of ways that such things should be able to be killed off without successful offspring, they should have required vast sets of programming from which to effectively deal with a multitude of environmental contingencies. But I believe that evolution both didn’t, as well as couldn’t, simply continue developing more and more advanced non-conscious computers from which to deal with more and more open environmental contingencies. I believe that it instead “cheated” by developing a separate kind of computer, or one that we know as “consciousness.”

      The initial trick here (which should merely require the time of lots of evolving non-conscious computers), is for some of them to mutate such that there’s a second form of computer built on top of the first, by which existence feels something that’s not inconsequential to it. (I consider this “the hard problem of consciousness,” and don’t care much if it’s answered.) Existence would now feel good and/or bad for such creatures, but bring no effective behavior modification initially, since this shouldn’t yet be hooked up to anything effective. I doubt that this punishment/reward would require much energy, but still expect that it should have emerged and died out many times. At some point however I imagine that this second computer would have gotten hooked up to what the non-conscious computer uses to assess its own damage. Thus the conscious computer should have come to feel “pain” from such damage, as well as gain some ability to move away from things which were painful. And thus the non-conscious computer should have been able to turn over a small bit of duties to the conscious one. With success I suspect that some of the non-conscious computer’s sense information came to be passed over to the conscious computer. Now existing light wave information would be interpreted as colors and shades of light intensity. Now sound energy information would be interpreted as hearing. Now chemical analysis of the environment would be interpreted as things that taste/smell good and bad. Furthermore that nerve network should have come to be used for a tactile touch sense. Of course such things would only be expected to occur over eons.

      Crucially in order for a truly functional conscious computer to exist, I believe that a third form of input, beyond the punishment/reward “affects,” and beyond the “senses,” would have been required. This is some form of recording of past conscious experiences, to serve as future input information, or “memory.” Thus now we’d have “affects,” “senses,” and “memory” as three kinds of input to the “thought” conscious processor, or something which interprets inputs and constructs scenarios in order to figure out what to do for potential output, which occurs through the only non-thought consciousness mechanism that I know of, or “muscle operation.”
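
      To put that architecture in slightly more concrete terms, here is a rough Python sketch of the loop I have in mind. Everything in it is an illustrative placeholder (the names, the affect-prediction function), not anything drawn from an established model:

      ```python
      from dataclasses import dataclass, field
      from typing import Callable, List

      @dataclass
      class ConsciousInputs:
          affect: float    # current punishment/reward valence (negative = punishing)
          senses: dict     # interpreted sense information, e.g. {"light": ..., "sound": ...}
          memory: List["ConsciousInputs"] = field(default_factory=list)  # recordings of past experience

      def thought(inputs: ConsciousInputs, candidate_actions: List[str],
                  predict_affect: Callable[[str, ConsciousInputs], float]) -> str:
          """Interpret the three inputs, construct a scenario for each candidate
          action, and pick the one predicted to maximize the affect input."""
          return max(candidate_actions, key=lambda action: predict_affect(action, inputs))

      def muscle_operation(action: str) -> None:
          # The only non-thought output mechanism in this sketch: act on the world.
          print(f"performing: {action}")
      ```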

      Let me restate that the reason I believe that consciousness was required to augment the vast non-conscious computer, is because this form of computer seems better able to function in open environments autonomously, while the non-conscious computer seems to require more dedicated programming to deal with open environment contingencies. I don’t suspect that evolution was able to program these computers well enough, though the whole thing may well have been later than the Cambrian explosion.

      Liked by 1 person

      1. Eric,
        When considering how to reply, I remembered that F&M, when trying to ascertain when affective consciousness might have evolved (which I believe you consider to be consciousness), came up with behavioral criteria for it, and then looked at animal studies to see which animals had it. The criteria were: operant learning (that is, learning beyond reflexive conditioning), behavioral trade offs (to me, the key criterion), frustration behavior, self delivery of analgesics or rewards, and “approaches reinforcing drugs / conditioned place preference” (not quite sure what that last one entails).

        There was extensive evidence of these behaviors in mammals, amphibians, reptiles, and vertebrate fish. Even some snails and slugs met three of them, although only the “predatory sea snail” met the behavioral trade-off one. The creatures that met few if any were C. elegans and flatworms. Non-vertebrate chordates (nerve cord but no brain) as a group met none of the criteria. I wish I could share the table here, but it’s copyrighted stuff. It’s table 8.3 in their book if you ever read it.

        The thing is, all of the creatures with high resolution distance senses seem to have many if not all the behaviors (all of them had behavioral trade offs). It seems like the ones without distance senses (worms, chordates, etc) didn’t. Of course, these are modern day animals, not Cambrian ones, but if distance senses without consciousness were likely, shouldn’t we expect to still see some around? Particularly if we’re still seeing lots of other non-conscious species?

        On the two computers idea, I do think it matches reality in one important way. Many primary emotions and impulses originate in the sub-cortical regions of the brain, but our awareness/feeling of them (affects) seem to happen separately in the cerebrum. Of course, the difference is that the cerebrum is huge (19-23 billion neurons) in comparison to the sub-cortical regions (1 billion neurons or less). (The rest of the 86 billion neurons are in the cerebellum, which doesn’t appear to contribute to consciousness.)

        Incidentally, Panksepp doesn’t appear to buy the above distinction (although he admits it’s widely held among neuroscientists), arguing that affects for primary emotions happen sub-cortically. I think his logic for this stance is somewhat shaky, but I’m still in chapter one; maybe his case will get stronger. In any case, I’m not sure how crucial it is to his overall thesis.

        Liked by 1 person

  4. Eric,
    Replying to this comment: https://selfawarepatterns.com/2017/01/17/two-brain-science-podcasts-worth-checking-out/comment-page-1/#comment-16153

    “A person who is born into a lab setting so that (evil) scientists can demonstrate how horrible human existence can potentially be, should not suffer by means of Damasio’s account (genetic proliferation), but rather by means of my account (affect).”

    I’m not seeing the factors that make your ideas incompatible. Certainly, there is a two stage relationship here. In the first stage, we evolved certain instincts and impulses that preserve and enhance our genetic legacy. It’s why we have negative reactions such as fear, rage, or panic to drive our actions, and the consciousness mechanisms to selectively follow or resist them.

    But once those instincts, impulses, and mechanisms exist, then the relationship you’re describing exists. In an evil lab that was breeding human subjects, but stimulating all of each subject’s aversion circuits, so that they were experiencing maximum hell, the normally adaptive relationship between the two stages would be subverted.

    But that’s true any time the environment rapidly shifts. I remember Douglas Adams talking about a species of bird in Madagascar that couldn’t stop making its mating call, which was once adaptive, but wasn’t anymore because of the introduction of foreign predators, and so was in danger of extinction. Indeed, many extinctions seem to happen because a shift in the environment comes too rapidly for instincts to evolve and adapt to it. (Today humans are often the cause of those shifts, but all it takes is a natural environmental shift happening across thousands of years instead of millions to do it.)

    So the way to think about it is that the mechanism you describe exists because of biological value, and that consciousness is driven by that mechanism. When everything is evolutionarily aligned, consciousness is driven to maximize biological value (reproduction, homeostasis, etc). But there’s no guarantee that everything will be aligned. Anyone who uses birth control is getting attractive affects while sabotaging the genetic mechanism. (Although if birth control endures for the next 100,000 years, it might lead to weird changes in our species’ instincts. Of course, we’ll likely take control of our own evolution long before then.)

    Dennett’s latest book is about the evolution of minds, which seems relevant to my latest readings. I’m hoping it gives me another viewpoint on the things F&M discussed, although a part of me is anxious that it might be simplistic in comparison. I definitely expect it to be less technical. We’ll see.

    Liked by 1 person

    1. Mike,
      Good news this time I think! Firstly it seems to me that your last response illustrates that you do know what I mean by “punishment/reward.” I consider it quite important to fully acknowledge the difference between good/bad existence, and perfectly inconsequential existence (the difference apparently being affect/utility/happiness or whatever). It is with this understanding that I believe a science of ethics will finally become established. Then secondly, I’ve now come to see that Damasio’s model of consciousness actually does conform with my own. I might have realized this earlier if I’d spent a bit longer reviewing his Wikipedia info. (That “biological value” thing threw me as well, but as you’ve just noted, he was merely referencing evolution.) I’ll now run through my perception of his consciousness ideas, as well as illustrate my own approach.

      From Wikipedia he begins with a “protoself,” and it’s stated that all life has such a thing. Conversely I begin before life, or a state of nature where all things seem to function “mechanically.” Then I consider life to have brought computation/mind given that something like a cell will take input, dynamically process it through its genetic material, and then provide output based upon that processing. A plethora of microorganisms, plants, and fungi should have thus been able to evolve through such a start.

      Then I suspect that the Cambrian explosion brought computers from which to instruct the function of full organisms. This is where I believe Damasio’s protoself should actually have emerged (rather than at life itself). Supposedly it’s “…signified by a collection of neural patterns which are representative of the body’s internal state.” They continue, “The function of this ‘self’ is to constantly detect and record, moment by moment, the internal physical changes which affect the homeostasis of the organism.” I expect that a vast array of these “biological robots” emerged that accept inputs from various internal and external senses, to process them for output.

      From here my own theory is that evolution found it difficult to program them effectively enough given the wide range of possibilities found in open environments, and so it developed a second kind of computer which runs off the first that was instead motivated by punishment/reward. This is my “why without how” of consciousness.

      In the following passage their account of Damasio’s theory doesn’t give a “why,” but I think rather a phony “how”: “In this state, emotion begins to manifest itself as second-order neural patterns located in subcortical areas of the brain.[2] Emotion acts as a neural object, from which a physical reaction can be drawn. This reaction causes the organism to become aware of the changes which are affecting it. From this realization, springs Damasio’s notion of “feeling”. This occurs when the patterns contributing to emotion manifest as mental images, or brain movies. When the body is modified by these neural objects, the second layer of self emerges.[2] This is known as core consciousness.”

      While they didn’t mention punishment/reward in this section, there are other places where the concept is attributed to him. The following is one: “Damasio also proposed that emotions are part of homeostatic regulation and are rooted in reward/punishment mechanisms.” So if we give him “affect” (even though, unlike F&M, he didn’t use a formal term for it), he clearly did get the “senses” and “memory” inputs. I’ll go through my associated model first.

      I present “memory” as a potential input to the conscious processor, that exists as a rough recording of past conscious experience. The conscious processor (“thought”) interprets inputs (affect, sense, memory) as well as constructs scenarios, to promote the only thing that matters to it, or the instantaneous maximization of the “affect” input. Although concerned only about the present, consciousness is still able to be an effective instrument over time because affect from memory of the past, as well as from anticipation of the future, can make temporal issues matter presently.
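
      As a toy illustration of how past and anticipated affect can make temporal issues matter presently, here’s a simple calculation. The weights are arbitrary placeholders, just to show the idea that remembered and anticipated valence get folded into a single present value:

      ```python
      def present_affect(current, remembered, anticipated,
                         memory_weight=0.3, anticipation_weight=0.5):
          """Toy valuation: the conscious processor only 'cares' about one present
          number, but memories of past affect and anticipation of future affect
          are folded into it."""
          past = memory_weight * (sum(remembered) / len(remembered)) if remembered else 0.0
          future = anticipation_weight * (sum(anticipated) / len(anticipated)) if anticipated else 0.0
          return current + past + future

      # A painful memory and a dreaded appointment both lower how the present feels.
      print(present_affect(current=0.2, remembered=[-0.6], anticipated=[-0.8]))
      ```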

      Damasio instead leaves core consciousness without the memory input, as well as without what might be anticipated in the future. But then in “extended consciousness” he provides an “autobiographical self” that does add memory of the past. (You know more about his “convergence-divergence zones” than I do, but my interpretation is that it’s a theory of how memories connect with each other.) Beyond memory the article didn’t mention anything about how he handles the future, but he must have provided a mechanism of some kind.

      Though only a fast interpretation of an account from Wikipedia, my impression is that he’s “an engineer” who hasn’t quite mastered the big picture perspective of “architecture.” The thought is that he’s too close to brain mechanics to stand back and realize the scope of what a full consciousness model requires. Here we don’t need to know where it is that various things are happening in the brain, or even to “geek out” with an elaborate conception of memory. That could all be worked out once a full coherent model demonstrates the relationship between the principal instruments. I believe that I’ve developed such a model, and I’m also pleased to see that it encompasses his attempt.

      By the way, I wonder if you’re aware of the following statement from Wikipedia?: “He also demonstrated that while the insular cortex plays a major role in feelings, it is not necessary for feelings to occur, suggesting that brain stem structures play a basic role in the feeling process.” This would seem to put him in league with Jaak Panksepp, as well as open the door to insect consciousness.

      Liked by 1 person

      1. Eric,
        I’m not sure how accurately that Wikipedia article captures Damasio’s ideas, but then Damasio himself isn’t always the clearest writer, so it might be me who isn’t getting it. But I never understood his idea of the protoself as applying to all life. An animal needs some kind of central nervous system to have it. The wiki writer might be confusing the protoself with biological value, which does apply to all life. But the rest does seem right.

        On the brainstem and emotions, Panksepp has a long passage in his book noting (somewhat gloatingly I perceived) that Damasio has come closer to his position, but repeatedly making it clear that Panksepp had been there first by decades.

        But in his 2010 book, Damasio seems clearer in his distinction between the emotion and the conscious feeling of the emotion. He somewhat follows James / Lange theory, that the emotion is a triggering of hormones from sub-cortical structures leading to changes in heart rate, etc, but that the feeling of the emotion comes from the proto-self and core-self perception of its effects on the body. But his views may have changed since then.

        Panksepp himself doesn’t seem to make a distinction between emotions and feelings. He seems to see “emotion”, “feeling”, and “affect” as all more or less synonymous with each other, and he’s scornful towards James / Lange theory.

        F&M have kind of a middle ground, which actually feels more right to me. They agree with Damasio’s (and others) separation between the triggering of the emotion and the conscious feeling of the emotion, but don’t think it only comes through the brain-body loop, noting the extensive connectivity between the brainstem, limbic system, and cerebral cortices.

        Anyway, the thing I thought was missing from Damasio’s framework was what separated conscious and unconscious processes. F&M didn’t address it much either, but in their case they were discussing animals, which makes that distinction difficult. And higher-order consciousness seems outside of Panksepp’s scope.

        In his book, Damasio uses the phrases “core self” and “autobiographical self”, rather than “core consciousness” and “autobiographical consciousness”, indicating maybe that he no longer wants to necessarily claim levels of actual consciousness for these entities. In that sense, the wiki article might be using an older terminology that he doesn’t use anymore.

        On CDZs (convergence divergence zones), associating memories isn’t all they do. The idea is that they handle just about any association, including the ones between memories, between perceptions and memories, between memories and emotions, memories and actions, etc. And you have to think of them in terms of hierarchies, such as the parts of the concept “dog”, but also the association between that concept and the dog that chased your car the other day, a memory which itself would be a vast array of CDZs.
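
        If it helps to make the hierarchy idea concrete, here’s a toy sketch of that kind of association structure in Python. The node names are invented purely for illustration; it’s not meant to be Damasio’s actual neural model:

        ```python
        class AssociationZone:
            """Toy stand-in for a convergence-divergence zone: a node that binds
            lower-level nodes (features, concepts, emotions, memories) into a
            higher-level one, and can fan back out to them when reactivated."""
            def __init__(self, name, parts=None):
                self.name = name
                self.parts = parts or []   # the lower-level zones this one converges

            def diverge(self, depth=0):
                # Reactivating a high-level zone re-evokes its parts, recursively.
                print("  " * depth + self.name)
                for part in self.parts:
                    part.diverge(depth + 1)

        # The concept "dog" binds perceptual features; an episodic memory binds the
        # concept together with an emotion.
        fur = AssociationZone("feature: fur texture")
        bark = AssociationZone("feature: bark sound")
        dog = AssociationZone("concept: dog", [fur, bark])
        startle = AssociationZone("emotion: startle")
        memory = AssociationZone("memory: the dog that chased my car", [dog, startle])
        memory.diverge()
        ```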

        Based on that wiki quote, I’m not sure what Damasio’s current position on feelings is. He does note in the book that the brainstem does build images (such as the body map), which he links with consciousness to some degree. But his view of consciousness, and Panksepp’s from what I can tell, seems to be that it exists in several stages throughout the layers of the brain.

        My own current feel with all of this, is that what we call “consciousness” draws on information from throughout the brain, with the PFC (prefrontal cortex) acting as the information nexus. The brainstem isn’t conscious, nor any other region by itself. It’s the flow of information (including feelings) to the executive centers, to the planner, that creates conscious experience. (Which means that lobotomies effectively turned patients into zombies, which unfortunately matches the descriptions of those poor patients’ post-operative behavior.)

        Liked by 1 person

      2. Mike,
        It’s good to hear that I was reasonably close with my brief look at Damasio’s ideas. I’m sure you’re right that in wiki they meant to say that all life has “biological value” rather than “protoself,” since one makes sense and the other does not.

        I can understand the urge for Panksepp to gloat about Damasio coming closer to his position, since Panksepp seems to be considered a relative nobody when compared against the quite renowned Antonio Damasio. (If my ideas ever get anywhere, I think I’ll be too happy to gloat.)

        If Panksepp sees “emotion,” “feeling,” and “affect” to all be “conscious,” then it would seem that he defines these terms somewhat as I do. There are no true definitions. Thus if Damasio defines emotion as some sort of neural means by which emotion is consciously felt, he simply can’t be wrong. But it’s hard for me to see how this might be useful.

        James / Lange theory seems pretty backwards to me, or at least when interpreted strictly. Let’s say, for example, that you call me some horrible name. First I consciously interpret your statement. Then given this interpretation, my non-conscious mind would be expected to cause me to feel bad. This feeling would then exist as a new input from which I’d figure out what to do next.

        I’m sure you’ve noticed that I don’t use the “unconscious” term, so by now an explanation is probably in order. I don’t like that it implies the existence of a “quasi consciousness,” or something beyond conscious and non-conscious computing. When I want to blend the two I like to use “subconscious,” and “subliminal,” since I don’t perceive them to imply the independent existence of something else. (I also use a “sub-conscious” term, spoken with a slight pause, to address degraded states of consciousness, such as being asleep or stoned. I wonder if you know of a more established term to represent degraded conscious states in general?)

        The essential difference between a conscious computer and a non-conscious computer as I see it, is that the first functions by means of a punishment/reward motivation, and the second functions by means of logic statements. I consider the conscious computer to exist through the non-conscious computer however, so logic statements must do everything in the end.

        Regarding memory, I use a definition that shouldn’t be any less broad than Damasio’s, or “Past consciousness, that remains.” Like him I consider these past conscious experiences to be connected in groups, and thus a given scent might evoke a host of associated memories.

        Liked by 1 person

        1. Eric,
          “Thus if Damasio defines emotion as some sort of neural means by which emotion is consciously felt, he simply can’t be wrong. But it’s hard for me to see how this might be useful.”

          I’m not sure you’re getting Damasio’s definition and distinction. Consider this. Emotions a) come into being and b) they are felt. As far as I’ve been able to tell, Panksepp believes that these two events are more or less the same, or that they take place in the same region of the brain. Damasio makes a distinction between them, although he does seem to allow that at least some of the feeling might take place alongside the neural origin of those emotions.

          I do agree that James / Lange theory is implausible, if for no other reason because it ignores the brain’s actual wiring. (To be fair, that wiring wasn’t understood in William James’ time.) But I also think its insight that the emotion coming into being is a distinct event from the emotion being felt, is valuable. And I think Panksepp overlooks something quite important when he throws that out with the rest of the James / Lange bathwater.

          On “consciousness”, “unconsciousness”, “subconscious” and all the rest, I don’t know of any clearly delineated terminology. In truth, I have to admit that I tend to be pretty loose with the way I use those terms, although my use of the word “consciousness” itself seems to have narrowed over the years. This is one of the biggest challenges with discussing consciousness, that the definitions are vague and usage inconsistent.

          It’s similar to the problems with words like “emotion”, “feeling”, and “affect”. It takes considerable care not to fall into situations where people are arguing past each other with disparate definitions.

          “The essential difference between a conscious computer and a non-conscious computer as I see it, is that the first functions by means of a punishment/reward motivation”

          One difference between our approaches is I think it’s important to unpack the phrase “punishment/reward motivation”. Once we do, I can’t see anything other than a different type of programming directive. The main difference I can see between the processing of a lamprey and that of a self driving car is the goals of those directives. The lamprey’s programming is geared toward survival and procreation, the self driving car’s toward human safety and successful navigation. But the lamprey’s goals trigger at least a glimmer of our intuition of consciousness, whereas the car’s goals don’t.

          Liked by 1 person

      3. Mike,
        Regarding emotions coming into existence before they are felt, couldn’t this just be a matter of definition? I don’t mind others defining the term such that emotions occur before they are felt (and so might never be felt) as long as we’re clear on what definitions we’re using in any given case. It’s not like there’s a “truth” here. Nevertheless I personally prefer defining “emotions” such that they do not exist unless they are felt.

        Consider this thought: Perhaps I have some wound that puts me in constant pain when I’m conscious. Thus it may be that my non-conscious computer is signaling “pain” for my conscious processor to interpret. Let’s say that I then lose consciousness by some means that doesn’t disrupt the non-conscious pain signal. Here I would feel no pain because I’m no longer conscious, even though the pain signaling does continue. Thus it could be defined that pain exists in me here, though I prefer to say that it doesn’t. Similarly I prefer to say that there are no emotions in cases where these signals are produced, but not felt. Regardless, all that should really matter is that I use the definitions of anyone that I’m trying to understand.

        Note that it wouldn’t have been neurological wiring that caused me to consider James / Lange theory implausible, given that I’m probably at least as ignorant about such wiring as they were back then. I simply consider the concept epistemologically un-useful in relation to my own architectural models of conscious and non-conscious function.

        Regarding the pernicious nature of fluctuating terms, I certainly agree. I mean to help improve things through my first principle of epistemology, as well as by providing what I think are some very useful, as well as specific, definitions. (And fortunately your recent epistemology post does still await me!)

        On unpacking punishment/reward, I’d hope for you to compare yourself against a self driving car rather than a lamprey. If you have a first hand account of the nature of what we’re calling “consciousness,” then it would be a shame to not use it directly.

        We believe that evolution made you and our kind conscious, in order to promote our survival. But apparently this thing that it created in us to promote our survival, harbors an independent purpose of its own that’s different from “survival.” Evolution gave us a separate purpose (I think) because it could then design this auxiliary purpose in ways that serve the primary one (survival), but force the subject with this independent purpose to figure things out “personally” rather than obligate evolution to sort out such programming beforehand. That’s “the hack,” as you once called it: build something with its own personal purpose from which to function, such that it also serves your purpose.

        For example, if we could build a vacuum that enjoyed cleaning houses (reward) and hated not cleaning houses (punishment), the purpose of it would not be to clean houses. That would instead be our purpose, or the reason that we created it. Its purpose would instead be to promote its rewarding existence and diminish its punishing existence, popularly known as “happiness.” The more happiness it receives, the better existence should be for it, with the opposite being the opposite.

        The only way that anything can understand punishment/reward, I think, is to experience it personally. I know I do. If existence can be horrible for you, and if existence can be wonderful for you, then you should be able to use your memories of such experiences to contemplate the punishment/reward concept that I’m referring to. Here your purpose is to “be happy” rather than “survive,” though evolution created the means of your happiness to roughly correspond with genetic proliferation. This way it could get us to figure things out for ourselves rather than directly program us to do what we do.

        As I conceive it, this punishment/reward stuff really is quite magical, or the only thing that’s valuable to anything that exists. Thus I can’t blame Descartes for believing in magic. Perhaps he was right. But I’m a naturalist anyway, and so I believe that evolution figured out a way to develop personally relevant computers, by means of the logic statements that normal computers use. Maybe so, and maybe not. The only thing that I can’t dispute is that punishment/reward does exist, and this is because I know that existence can be horrible/wonderful for me.

        Liked by 1 person

        1. Eric,
          My issue with defining “emotion” to only be consciously felt emotion, is that it leaves us without a name for a subconscious impulse or disposition. Plenty of psychological studies have shown that we often aren’t conscious of things that affect our behavior, that bias our decisions. If we narrow the definition in the way that you suggest, then we need to come up with another term for the subconscious or unconscious emotion-like impulses.

          Myself, I’m gradually becoming resigned to the fact that, for these types of discussions, I’ll just have to use the “conscious”, “subconscious”, or “unconscious” terms as prefixes to be clear what I’m talking about. So we have consciously felt emotion and unconscious emotion.

          On comparison with self driving cars, I chose the lamprey over myself for two reasons. One, it’s an external system, just like the car, whose consciousness we can only assess as an observer, no matter how much information we have on its internals. Second, the lamprey is much closer in ability to the car than I am. (The car may look far more sophisticated than the lamprey, but its raw ability to navigate the world seems to be just starting to approach lamprey like capabilities, although admittedly the comparison is fraught with complications.) Using humans for the comparison exaggerates the differences between the car and a conscious system.

          On punishment / reward being magical, “magic” to me is just a word to refer to something we don’t understand. One of the things I push back on most often in consciousness conversations is the resistance to piercing the veil of these notions. Until we do, what goes on behind that veil may feel magical, but just the willingness to attempt that piercing deflates a lot of the mystery. The problem, of course, is that many don’t want to see that mystery deflated, but I think progress requires it.

          I would add some addendums to my observation above that punishments/rewards are just programmatic directives. They’re a collection of directives that can’t be voluntarily dismissed, and the individual directives can conflict with each other. The conflicts are resolved by the simulation engine simulating the result of following each directive (or combination of directives). The result of each simulation is in turn evaluated according to the directives and, ideally, the path most conducive to the stronger directives is chosen.
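
          In rough pseudo-programming terms, I picture something like the sketch below. It’s a bare illustration, not a claim about actual neural implementation; the directive names, strengths, and scoring are all made up:

          ```python
          from typing import Callable, Dict, List

          def choose_action(candidate_actions: List[str],
                            directives: Dict[str, float],                # directive name -> strength
                            simulate: Callable[[str], Dict[str, float]]  # action -> predicted satisfaction per directive
                            ) -> str:
              """Resolve conflicting directives by simulating each candidate action and
              scoring its predicted outcome against the directives, weighted by strength."""
              def score(action: str) -> float:
                  outcome = simulate(action)
                  return sum(strength * outcome.get(name, 0.0)
                             for name, strength in directives.items())
              return max(candidate_actions, key=score)

          # Example: hunger vs. fear. The stronger directive tends to win the conflict.
          directives = {"satisfy hunger": 0.6, "avoid predator": 0.9}
          predictions = {"approach food": {"satisfy hunger": 1.0, "avoid predator": -0.5},
                         "hide":          {"satisfy hunger": -0.2, "avoid predator": 1.0}}
          print(choose_action(["approach food", "hide"], directives, predictions.get))
          ```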

          So, input from outside the system, programmatic goals, and a simulation engine to assess specific actions in relation to the incoming information and goals, seem like the crucial ingredients for what we call “consciousness”.

          Again though, the specific goals seem to be important for most people. I’d be inclined to consider the vacuum cleaner you described as having some form of consciousness, but I don’t think most people would. I don’t know if you watched the new Westworld TV series, but “consciousness” on that show seems to be about the robots finding an overall biology-like orientation, a distinction that viewers intuitively seem to agree with.

          Liked by 1 person

      4. Mike,
        On your concern for us to acknowledge our behavior to be far more than conscious, and so your preference for defining emotion more broadly than I do, I agree with the sentiment. Still my models seem to already have this baked in. I’m the person who says that less than one percent of the human computer is “conscious,” and also that we’re similarly more “mechanical” than computer driven. Thus I may not be quite as concerned about claiming “Hey, that behavior is not exclusively conscious,” since it never is. So given the minuscule amount of computing that I’m able to call “conscious,” I’d hope the term not be shared with the main computer. That computer presumably creates “emotion,” and all conscious inputs. I may not consciously acknowledge that I’m in love with someone, even when I am. Still I’d call it felt emotions that cause me to be in love, and regardless of any non-conscious and subconscious dynamics involved in this. If I didn’t feel those emotions, then I wouldn’t call it “love.” It’s the same with “pain”: I reserve such terms for sentient existence.

        You gave good reasons for comparing autonomous cars with lampreys rather than us. My point however is that no external observations can ever demonstrate the punishment/reward, or affect, or utility, or happiness, concept sufficiently. Either one is able to find this in one’s personal existence, I think, or one shall not find it at all. No concept has nearly this importance to my ideas, so I’m pleased with your interest. If you know that existence can be horrible/wonderful for you, then that’s specifically what I’m referring to. Your own experiences are all that apply.

        This brings us to “magic.” I appreciate your use of this term to represent what we don’t understand. Thus over the past few centuries, science has unburdened us of a great many “magical” notions. But beyond that, I also consider magic to represent a void in causality. Thus here magic is not natural, but rather supernatural. In your epistemology post ahead, I’m sure you’ll enjoy thinking about why I’m wrong to claim that most modern physicists believe in “magic” by means of their interpretation of Heisenberg’s Uncertainty Principle. Soon enough. 🙂

        “So, input from outside the system, programmatic goals, and a simulation engine to assess specific actions in relation to the incoming information and goals, seem like the crucial ingredients for what we call “consciousness”.”

        Sure, though I’d preface that with “effective” consciousness, or how it’s supposed to function. (Effective “output” would be helpful as well.) If a person processed horrible pain, but without simulations about what to do about it, that should not be effective function. But should it then be classified as “non-conscious” function? No, I’d rather just call it conscious function that isn’t effective. I suspect that professionals in the mental health business would agree.

        Liked by 1 person

        1. Eric,
          Sorry, had to be brief and rough again.

          “My point however is that no external observations can ever demonstrate the punishment/reward, or affect, or utility, or happiness, concept sufficiently. Either one is able to find this in one’s personal existence, I think, or one shall not find it at all.”

          I have a couple of issues with this point. First, it seems to inherently assume that the distinction is epiphenomenal, that it has no behavioral effects. And second, it seems to make it impossible to assess consciousness scientifically.

          But from everything I’ve read, consciousness has a role in producing behavior. Certain types of behavior seem impossible without it. Which makes sense from an evolutionary perspective. Once we isolate those types of behavior, then we can observe if another system has it, and assess whether consciousness is there or not. I listed F&M’s criteria in an earlier comment: https://selfawarepatterns.com/2017/01/17/two-brain-science-podcasts-worth-checking-out/comment-page-1/#comment-16072

          “If a person processed horrible pain, but without simulations about what to do about it, that should not be effective function. But should it then be classified as “non-conscious” function?”

          My thought on reading this is to wonder what we mean by “processed horrible pain”. “Pain” seems to require a comparison of sensory information against a model of ideal circumstances. And I’m not sure suffering is suffering if there’s no capacity to imagine it not being there. At some point, without those things, it becomes just a signal that triggers a disposition, which seems like robotic reflex action.

          We also have to be careful not to see this as equivalent to someone with locked in syndrome, where they can run simulations and modeling, but can’t act on any of it.

          Liked by 1 person
