The sensorium, the motorium, and the planner

I’ve been reading Gerhard Roth’s The Long Evolution of Brains and Minds. This is a technical and, unfortunately, expensive book, not one aimed at general audiences, but it has a lot of interesting concepts.  A couple that Roth discusses are the terms “sensorium” and “motorium.”

The sensorium refers to the sum total of an organism’s perceptions, to its ability to take in information about its environment and itself.  The motorium, on the other hand, is the sum total of an organism’s abilities to produce action and behavior, to affect both itself and its environment.

What’s interesting about this is that sensoriums and motoriums are ancient, very ancient.  They predate nervous systems, and exist in unicellular organisms.  Often these organisms, such as bacteria, have motoriums that include movement via flagella, whip-like appendages driven by tiny molecular motors that propel them through their environment.

Their sensoriums often include mechanoreception, the ability to sense when an obstacle has been encountered.  This triggers the flagella to temporarily reverse direction, followed by a programmed change in direction, and then forward motion again.  This sequence typically repeats until the organism has cleared the obstacle.

These organisms also often have chemoreception, the ability to sense whether the environment contains noxious or nutritious chemicals, which again can cause a change in motion until the noxious chemicals are decreasing or the nutritious ones increasing.  Some unicellular organisms even have light sensors, which can cause them to turn toward or away from light, depending on which is more adaptive for that species.
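
To make the strategy concrete, here’s a toy simulation of run-and-tumble chemotaxis in Python (purely an illustrative sketch, not anything from Roth; the step size and the concentration function are arbitrary):

```python
import random

def run_and_tumble(concentration_at, position=0.0, steps=200):
    """Toy 1-D chemotaxis: keep moving while the nutrient signal is
    improving; tumble (pick a random new direction) when it worsens."""
    direction = random.choice([-1.0, 1.0])
    last_reading = concentration_at(position)
    for _ in range(steps):
        position += 0.1 * direction           # "run": forward motion
        reading = concentration_at(position)
        if reading < last_reading:            # signal worsening: tumble
            direction = random.choice([-1.0, 1.0])
        last_reading = reading
    return position

# Example: nutrient concentration peaks at x = 5.
print(run_and_tumble(lambda x: -abs(x - 5.0)))  # tends to end near 5
```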

Reading about these abilities, which in many ways seem as sophisticated as those of simple multicellular animals, and given the evolutionary success of these organisms, you have to wonder why complex life evolved.  (There are many theories on why it did.)  But it’s interesting that the earliest multicellular organisms, such as sponges, actually seem less responsive to their environment overall than these individual unicellular life forms.

It’s also interesting to consider what differs between the sensoriums of these unicellular and simpler multicellular organisms and those of more complex animals such as amphibians, mammals, birds, arthropods, and the like.  Obviously the addition of distance senses dramatically increases the size of the sensorium, allowing an organism to react not just to what it directly encounters, but also to what it can see, hear, or smell.

When we remember that some unicellular organisms have light sensors, and that the evolution in animals from light sensor to camera-like eye is a very gradual thing, with no sharp break between merely detecting light, having multiple light sensors to detect the direction of light, and forming visual images, then the addition of distance senses starts to look like a quantitative rather than qualitative difference.

Of course, more complex animals also have far more complex motoriums enabling a larger repertoire of behavior.  A fish can do more than a worm, a lizard more than a fish, a rat more than a lizard, and a primate more than a rat.  This increased repertoire requires more sophisticated motor machinery in the brain.

But that leads to what is probably the most significant difference, the communication between the motorium and the sensorium.  In unicellular organisms, the communication between them seems to be one way.  The sensorium senses and sends signals to the motorium which acts.  This also seems like the pattern for simple animals.

But distance senses and complex behaviors require interaction between the motorium and sensorium.  In essence, this involves higher order functionality in the motorium interrogating the sensorium for both past perceptions and future scenarios.  A good name for this higher order functionality could be “the planner”.  (I considered “imaginarium”, but that sounds too much like an amusement park attraction.)

The motorium planner interacts with both the sensorium and the lower level motorium.  It constantly queries the sensorium for perceptual information related to possible movement scenarios, and the lower level motorium for reflexive responses to each scenario.  Sometimes it does this while directing the lower level motorium in real time, but often it is considering alternatives while the lower motorium engages in habitual action.
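
One loose way to picture that arrangement is as code (an illustrative sketch only; the class names and interfaces here are invented, not anything from neuroscience):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    valence: float  # positive = attractive outcome, negative = aversive

class Sensorium:
    """Toy stand-in: 'imagines' the perceptual outcome of a candidate action."""
    def imagine(self, action: str) -> Percept:
        outcomes = {"approach_food": 1.0, "touch_hot_surface": -1.0, "wander": 0.1}
        return Percept(valence=outcomes.get(action, 0.0))

class LowerMotorium:
    """Toy stand-in: returns the reflexive pull toward or away from a percept."""
    def reflex_to(self, percept: Percept) -> float:
        return percept.valence

def planner(sensorium, motorium, candidates):
    # Query the sensorium for each imagined scenario and the lower
    # motorium for the reflexive response, then pick the best candidate.
    return max(candidates, key=lambda a: motorium.reflex_to(sensorium.imagine(a)))

print(planner(Sensorium(), LowerMotorium(),
              ["wander", "approach_food", "touch_hot_surface"]))  # approach_food
```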

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

In the human brain, the sensorium seems to primarily exist at the back of the brain.  It includes the temporal, occipital, and parietal lobes.  The top part of the parietal lobe is a region sometimes called the posterior association cortex.  This is the center of the human sensorium.  (There is also a simple sensorium in the midbrain, but it appears to operate somewhat separately from the thalamo-cortical one.)

The motorium exists at multiple levels.  The lower levels are in the brainstem and basal ganglia, which handle reflexive and habitual movement respectively.  The higher order motorium, the planner, is in the frontal lobe cortices, including the prefrontal cortex and premotor cortex.

Neuroscientists often say that consciousness requires activation of both the frontal lobes and posterior association cortex.  (This is sometimes referred to as the “fronto-parietal network.”)  The reason for this is the communication between the motorium planner and the sensorium.  It may be that full phenomenal consciousness requires this interaction, with qualia in effect being the flow of information from the sensorium to the motorium planner.

But there is some controversy over whether the motorium is required for phenomenal awareness.  Many neuroscientists argue that the sensorium by itself constitutes consciousness.  The problem is that a patient with pathologies in their motorium usually can’t communicate their conscious state to anyone, making determining whether they’re conscious somewhat like attempting to see whether the refrigerator light stays on when the door is closed.

Neuroscientist Christof Koch points out that patients with frontal lobe pathologies who later recovered reported having awareness and perception when their frontal lobes were non-functional but simply not having any will to respond.  But even this leads to the question: were they fully conscious when they laid down those memories?  Or is “consciousness” just a post hoc categorization we’re attempting to apply to a complex state on the border?

So we have the sensorium and the motorium, which predate nervous systems, going back to unicellular life.  What seems to distinguish more advanced animals is the communication between the sensorium and motorium, particularly the higher level motorium planner.  And this might converge with the view that full consciousness requires both the frontal lobes and parietal regions.

Unless of course, I’m missing something?

25 thoughts on “The sensorium, the motorium, and the planner”

  1. “patients with frontal lobe pathologies who later recovered reported having awareness and perception when their frontal lobes were non-functional but simply not having any will to respond.”

    Skip the question of consciousness for a moment, and ask a value question instead. How would the patient value that stretch of life when they had a sensorium but no motorium? Would they feel that, should they permanently relapse into that condition, their life is as good as over?


    1. Good question. I just checked Koch’s book and a paper I once read to see if there were any details that might reveal that, but neither did.

      I suspect while the patient was in that condition, they weren’t bothered by it, since to actually be bothered by it requires feeling preferences in their frontal lobes. I’m sure afterward they were happy about having recovered, although I suppose it would depend on the extent of the recovery.


        1. Do you mean that while they’re aware-but-not-motivated they would still regard that state as valuable? That’s a complicated question. They might still have reflexive reactions to negative stimuli (assuming the relevant subcortical structures are still functional), which might give the impression that they do. But it’s not clear they’d feel that preference. Put another way, their valuing of life might only be pre-conscious in that state.


  2. Hi, friend!
    I agree with all you said.
    The thing you’re maybe missing is that our consciousness unfolds even in the body’s nerve cells, not only in the brain, or at least not only in the frontal lobe. I recommend Vilayanur Ramachandran’s book “The Tell-Tale Brain”. He explains very interesting processes in consciousness.


    1. Thanks. I am familiar with Ramachandran’s book and periodically recommend it as an excellent source of information about what can go wrong in the brain, and what it reveals.

      Though I’m not sure I buy that consciousness unfolds outside of the brain. Definitely the brain is heavily influenced by the signals it is constantly receiving from the body in a tight resonance loop. But that loop can be interrupted in a lot of ways and someone can remain conscious. The one thing you can’t eliminate for consciousness is the brainstem / thalamo-cortical stack. Everything else seems optional.


  3. This post generally conforms with my own perspective. What I’ll do however is add in some of my more original positions to perhaps incite some thoughts in others.

    The “sensorium” term might more plainly be referred to as “input”, and the same could be said of “motorium” as “output”. What’s missing here then is a middle that converts one to the other, or “processing”. Mike did get into that at a higher level with his “planner”, where there is feedback from the motorium. I think it’s helpful to presume at least some processing whenever input leads to output, however. While the press of a key input is clearly processed to produce output letters on my screen, I’d say that the mechanisms which force a lettered arm to hit the paper for a mechanical typewriter may effectively be called “processing” as well. One machine is referred to as “computational” in nature while the other is “mechanical”.

    This observation leads me to think that the genetic material by which “life” functions at its most fundamental level, may also effectively be referred to as “computational”. Here various substances (input) interact with genetic material (processing) to result in associated function (output). Indeed, before the emergence of genetic material I can’t think of anything else to effectively refer to this way.

    When a microorganism enters a more toxic environment, I’d think that such material input (chemoreception) is set up to react with genetic material so that flagella output directs it to less toxic environments, or computation. Will computation be displayed when a barrier incites the flagella to turn the organism in an appropriate way (or mechanoreception)? Possibly not. That function might be more mechanical in nature, though I can’t really say.

    Then once we get into multicellular organisms, reality’s next form of computer seems to have emerged given nerve function. But I’m not sure if unique paths connecting a single form of input to a single form of output lasted long. Regardless today we generally see central nervous systems. This is to say that an assortment of inputs are processed under a single system for output function. In more advanced varieties of system we even say that “brain” exists.

    These systems are all fully non-conscious of course, though non-conscious brains do sometimes and somehow produce “consciousness” as well (or the medium by which, I presume, you’re perceiving existence right now). I consider consciousness itself to be reality’s next variety of computer.

    “PE consciousness” (or my own “Philosopher Eric” consciousness definition), may thus be considered entirely as an output of the brain. So when I speak for example of “pain”, I mean that the non-conscious brain somehow produces this experience for the conscious entity (rather than the brain itself) to receive. Or it could be “vision”, “smell”, or even a past conscious experience (also known as “memory”). So here we have inputs for this next entity (such as you, me, or a bird) to interpret and construct scenarios about how to promote valence based interests for output function (or muscle operation). I consider this teleological form of function to be the value driven place where things get “personal” for us conscious entities. And here value represents all that’s good or bad for anything anywhere, because by definition nothing is good or bad for anything else.


    1. “I’d think that such material input (chemoreception) is set up to react with genetic material so that flagella output directs it to less toxic environments, or computation.”

      I don’t think genetic information is processed that quickly, in a way where it could participate in real time interactions. From what I understand, genetic information is more used in building and maintenance. It “delegates” that kind of processing to the systems it generates. So these cellular systems outside of the cell nucleus could be considered another form of computer that predates the neural one.

      Although whether these count as computation is an interesting question. What distinguishes computation from a mechanical process? I think the distinction is that processes we normally think of as primarily physical depend on the magnitude of the energy being processed. A mechanical typewriter only works if your fingers provide a certain amount of force. But a computational system tends to keep the energy levels more or less at an even keel. What matters for them is the patterns of energy flow.

      I’m not sure which category unicellular sensory and motor systems fall into. It might be that they’re much more mechanical than computational.

      Eric, have you given any thought to how PE-consciousness, the conscious computer, is physically implemented? Is it tied to a biological substrate? Could it ever be generated by a technological system? If not, what about it makes it something that can come from carbon based biological processes but not silicon ones?


      1. Yeah Mike, it does make more sense that the “programming” would be built right into the flagella motor. I wonder if experts today have a sense of how noxious chemicals influence the flagella to behave as they do? Can they actually see a path by which chemical receptors alter motor function? If so then they should understand the relative energy processing requirements, and thus whether or not this meets your definition of “computation”. If it does then I suppose that I’d need to add another computation category for all such microbiological machines.

        Regarding the “how” of my consciousness definition, beyond being a causal output of a non-conscious computer, I’ve got no clue. My consciousness definition boils down to sentience, or what I consider to be reality’s value dynamic — a truly hard question as I see it. Even if we some day figure out how to build sentience, I doubt we’ll figure out why sentience actually results. I consider this truly weird stuff!

        Since I have no clue what happens in my head to create sentience, I certainly don’t feel qualified to say that it’s impossible for a technological system to create it. Indeed, under a causal realm of existence it would surprise me if biological dynamics were mandatory. Why would they be special?

        Back to your low energy processing requirement for computation, I’m now wondering whether or not my own consciousness definition qualifies. The first form of computer is genetic material, which I presume concerns low processing energy given associated chemical relationships. Then the second seems to qualify given neural dynamics — the chemical/electrical propagation of logical operations for a central organism processor. The fourth certainly qualifies given that our computers process information on the basis of quick and efficient electricity. So does sentience (or #3) incite processing with low energy like electricity, or is this more like pressing a mechanical key that moves other such instruments?

        Imagine a puppet that functions consciously like us, though by means of magic. Here it will have information senses like vision, as well as memory recollection. Such inputs will consciously be processed given the motivation of sentience. So it interprets inputs and constructs scenarios about what to do to make itself feel better from moment to moment. It seems to me that the displayed sentience based processing would be low energy, which is to say computational.

        The magic element to this thought experiment may make it difficult to follow, but if comprehensible enough, it might also help clarify “the tiny computer” component to my consciousness definition. If you ask this puppet how it feels, how many calculations should it make while it decides what to tell you? Not millions, but certainly a handful or more. Perhaps it feels hungry and tired and so finds the words to describe this to you. And these are the sorts of calculations that I’m referring to as “conscious” in the human as well. In us I’m not referring to a vast set of “brain calculations”, but rather outputted conscious dynamics. Hunger, vision, and so on are output of brain experienced by the conscious entity, and they are what’s being referred to rather than what causes them. From this level I can be confident that less than one thousandth of one percent as many calculations are being made as in the vast supercomputer which outputs PE-consciousness.


        1. Eric,
          Taking a quick perusal through Roth’s descriptions of unicellular systems, it sounds like the receptors are in the membrane. Depending on what is received, chemical reactions are altered. (The text is a bit vague about whether this happens along the membrane or through the cytoplasm.) The chemical mix alters the configuration of the molecular motor that drives flagella or cilia.

          Then there’s this passage (just power through the chemical details to the conclusions):

          But how does E. coli know that it is swimming in the right direction? In order to do so, it must test whether the aspartate concentration increases or decreases. This requires the measurement of a change in concentration. At the same time, the receptor must always remain optimally sensitive, even at major changes of the concentration of the substance under question. Both problems are solved by an ingenious chemical process which is illustrated in Fig. 6.1. It is based on the fact that the degree of methylation of the glutamate resting at the receptor (the methyl group CH3) is modified by attaching SAH. SAM serves here as the “donor” of SAH (Fig. 6.1, right). The Tar receptor is the more susceptible to aspartate, the higher the degree of methylation of the glutamate. The receptor can control the methylation process via an inhibition of a methyl transferase B (i.e., an enzyme that transfers a methyl group) and this happens the stronger, the more the receptor is activated. At increasing aspartate concentration, the receptor becomes less and at decreasing concentration more susceptible. Thus, the receptor possesses its own negative feedback or adaptation mechanism. Since this feedback of methylation and demethylation takes a few seconds, it creates a “short-term memory” of what happened a moment ago. Similar things happen at the recognition of other nutrients or toxic substances.

          We can interpret this entire process as the simplest example of goal-directed behavior known in nature.

          Roth, Gerhard. The Long Evolution of Brains and Minds (p. 71). Springer Netherlands. Kindle Edition.

          Whether this is computation is a matter of interpretation, but it certainly seems to be processing information.
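
          If it helps, here’s that adaptation loop rendered as toy code (a drastic simplification of the passage above, with a single made-up constant standing in for the methylation chemistry):

          ```python
          def receptor_step(signal, sensitivity, adapt_rate=0.5):
              """Toy version of the adaptation mechanism Roth describes:
              activity reflects the signal relative to a slowly adjusting
              baseline, so only *changes* in concentration register -- a
              few seconds of 'short-term memory'."""
              activity = signal - sensitivity       # responds to change, not absolute level
              sensitivity += adapt_rate * activity  # slow negative feedback (methylation analog)
              return activity, sensitivity

          sensitivity = 0.0
          for signal in [0.0, 1.0, 1.0, 1.0, 0.5]:  # a step up, then a drop
              activity, sensitivity = receptor_step(signal, sensitivity)
              print(f"signal={signal:.1f} activity={activity:+.2f}")
          # activity spikes when the concentration changes, then fades back toward zero
          ```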

          On the low energy requirement, don’t get too hung up on that. Even technological computers don’t follow it exactly. For example, when processing video, your computer may increase the clock speed, which might increase the temperature leading to the fan coming on, and burn up the battery faster. Computation is always a physical process. It’s just that generally the magnitude of the energy being moved around is not the main concern, but the specific patterns.

          There are usually ancillary systems that magnify the output signals and mediate the input signals into whatever the main system’s working energy level is. We call these ancillary systems input/output systems in technology, and neuromuscular junctions in biology.

          So are you saying that the computations of the tiny computer are what we consciously work out, essentially the computations we consciously engage in?


          1. [I somehow missed this discussion. Deciding to jump in … here]

            Not sure if I should curse you or thank you for making me look up how chemotaxis works. (Took a while.) I’m going with … thanks. Here’s my quick (oversimplified, but essentially accurate) summary:

            The receptor is a protein stuck in the membrane, part of it outside the cell, part of it inside. When it binds a chemical on the outside, it “shrugs”, and so does something physical on the inside of the cell. That changes a different protein (CheA) sitting near the membrane inside the cell, making CheA active or inactive, depending on what bound on the outside (an attractant or a repellent). If CheA is active, it activates another protein, CheY. CheY floats around inside, so it can act as a messenger and float to the motor. If an active CheY binds to the motor, the motor goes into tumble mode (as opposed to go-straight mode).
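
            A cartoon of that chain in code, grossly simplified (the on/off booleans are my own shorthand; the real pathway works through phosphorylation rates, not switches):

            ```python
            def motor_mode(bound_ligand: str) -> str:
                """Cartoon of the pathway described above: receptor binding
                toggles CheA, CheA's state is relayed to CheY, and active
                CheY at the motor switches it from 'run' to 'tumble'."""
                che_a_active = (bound_ligand == "repellent")  # attractant binding deactivates CheA
                che_y_active = che_a_active                   # active CheA activates CheY
                return "tumble" if che_y_active else "run"

            print(motor_mode("attractant"))  # run
            print(motor_mode("repellent"))   # tumble
            ```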

            As for whether this is a computation, I think you just have to define computation first and see if it matches. And I think your energy level idea is worth considering … and then discarding. 🙂

            I think a more significant difference is whether the chain of signals, the chain of causation, involves arbitrary symbols. An arbitrary symbol is one that has no dependence on its physical form. I.e., it would be easy to change the physical form and keep the same functionality. A neurotransmitter is an example of an arbitrary symbol. You could change it to a different one and you would get the same effect (as long as you change the recipient to have the appropriate receptor). The signaling in chemotaxis described above does not involve any such symbols. Each step in the pathway does a specific physical thing to the next thing in the pathway, kinda like knocking over dominos.

            So chemotaxis is definitively signal/information processing. There’s a whole field called semiotics that deals with various kinds of signals, including symbols.

            *


          2. Since you mention tumble mode, I’ll just clarify that the uniform direction of a unicellular organism is often the result of a collection of flagella (a superflagellum) spinning together. When one of these reactions takes place, it causes one or more of the individual flagella to change direction, which induces tumbling. After a second or two, the superflagellum reforms and the organism is going straight again, but now in a random new direction. If that new direction still causes issues (moving toward toxicity, away from food, hitting an obstacle, etc.), the reaction happens again and the sequence repeats itself.

            Every computation is a physical process, and it ultimately always comes down to physical events unfolding like dominoes. Although in computation we would expect not just a sequence of events, but also selection among actions and looping or some other kind of feedback. I quoted the sequence above from Roth because it does seem to involve looping and selection. But I’ll agree that this is a relatively simple mechanism, and that to the extent it might be computation, it’s pretty simple computation.

            The arbitrary symbol requirement is similar to other philosophies of computation. I don’t necessarily disagree, although I think you’re hasty in dismissing the energy attribute. Biology is messy, and we don’t see the clean delineations we see in technological systems, which are built to implement a symbolic processor design (such as a Turing machine).

            Biology comes at it from the ground up, starting with the mechanisms discussed above and opportunistically using what emerges. Which is to say, a particular molecule or interaction isn’t guaranteed to represent one arbitrary thing, but may represent several in different contexts and at different times. A molecule which is an inhibitory neurotransmitter in certain locations may be a neuromodulator at others.


      2. Very informative Mike. My guess is that given the existing “pipeline”, the cell membrane would be the path for chemoreception. Going through the cytoplasm would to me seem problematic.

        On the passage that you quoted, it’s interesting how standard chemical processing delays may effectively be used as “memory”. At this point to me it seems most appropriate to consider this stuff highly advanced mechanical function, though certainly with information processing traits. It’s definitely cool stuff, but “goal-directed”? Here they might have confused teleonomy with teleology.

        I’m pleased that you’ve suggested, “the computations of the tiny computer are what we consciously work out, essentially the computations we consciously engage in”. Yes. Thus even a “Pinocchio” would possess the conscious form of computer as I define it, sans brain. And apparently our brains do a great deal of computation in order to produce consciousness. It’s the ends rather than the means that I’m referring to with this term.

        What I envision is sentience, or reality’s value dynamic, as consciousness. Just as electricity drives technological computers, under my models sentience drives functional conscious computers. Then beyond the sentience input there is an informational form such as sight, and a memory form where past consciousness is somewhat retained. The conscious entity naturally interprets its three forms of input and constructs scenarios about how to make itself feel better. I unoriginally call this stage “thinking”. The only non-thought output I know of is then called “muscle operation”, also unoriginally.

        Unless of course I’m missing something…


        1. Eric,
          I think, like the border between conscious and non-conscious systems, the dividing line between teleonomy and teleology is broad and blurry. The thing is, biological goal oriented behavior, or at least goal-like behavior, precedes consciousness by billions of years. What we refer to as consciousness, sentience, etc, are systems in service of that pre-conscious goal/goal-like behavior.

          Indeed, I don’t think we get sentience of the type you discuss until we have time sequenced prediction systems. Before that point, sensory perceptions lead to automatic action. Only when we could assess multiple courses of action would actual feeling be adaptive.

          This new description of PE-consciousness seems compatible with a lot of theories of consciousness. I guess my only unease with it comes down to it reifying a model of consciousness our minds build, but that I don’t think reflects reality, except for subjective experience. In that sense, a better label, at least from my perspective, might be “subjective computer” or “subjective experience computer”.


      3. Mike,
        I guess that I interpreted the following line from Roth a bit more teleologically than need be: “We can interpret this entire process as the simplest example of goal-directed behavior known in nature.” But I now suppose that he might be referring to teleonomy there. And without the existence of “life”, we naturalists probably wouldn’t consider anything teleonomical to exist at all, let alone teleological. So I do share his enthusiasm about this.

        On sentience not being adaptive until after the emergence of time sequenced prediction, I certainly agree. But what about before sentience was adaptive? It seems to me that in order for sentience to become adaptive, it would first need to have existed epiphenomenally. Thus here a byproduct of brain function would be to create an entity which feels good/bad, though to virtually no behavioral effect. Then with enough iterations apparently “the little computer” did end up finding a niche. It seems to me that any naturalistic “why” explanation for consciousness (such as my own as-yet-unmentioned “autonomy” account) will need to follow this format. Regardless, are you good with the emergence of sentience before it was adaptive for reasons of its evolution? If not, then would you say that sentience somehow evolved to be adaptive instantly?

        I’m pleased with my new “Pinocchio” thought experiment. Thus when I say that the conscious variety of computer does less than one thousandth of one percent as many calculations as the non-conscious brain that produces it, people should now have less difficulty understanding what’s being proposed. I can’t expect them to assess my models if they don’t understand what I’m referring to.

        With this clarification you’ve mentioned that you do still have some unease with the model. I’m not quite sure what you mean by this, however. Could you try explaining your concern further? Specifically I’d rather not guess what you mean by “reifying a model of consciousness our minds build, but that I don’t think reflects reality, except for subjective experience.” In any case, beyond “consciousness” I do consider “subjective experience computer” to be appropriate as well.


        1. Eric,
          On Roth, I cut off some of his thoughts in the quote above. Here is the full paragraph of which that sentence you note is just the first.

          We can interpret this entire process as the simplest example of goal-directed behavior known in nature. It is goal-directed in the sense that it guarantees—at least for a short time—survival of the organism by enabling it to approach that which promotes survival (i.e., food) and to avoid that which is aversive (i.e., toxic substances). E. coli has neither a nervous system nor reason nor insight, and its behavioral repertoire is of the greatest simplicity. But it has a short-term memory, although it is—in contrast to protozoans—unable to “keep in mind” something for more than a few seconds and it cannot learn, i.e., acquire a new behavior based on individual experience. If there are more substantial changes in behavior, this happens by changes over generations, and not during the lifetime of an individual. Yet E. coli is one of the most successful organisms on earth.

          Roth, Gerhard. The Long Evolution of Brains and Minds (pp. 71-72). Springer Netherlands. Kindle Edition.

          “Regardless, are you good with the emergence of sentience before it was adaptive for reasons of its evolution? If not, then would you say that sentience somehow evolved to be adaptive instantly?”

          That’s not how I see it. You’re conceiving of sentience as something separate and apart from those time sequenced simulations (which I’ll call imagination). But I see them as two sides of the same coin. Feelings are inputs into the imagination simulation mechanisms. Each feeling is essentially a nomination from the lower level reflexive systems for a certain action.

          Consider nociception and pain. Nociception is what causes a withdrawal reflex, such as yanking our hand away from a hot oven. But nociception is not pain. We feel pain, not to impel us to withdraw our hand (the reflex takes care of that), but to remember not to put our hand there again. If we’re tempted to put our hand there again, our brains quickly run a simulation which factors in the input from the last time that action was attempted, and the planner overrides the reflexive desire to put our hand there again. Ongoing pain can also spur simulations about what to do to alleviate the damage, or what not to do to exacerbate it.

          To be clear, without the simulations, there is no sentience. Sentience is part and parcel of those simulations. Put another way, sentience evolved in the interactions between the planner and the reflexes. I can’t see that it has any other existence beyond that.

          “Specifically I’d rather not guess what you mean by “reifying a model of consciousness our minds build, but that I don’t think reflects reality, except for subjective experience.””

          What I mean by this is that our brains build a model of selected aspects of its own processing. That model is adaptive as a feedback mechanism. But we have a tendency to view that model as representing something separate and apart from the processing that happens outside of it. We discussed this before on the difference between planning a sandwich vs altering heart rate. I think the difference is only in what the introspection mechanisms have access to. Any other difference is an artifact of the model, not an actual ontological fact.

          That said, it’s undeniable that we subjectively experience the model. So by accepting the label “subjective experience computer” for the tiny computer, you insulate yourself from that concern. Although I suspect you disagree with what I said in the paragraph just above.


      4. Mike,
        Actually my models also work with your “two sides of the same coin” analogy. Note that this tiny subjective experience computer is theorized to exist as output of the vast non-conscious brain. Thus here we have two sides of the same coin — one is non-conscious, and then on the other we have its subjective product.

        You told me, “You’re conceiving of sentience as something separate and apart from those time sequenced simulations (which I’ll call imagination).” Well that depends upon how you’re using the “imagination” term there. The non-conscious brain outputs sentience and therefore it’s not separate and apart from sentience. It creates it. That’s the sensible interpretation for your “imagination” which puts us square.

        But theoretically you could use the term as what I call “thought”. This is the conscious processor which interprets inputs and constructs scenarios in the quest to feel good. If you’re saying that this must be present in order for there to be sentience then I think that you’ve gotten things backwards. This is what does those handful of computations that Pinocchio does magically. This is the tiny computer which does less than 1/1000 of 1% as many calculations as the vast non-conscious computer does. It’s incited to exist by means of sentience rather than creates sentience. It’s like the words in your head when you’re interpreting what I say. Surely you don’t mean that the tiny computer causes sentience? Heck, I’m the one who has only recently been able to explain what I mean by such a thing. Thus I’m confident that we can cross this off the list to the other one.

        On nociception and pain, my model deals with them essentially as you’ve mentioned. The massive computer is responsible for the jerking away of nociception, as well as the creation of pain. It’s the tiny computer however (like the words in your head) that consciously figures out what to do. Furthermore vague recordings of such experiences create “memory” for future uses. Here we can learn.

        You’ve brought up “ontological facts”, and maybe that’s part of the issue here. You can always fault my models for not being true, but even I don’t refer to them as such. I consider them effective epistemology. The field of physics is full of effective epistemology (such as F=ma) while our mental and behavioral sciences are not. So surely I can’t be faulted for proposing epistemological models?


        1. Eric,
          Actually, by equating your conscious computer with subjective experience, I now realize that we’re talking at two entirely different levels. From your perspective, I’m talking about the interactions between different parts of the big non-conscious computer. The generation of sentience from the non-conscious computer involves the movement planner receiving signals from the lower level reflexive circuitry. From your perspective, I’m just talking about the details of how the sentience gets made.

          With this equating, I’m tempted to think that your debate with Wyrd on how well humans do computation acquires a new relevance. Although I could see an argument that the computation we’re doing subjectively is of a different type (usually) than crunching symbolic equations.

          But I guess my question is, what does considering phenomenal experience to be its own computer buy us? I think everyone agrees the brain generates subjective experience. (Even those who consider it to be an illusion acknowledge that the brain generates and perceives the illusion.) But what does calling this a computer give us that we didn’t have before?


      5. Wow Mike, it feels really good to get that out of the way. Yes, in order to understand my own consciousness model, it must be taken at a different level of abstraction from how the brain creates sentience. I’m agnostic regarding that extremely (as I see it) hard problem, though I don’t mind you proposing a conceptual answer. We naturalists believe that the brain causes it somehow, which is where I leave the question. This is an ignorance which doesn’t really bother me. In my opinion there are far more critical and attainable questions to address.

        I certainly consider subjective computation to be of a different type than crunching symbolic equations (not that we can’t do that as well as you’ve implied). The model that I’ve developed makes this clear. I’ll go into this further below, but note that technological computers are compelled to function by means of electricity, while PE-conscious computers are compelled to function by means of sentience. Very different stuff indeed!

        So what does considering phenomenal experience as a “computer in its own right” buy us? In a word, this analogy buys us a potentially useful “framework”. I consider virtually everything sentient to gain their understandings by means of the analogies that they make against previous understandings. If I didn’t have the “computer” framework to build from, then I doubt that I’d have been able to develop the consciousness model that I have. Where would my associated ideas otherwise be “put”? And of course our mental and behavioral sciences are in desperate need of a generally accepted definition for consciousness, as James Cross recently began with in your “A neuroscience showdown on consciousness?” thread. Furthermore I don’t mean to imply that the “computer” term must always be taken this broadly. Computer scientists may have reason to use far tighter definitions, which is why I agreed with the theme of Wyrd Smythe’s long series of computation posts (here: https://logosconcarne.com/2015/11/10/transcendental-territory/#comment-27915). James of Seattle mentioned something similar to this theme just above as well.

        It’s a glorious Saturday over here right now, and I can think of nothing more fun than setting out my lawn chair and getting into the details of Philosopher Eric consciousness with one or more friends!

        Earlier in Earth’s history genetic material somehow came to exist, and was propagated by means of evolution. I currently refer to this function as “computation”. Nerve connections began to evolve in multicellular life as well, and this led to central nervous systems for efficient whole organism function, or another form of computer as I define the term. (Scientists actually identify “and”, “or” and “not” gates here, so at least this association is quite standard.)

        It’s commonly thought that non-conscious computers can figure out most anything, though evolution suggests otherwise. Apparently these central organism processors hit and overcame a difficulty some time around the Cambrian explosion. Regardless of how extensively or not they are programmed, more “open” environments challenge non-conscious forms of function because they can’t be outfitted with sufficient programming to deal with such contingencies.

        I propose that in some of these amazingly advanced non-conscious computers, sentience emerged as an epiphenomenal by-product of brain function. So here we have PE-consciousness, which is to say that something feels good/bad, though imparting no behavioral effect. But at some point in evolution this conscious entity must have been put in charge of deciding something for effective organism function, and did so well enough to evolve into what we see today. So what do we see today?

        Today we see vast supercomputers which output tiny phenomenal experience computers, and apparently these tiny computers address the “autonomy” weakness of non-conscious computers. They bring teleological function, which is to say a purpose driven variety. And what is the purpose? It’s to feel good and to not feel bad. Theoretically the better something feels each moment, the more valuable existence is for it over that period, with the opposite causing negative value. Thus here evolution didn’t need programming instructions for each situation that might come up in more open environments (which it couldn’t effectively provide anyway). Instead it was able to leave things at, “If this happens (such as body damage) then the conscious entity will be punished, whereas if this happens (such as eating nutritious food) then it will be rewarded.” Here a thusly created agent gains a level of responsibility for its own welfare, and thus has incentive to personally figure out what to do in certain regards. Conversely the non-conscious entity possesses no value dynamic, and therefore cannot even theoretically be compelled to function with such autonomy.

        I realize that here I’m presenting something which may seem a bit “spooky” to you right now. Sorry about that, but it is what it is. To deny its existence is to deny value, and I know damn well that existence can be anywhere from horrible to wonderful to me. So I do accept it. From this point I could go the way of Descartes (and I’m not ruling that out!), or rather the way that I have gone, or with perfect causality. Consider some other spooky but accepted dynamics in science today. Four examples would be gravity, electromagnetism, the weak interaction, and the strong interaction. Furthermore “value” is a spooky dynamic which I can be far more certain exists than any of them!


        1. Eric,
          “We naturalists believe that the brain causes it somehow, which is where I leave the question.”

          I think this is probably the biggest difference between us. I am interested in how the mind works. Models that don’t address that might be interesting for other things, but those things aren’t why I keep reading books on the brain. Nothing wrong with interests being different. I know a lot of people who are endlessly interested in mathematics, but that’s a bug that I never got bit by.

          On a useful definition of consciousness, I’m currently reading Michael Gazzaniga’s latest book on consciousness. He discusses the history of the concept. It’s interesting that the Greeks didn’t have a specific word for it. Even Descartes, who somewhat coined the term, used it inconsistently, waffling between it being about thinking, or thinking about thinking, or something else. Its meaning has been muddled from its earliest beginnings.

          It’s also interesting that the term etymologically evolved out of “conscience”, the word we use for moral sensibilities. It seems to show how tangled the concept is with the idea of whether a particular entity is worthy of moral concern. Which just brings me back to the fact that consciousness only exists subjectively in the eye of the beholder.

          On flexible behavior, here’s where I think we’re likely to disagree again. I don’t think subjective experience, in and of itself, provides what you’re crediting it with. The ability to engage in time sequenced simulations is probably the most complex thing the brain does. It involves most of the thalamo-cortical system, 16 billion neurons. Crediting all that functionality to subjective experience, which strikes me as the froth on the top of the system, seems wrong. At best, the highest level reasoning happens at the level you’re discussing, but most of the sausage is made in the areas you relegate to the non-conscious computer.

          “From this point I could go the way of Descartes (and I’m not ruling that out!), or rather the way that I have gone, or with perfect causality.”

          I think we have enough neuroscience under our belt to thoroughly rule out the Cartesian view. (This is why philosophy without being familiar with and heeding the current science is impotent.) On the other hand, we have pervasive evidence for the fundamental forces you list. Unless objective reality is an illusion (in which case how do we know each other exists?), the mind is a complex composite system, an information processing system, a nexus of information flows and biological impulses.


      6. Mike,
        As I see it, you and I are interested in the same essential thing. We simply approach this mutual interest from two separate perspectives. That’s actually a good thing. Broadly speaking, you are an “engineer” while I am an “architect”. I need you, or at least people like you, while you need me, or at least people like me. The structure which science needs to build here simply cannot be accomplished without both engineers and architects straightening out their respective sides. It’s true that my side has tremendous problems, but your side is also harmed given the failure of my side. Without generally accepted principles of metaphysics, epistemology, and axiology, science suffers in general, and apparently most strongly in human related fields such as neuroscience and psychology.

        It’s the engineer’s job to care about how the brain creates phenomenal experience, so I’ll grant that I can be faulted for being less than supportive regarding your quest. Regardless, there is also a need for people like me who don’t get hung up on specific technical matters. It’s instead our job to develop broad conceptual models. (And of course sometimes engineers like Lisa Feldman Barrett think that their engineering creds also give them architectural creds. Her “We couldn’t find it so it doesn’t exist” platform offers a great illustration of this. Still, I can’t blame her too much for trying to do what needs to be done, even when so ill-conceived.)

        It’s interesting that you bring up the etymology of “consciousness”, since yesterday a friend had me read something from P. M. S. Hacker which began that way. I’m quite aware that words evolve into existence and change given cultural agreements. It’s the ideas behind our terms that matter, not their packaging and etymological history. Good enough for introductions I suppose, though both Hacker and Gazzaniga have me rolling my eyes there.

        What’s wrong with consciousness existing subjectively in the eye of the beholder? Here you speak of the only ontological fact regarding reality that you can ever be perfectly certain of. Everything that you perceive itself merely exists through it and so needn’t be True. As a standard explorer of reality, it’s this one known Truth which I speak of reverently.

        On the froth at the top of the system versus where the sausage is made, that’s a mutual theme (though expressed by you in engineer speak and me in architect speak). Note that I’m the one who even dares to say that the vast non-conscious computer does more than 100,000 times as many calculations as the tiny “Pinocchio” computer by which you and I experience existence.

        Last time I went further as well. Why does the vast supercomputer fabricate the little computer at all? I think because without a purpose driven element, it does not have the autonomy which more open environments demand. Evolution simply should not be able to program for endless contingencies. So what did it do instead? It outputted a tiny teleological computer to get done what it otherwise could not. Pace Chalmers, there’s nothing hard about the “why” of consciousness.

        Speaking of Chalmers and dualism, I’ve been meaning to bring him up again. When last he came up I agreed to stop calling him a substance dualist, pending evidence. I’d merely been referring to him this way through hearsay. Massimo Pigliucci is one person I’ve heard speak of him this way. (He seems to have a general disdain for the man, though either I haven’t heard that history or have forgotten it.) Peter Hankins is another, though in the “Chalmers” write up at his site he only mentions the “dualism” accusation briefly.

        By calling him a “pandualist”, you’re another source of this hearsay. To me this term implies “Everything is conscious given supernatural forces”. So I’ve now gone back to that post. You mentioned a paragraph on substance dualism, and then in contrast wrote this:

        Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature similar to electric charge or other fundamental forces. This group seems to include people like David Chalmers and Christof Koch.

        So he claims to be a naturalist from the contention that consciousness requires special as yet undiscovered causal stuff? Hmm… I see no reason to get exotic by proposing new stuff. We know very little about biology and all sorts of other not new stuff. And why would this new stuff have to be everywhere? Sounds like idle talk to me, and made famous given that he’s got “good game”.

        In contrast, I begin by defining consciousness as sentience, or something that we have excellent evidence of. Then beyond my detailed architectural model for functional consciousness, there is my single principle of axiology. It reads:

        It’s possible for a computer that is not conscious to produce a punishment/reward dynamic (or “value”) for something else to experience. This may even drive the operation of a functional conscious form of computer.

        No special new stuff proposed here. Of course there’s plenty that does remain speculative regarding biology, and so plenty of conceptual room for a “hard problem” to be overcome beyond our understandings. There’s certainly nothing universal about biological function however, and I’d think far less so for sentience.

        There are two reasons why I believe it’s best to technically remain open to the possibility of non-causal dynamics. The first is because it’s simply responsible to grant anything in the realm of possibility, to indeed be in the realm of possibility. Then the second is political. If you take this possibility off the table, then you can credibly be portrayed as being non objective. Our side seems to need all the help it can get. Furthermore, who on any side shall challenge my single principle of metaphysics? It reads:

        To the extent that causality fails, nothing exists to figure out anyway.

