Did smell lead to consciousness?

Smell has apparently always been a peculiar sense.  The sensory pathway of smell information to the brain runs completely independently of those of the other senses.  The pathways for the other senses run through the midbrain and thalamus and are then relayed to cortical regions.  But smell goes to the olfactory bulb behind the nose, and from there directly to various forebrain regions such as the amygdala, hippocampus, and prefrontal cortex.

[Figure: Fish brain diagram showing the olfactory bulb, telencephalon, optic tectum, and cerebellum. Image credit: Neale Monks via Wikipedia]

This independent pathway is ancient.  From the earliest vertebrates, it appears that smell has always gone directly to the telencephalon (the forebrain) while the other senses went through the optic tectum (midbrain) region.

This is strange because, while other sensory information, such as vision, hearing, and touch, is routed to the forebrain in mammals, allowing the formation of sensory images in the cortex, this does not appear to happen in amphibians and reptiles.  The creation of sensory images other than smell in the forebrain appears to be an innovation of mammals and birds (perhaps arrived at independently, making it an example of convergent evolution).

This has led many biologists to conclude that the telencephalon in fish and reptiles is basically just a “smell brain”.  This seems borne out by experiments in which a fish’s telencephalon was destroyed and the fish still seemed able to go about its normal life.  However, such fish did lose the ability to learn or anticipate consequences, including learning new spatial navigation.  In other words, they lost the ability to remember and imagine, which seems to indicate that there is more at work in their forebrain than just smell.

But if you think about it, smell is far more entangled with memory than the other senses are.  Smells of predators and prey linger after they’ve departed.  For an animal to make use of smell information requires memory and imagination: accessing past associations of the smell, whether it indicated a predator or some food source, and thinking about what the smell means in the current situation.  This isn’t necessarily true for vision, hearing, touch, or taste, where reacting reflexively to current stimuli can still be adaptive.

In other words, the rise of smell might have led to the rise of memory and imagination.  And as I’ve written before, sentience, the ability to feel, is only adaptive if it can be used for something.  This is why most neuroscientists see feelings, as we consciously perceive them, as linked to the same regions where imagination is coordinated: the frontal lobes in mammals, or more broadly the forebrain in non-mammalian vertebrates.  Which is to say that smell may have been what led to the evolution of sentience.

Todd Feinberg and Jon Mallatt, in their book The Ancient Origins of Consciousness, discuss and argue against this proposition.  For them, it seems far more reasonable to see vision as the sense that drove consciousness.  And strictly in terms of image-based consciousness, they may be right.  But in early vertebrates, most of that image-based consciousness appears to have been focused in the midbrain region, a region that doesn’t appear capable of memory and nonreflexive learning, behaviors typically associated with sentient consciousness.

An interesting question to ponder: if vision, hearing, and the other senses are processed primarily in the optic tectum, the midbrain region in fish and reptiles, how much of that sensory information actually makes it to their telencephalon, that is, into their memories and imagination?  Humans have no introspective access to the low-resolution images formed in our own midbrain region, only to the ones we form in our cortex.  But is the telencephalon of an amphibian or reptile able to access the visual information from its optic tectum?

John Dowling, in his book Understanding the Brain, points out that a frog, which can catch and eat flies with its tongue, can only see a fly if it is moving.  A frog in a cage stocked with fresh but dead flies will starve.  Dowling asks what the frog is actually “seeing” in that case.  It may be that the frog’s optic tectum can generate reflexive tongue movements, but that the frog itself has no conscious access to a visual image of the fly, or any other visual images.

And yet, the telencephalon of these species can inhibit the reflexive reactions from their tectum.  In order to do so effectively, it seems like they should get some information from their tectum, their midbrain region.  In fact, Feinberg and Mallatt indicate in their book that some visual and auditory information has been shown to make it to the telencephalon.  But it seems likely, similar to how we receive processed information from our midbrain, that this comes in the form of feelings rather than detailed sensory information.

This has led many biologists to conclude that amphibians and reptiles aren’t conscious.  As I’ve noted before, whether to call a particular species “conscious” is ultimately a matter of interpretation.  However, we can say that their experience of the world is very different from ours.  That experience does not appear to include visual and auditory images, although it may well include olfactory ones.

So it’s possible that we are conscious today because of smell.  This proposal is strange and counter-intuitive to us because we’re primates, a group of mammalian species where the sense of smell has atrophied.  But for most animals, smell is a major part of their worldview.

What do you think?  Did we use smell to climb the ladder of sentience and then, as primates, kick that ladder loose?  Does the lack of visual images mean fish aren’t conscious?

54 thoughts on “Did smell lead to consciousness?”

  1. Worlds of visual cues, worlds of auditory cues, and worlds of olfactory cues. Intuitively, it makes a lot of sense that the integration of these cues might have given rise to rudimentary forms of consciousness. Why not tactile and gustatory too? Great post!


    1. Thanks Mike. The interesting thing is that our experience is of all those cues combined. The idea that for many species, some of them never get combined, or they only get combined in a very limited manner, is a powerfully counter-intuitive one, but it may be reality.


  2. It seems like you’re pretty dismissive of Carroll’s and physics’ explanation for dark energy. Is there another one that you favor?

    No. But scientists themselves admit that they have no idea what dark energy is.
    It just has to be such a wonderful force, because without this miraculous force nothing would make sense.
    Although in this particular case it was specifically about (repulsive) gravity, dark energy is still essential in the creation and action of that repulsive gravity.


  3. It is true that smell is different from other senses, mostly because its qualia are so multitudinous. Consciousness people seem to like color vision very much, and talk about color theorists who have not experienced red, or inversion of red with green, and so on. But the qualia of olfaction are the most complex of any sense we have. Where our color vision breaks down to three primary colors, and taste breaks down to five (or six, if capsaicin sensitivity is counted), our noses have receptors that differentiate between more than a hundred different odorant molecules, producing trillions of possible combinations, qualia that we cannot properly talk about (descriptions by wine tasters are one futile example). Unlike elements of color and taste, no culture has ever given names to “primary smells”, assuming such things even occur in nature. We name smells with the same names as the things that produce them, because our brains label smells with the things that produce them. This is why smelling something that you have smelt before can help recall things related to it.

    (Trillions, by the way, is not an exaggeration here. There was an article in Science a few years ago that estimated that an average human is capable of differentiating at least 1 trillion different smells. (I researched this topic recently for a blog post of mine.))
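    To get a rough feel for where numbers like a trillion can come from, here is a back-of-the-envelope combinatorial sketch in Python. The figures used (a palette of 128 odorants, mixtures of 30 components) are illustrative assumptions in the spirit of such studies, not the exact design of the Science paper.

    ```python
    from math import comb

    # Rough illustration of how odorant mixtures explode combinatorially.
    # The numbers below are assumptions for illustration only.
    n_odorants = 128    # assumed odorant "palette" size
    mixture_size = 30   # assumed number of components per mixture

    possible_mixtures = comb(n_odorants, mixture_size)
    print(f"{possible_mixtures:.3e} possible {mixture_size}-component mixtures")
    # On the order of 10**29 -- far beyond a trillion, even though only a
    # fraction of these mixtures would be discriminable from one another.
    ```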


  4. For an alternative view …

    Maybe there’s smell consciousness in one place, and visual consciousness in another, and tactile consciousness in another, etc. Maybe the ability to combine the output from more than one of these is an advanced cognitive ability.

    As a side question, is there any information on the evolutionary history of things known to be involved in memory like the hippocampus? Do fish have anything analogous? Reptiles?

    *


    1. A while back I read something about ant brains where they have circuits for major capabilities that don’t seem to be integrated into any overall system. I have a hard time seeing an ant as conscious with that kind of fragmented system.

      Feinberg and Mallatt’s interpretation is that consciousness in fish and reptiles resides in the midbrain region. But given that we don’t have conscious access to our own midbrain processing, and the frog example in the post, I’m increasingly finding their logic in this case unconvincing.

      Just looked it up. F&M do confirm that fish and reptiles have structures like the hippocampus, amygdala, and striatum (basal ganglia) in their forebrain.


        1. James, it always comes down to what definition of “conscious” you’re using. What do you consider the necessary and sufficient properties for a system, or sub-system, to be called “conscious”?


          1. Mike, you are, of course, inviting a very long discussion. But we can start at ten thousand feet:

            I say Consciousness is about processes. A system is conscious to the extent that it has the capability to perform certain kinds of processes. I call the basic, simplest kind of process which counts as a consciousness-type process a psychule. Compare with the smallest unit of a substance – the molecule.

            So when I say people have different definitions of consciousness, I mean (or expect) that people have different ideas about what constraints on a process make it a psychule. When I hear/read about a theory, I translate it into what that theory requires for the psychule.

            So I can do this for panpsychism, functionalism, representationalism, etc. But I also have my preference, and here it is:

            A psychule is a process of the form: input —> [mechanism/physical context] —> output, wherein the input is a symbol (a “symbolic sign vehicle” for you Peircians), the mechanism is organized for a purpose, and the output creates value relative to the meaning of the symbol.

            This is my preferred choice because this kind of process begins the explanation of qualia, which is pretty much a reference to the meaning of the symbol in question. Further, the more complicated versions of consciousness, like higher order thought, global workspace, etc., can be explained by combinations of (my idea of) psychules.

            So one of the fallouts of this theory is that any system can be evaluated in the context of its consciousness. And subparts of systems can have their own consciousness. So rocks would not have consciousness (usually), and most robots and computers would have at least simple consciousness. A single neuron probably is not conscious (long discussion here), but two in sequence might be, depending.

            Plain enough?

            *


          2. Thanks James. I like long discussions so if this turns into one, that’ll be fine with me 🙂

            So, using your psychule concept (which you’ve described before), it seems like anything that processes information is conscious. Or perhaps you’re saying that consciousness is a psychule with certain constraints, with the varying constraints leading to the different definitions of consciousness that people use.

            If so, the constraint that I think most of us hold is that the psychule must function similarly to the way we do (the conscious we), that is, the way it processes information must be similar to the way we process information; in other words, it must be like us, or possess “us-ness”. The less it resembles the way we work, the less likely it is to trigger our consciousness intuition.

            But if so, then calling an aspect of ourselves that we can’t consciously access “conscious” seems to fall outside of that. My lower spinal cord processes information when my patellar tendon is struck, leading to my knee jerking, but most people wouldn’t call that “conscious.” If we go higher up to the human midbrain region, which most neuroscientists see as primarily (if not completely) a reflexive system, that system appears to meet the definition of a psychule (doesn’t it?) but not the us-ness criterion, at least not fully.

            So when we look at an organism that has all of its sensory information processed in the midbrain except for smell, which is processed in the forebrain, then the midbrain region in that organism again seems like it’s a psychule, and the forebrain could be considered another one. Which is more like us? The fish midbrain probably functions along the same lines as ours, but our forebrain includes integrated sensory information, while the fish’s may only include smell information. Both forebrains seem like psychules, but the fish one seems very different from the human one.

            I’ve remarked a few times that the midbrain region could be considered a sort of subterranean consciousness, but it doesn’t appear to be our consciousness, although our consciousness receives feelings from this lower level system and allows or inhibits its impulses. Obviously definitional issues abound here.

            Okay, I’ll stop now and see what you think of these ramblings.


            Hey Mike, everything you said seems pretty much correct. What I would like to point out is that most people are working with their intuitive view/definition. Historically this intuitive view is insufficient for understanding what is really going on. That’s the point of coining “psychule”. We can know intuitively a lot about water and what it does, but knowing about H2O brings our understanding to a whole ’nother level. Thus, with this understanding of the psychule, I can explain how Integrated Information Theory is mostly right, as well as global workspace theory, and others. And then again, qualia.

            Although once someone gets an intuitive idea of how things work, it becomes really hard to sell them on something which contradicts even a small part of their (possibly mistaken) understanding. Case in point, see my ongoing discussion with Brent Allsop here on Conscious Entities blog. (I jump in at comment 20).

            *


          4. James,

            “Historically this intuitive view is insufficient for understanding what is really going on.”

            I have two responses to this. First, I think “what is really going on” is shown in cognitive neuroscience, which is making steady progress. But many people look at that progress and deem it only progress on what Chalmers calls the “easy problems”. (There’s nothing easy about them, but they’re at least addressable, as opposed to the hard problem. Of course, this assumes the hard problem is anything more than the psychological difficulty of accepting that dualism is false.)

            Which leads to my second response, that “what is really going on” in terms of consciousness presupposes that there is a fact of the matter, and as I’ve indicated before, I don’t know that anyone can really establish it, at least without making decisions about what definition of consciousness they’re working with, and any such definition, at least in objective terms, will be controversial.

            I completely missed that CE thread. I don’t have strong thoughts about inverted spectrum. I’d like to discover that there is some necessity to the colors we see, such that both our reds are the same, but I’m not sure we know enough about nervous systems yet to say that. Or have I missed something?


          5. I guess I would respond that you can use the psychule framework to describe what is going on as a fact of the matter at an abstract level, and without ever using the word consciousness. Also, yes, neuroscience is necessary for finding out what’s going on in the human brain, and you could just sit back and wait for what falls out, but I think there is some value in looking at “what’s going on” at an abstract level. There are ethical issues, especially regarding AI, that really should be resolved before neuroscience answers all the easy questions.

            *


            Have you heard of Antonio Damasio’s CDZ and CDR concepts? CDZs (convergence-divergence zones) are regions where neural hierarchies converge for a particular mental concept. Of course, the concept itself exists throughout the hierarchies, but the CDZ is the culmination point where the concept becomes “registered”. CDRs (CDZ regions) are clusters of CDZs. Damasio sees CDZs, which are microscopic, existing in the “many thousands” in the brain, but there being only a few dozen CDRs.
            https://www.cell.com/trends/neurosciences/pdf/S0166-2236(09)00090-3.pdf

            I mention this because I see some resonance between his concept and the psychule. I know the psychule is meant to be more abstract. I suspect you’d consider both the CDZ and CDR to be psychules, as well as maybe the whole brain?


          7. Hey Mike, well, I have [heard of CDZ’s] now. 🙂

            Actually, the link you gave had a paywall, but I used this review article.

            In fact, in my model, the next more complicated cognitive ability I consider after the most basic psychule is the creation of a concept, which would be a process where the input is more than one symbol, and the output is a single symbol which essentially represents the combination of the input symbols as a single concept. As you would suspect, that would be a convergence in Damasio’s model. I could see how a reverse-type process could create a divergence.

            As for the regions, I don’t think I see a single region as instituting a process so much as being an area where many highly convergent processes, and so high-level concepts, have their outputs.

            When considering the main mechanism for human consciousness, the one which Damasio refers to as the autobiographical self, the question becomes: where can you find a single mechanism that can take all the multitudinous possible concepts as inputs and combine random or arbitrary sets of them?


          8. Sorry James. Didn’t mean to send you to a paywalled resource. I need to pay more attention when pulling these up while on my university network.

            On finding a single mechanism in the brain, if I understand what you’re asking, I don’t think there is one. There are lots of integrating regions, but no one region that integrates everything. Each sensory region integrates information about that one sense, the precuneus in the parietal lobe integrates across all the senses, the frontal lobes integrate information for action planning, the basal ganglia for habitual action, and the midbrain region for final action decisions, but no one region has access to everything.


          9. Who said it was in the cortex? 🙂 Based on the Damasio paper I read, the hippocampus is a pretty good candidate. All of the sensory modalities “converge” right into it. In fact, I don’t think it’s the hippocampus mostly because you can lose it and still manage all those inputs. (Memento was such a great movie.) There are other sub-cortical candidates, like the thalamus. I hypothesize that those connections from the cortex to the thalamus constitute the same kind of convergence as seen going into the hippocampus. But I also suspect that some combination of sub-cortical structures, possibly including the hippocampus, constitutes the mechanism in question, i.e., the autobiographical self.

            *


          10. Also, note: when I refer to a “single mechanism”, mechanism = isolatable system where you can identify inputs of said system and outputs of said system. The system can have sub-systems/sub-mechanisms. Theoretically, the system could be your brain plus your left shoe. The question would then be how does including your shoe help anything?

            Also note that the system of interest does not necessarily equal the mechanism of interest, especially when the output of the mechanism can later feed in as an input to that same mechanism. This idea becomes more important when discussing the basic philosophy of the psychule. The mechanism of a psychule has to have a purpose, but this purpose comes from the thing that organized the mechanism, and this latter thing may no longer even exist. This latter thing will also be a “mechanism”, and possibly a psychule mechanism, but also possibly a mechanism in a very broad sense. “Natural Selection” is one such non-psychule system/Mechanism that creates other mechanisms.

            Chew on that!

            *


            I actually think your best bet for the single mechanism would be the brain overall. No one structure, including the thalamus, gets it all. (Consider the smell pathway I discussed in the post, which runs completely independently of the thalamus.) You might be able to narrow it down to the brainstem / cerebrum system, excluding the cerebellum (although even the cerebellum has recently been implicated in some cognition).

            Your description of other mechanisms like natural selection reminds me of another Damasio concept: biological value, that which aids in the preservation of homeostasis and overall survivability, although I think Dawkins’ selfish gene may be a more elegant, if less poetic, description.


          12. Mike, you said:

            “Consider the smell pathway I discussed in the post, which runs completely independently of the thalamus.”

            I would have wagered that this was inaccurate, but I had to take a peek. This paper suggests differently. I only got as far as the abstract. Here’s the sentence where I stopped:

            In fact, anatomical evidence firmly demonstrates that the [mediodorsal thalamic nucleus] receives direct input from primary olfactory areas including the piriform cortex and has dense reciprocal connections with the orbitofrontal cortex.

            If you think I didn’t go deep enough, or you have countervailing papers, let me know. I still like my chances with the thalamus. 🙂

            As for isolating mechanisms, my current interest is in finding the edges and details of the “autobiographical self”. That is the mechanism most people are referring to when they talk about consciousness. That is the one we can generate reports from and about. My understanding of the state of the art in neuroscience says there are clearly some processes that feed into this mechanism, and which we can report, and some that don’t, and we cannot report. You may be right that there will be no readily discernible locus, but my gut instinct is that there should be one. And like I said, from what I read, the thalamus (etc.) is looking pretty good.

            *


            The first hit I get is a ScienceDirect article, which has this snippet:

            For many years it was thought that the olfactory pathway also passes through the thalamus, from olfactory cortex through the mediodorsal thalamic nucleus to prefrontal cortex. However, recent careful anatomical studies have shown that the pathway between olfactory cortex and prefrontal cortex is mostly direct, with only a small contingent of fibers going to mediodorsal thalamus (Ongur and Price, 2000). Within prefrontal cortex, the primary olfactory area consists of the medial and lateral orbitofrontal cortex.

            https://www.sciencedirect.com/science/article/pii/S0896627305002333

            From which I draw two conclusions (one of which I already knew). First, I need to be more careful about absolutist language, particularly when talking about the brain. The smell pathway appears to be mostly independent, but mostly is not completely. Mea culpa. But second, if you go searching the scientific literature for a snippet you want to find, you’ll find it, but that doesn’t mean it’s where the lion’s share of the evidence points.

            On the autobiographical self, I think you have to regard the entire thalamo-cortical system as contributing to it. I used to think the core self could be located a bit more specifically, but I no longer think that, although a lot depends on how we define “self”. What we call consciousness is composed of several interacting systems, and it can be present in varying extents.

            The closest things to a soul might be the pulvinar in the thalamus or the anterior cingulate cortex, but even saying that is an incredible oversimplification since these regions are convergence areas for patterns that form throughout the brain, meaning that by themselves, they are nothing.


          14. Mike,

            Regarding the olfactory/thalamus connection, I never said or suspected that most (or any) connections go first to the thalamus. I know that the non-olfactory senses go to the thalamus and then out to the cortex, but under my current hypothesis none of those (probably) contribute directly to the autobiographical self. My current hypothesis is that all of the significant inputs are coming from the neocortex. This includes the olfactory.

            Regarding systems, I hypothesize an input —> [mechanism] —> output framework, but I do not identify consciousness to just the mechanism. The system can include the medium of the output, especially if that output medium is or can become the input medium for subsequent processes. In the case of the autobiographical self, I currently suspect that the primary output is a workspace (presumably sub-cortical, but not necessarily in the thalamus), and yes, that (global) workspace. Actually, now that I think about it, maybe that workspace is inside the mechanism, and a given experience isn’t finished until there is output from the workspace. Hmmm …

            *


          15. James,
            What about the sub-cortical structures makes them attractive candidates for the mechanism? I ask because you seem to have an ongoing conviction that it must be sub-cortical.

            When you say “medium of the output”, what do you mean by “medium”?

            When you say the workspace may be in the mechanism, do you mean all of its contents, or just the “pointers” to the content?


            Mike, I try not to be convicted (ahem), but every angle I look at it from seems right. Look at the neocortex. It’s basically a sheet of two-dimensional repeating units – cortical columns. This seems like a good place to store concepts. You’ve heard of the Jennifer Aniston neuron. I will lay money that it’s more likely a Jennifer Aniston column. So it makes sense to send a two-dimensional array of pixels there and let them “converge” to concepts (lines, motion) in neighboring columns, and converge some more (shapes, textures) into neighboring columns, etc., until you get to, say, persons, and then particular persons. You tend to get similar things clustered together, which is fine.

            But now, say you want to associate things from far away in the sheet, like a person and a particular house. Or say you want to associate a smell you smelled 4 hours ago (while eating) with the terrible stomach pain and nausea you are feeling now. Or say you want to associate a tiger with a freshly dead gazelle with your need to eat tomorrow with your friend with your friend’s ability to make a loud noise with your fear of being eaten with a sharp stick with a possibility that the tiger won’t respond as you hope. It doesn’t make sense to hook up every column with every other column. It makes better sense to send one axon from every (or most, or maybe just some) column to a central place. Then you could devise mechanisms to decide which of those connections are currently most important (i.e., worth attention). Ideally you can send it to a structure or workspace that can uniquely represent any of those concepts, and any combination of those concepts, with far fewer neurons. I expect the structure of neurons in such a workspace would be significantly different from the structure in the neocortex. Just a guess. I expect some of the concepts in this workspace (say, names like Alexandria and Ocasio and Cortez combined with woman and congress and dance moves) will be worth remembering, in which case there should be some mechanism to recruit and alter appropriate cortical columns. Seems to me these mechanisms would best be approximately equidistant from the various cortical columns.
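            As a crude illustration of the wiring argument above, here is a toy comparison in Python of all-to-all connections between columns versus one connection per column to a central hub. The column count is an arbitrary round number chosen for illustration, not an anatomical figure.

            ```python
            # Toy wiring-cost comparison: fully connecting every column to every
            # other column versus giving each column a single link to a central hub.
            # The column count is an arbitrary illustrative number.
            n_columns = 150_000

            all_to_all_links = n_columns * (n_columns - 1) // 2   # grows as n**2
            hub_links = n_columns                                  # grows as n

            print(f"all-to-all: {all_to_all_links:,} links")   # ~11.2 billion links
            print(f"hub:        {hub_links:,} links")          # 150,000 links
            ```

            The point is only that a hub-style architecture keeps the number of long-range links growing linearly with the number of columns rather than quadratically.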

            Re: “medium of the output”, I think I may have been conflating ideas of output and storage. By medium, I had in mind a workspace structure (for Semantic Pointers) as opposed to the cortex structure of columns. My use of “output” almost always means neurotransmitters, but in a particular organization of target sites. So if the workspace is outside the mechanism, neurotransmitters impinging on the workspace neurons might be the output, whereas if the workspace is inside, neurotransmitters impinging on cortical columns might be the output.

            Whatcha think?

            *


          17. James,
            The trick is to be convicted of the right things 😉

            Your conception of the distinction between the cortex and sub-cortical structures matches one I used to have, that maybe the sub-cortical structures are the execution engines and the cortex is more or less just storage. Given the physical differences between the cortex and sub-cortical regions, and the resemblance of this conception to how modern computer technology works, it’s a compelling narrative.

            But I had to abandon my version of it as I read more neuroscience. Biology just doesn’t make those kind of neat distinctions. The cortex is definitely structurally different, but I’m not sure that difference, in and of itself, should be taken as denoting functional difference. The cortex appears to be just as much involved in execution as the sub-cortical structures.

            Indeed, cortex near a particular sub-cortical structure often has the same or closely related functionality to that structure. For example, there is continuity between the hippocampus functionality and the functionality in the neighboring entorhinal cortex (spatial mapping and spatial-temporal mapping), or between the amygdala and the nearby ventromedial prefrontal cortex (both are heavily involved in emotional feelings).

            It seems increasingly more likely to me that evolution discovered a mechanism for generating additional computational substrate (the cortex) and ran with it, and as it did, functionality migrated into that additional substrate. And, of course, all that additional substrate allowed for far greater elaborations and enhancements of that functionality than had been possible with just the nucleus clustering architecture that had previously dominated.

            It’s worth remembering that the human cortex contains an average of 16 billion neurons, compared to one billion in all the sub-cortical structures combined. In other words, about 94% of the computational capacity of the brain (outside of the cerebellum) is in the cortex. We shouldn’t be too surprised that the sub-cortical structures provide foundational support functionality, but that a lot of the stuff of cognition happens in the cortex.

            Of course, the cortex is crucially dependent on the thalamus for long range communication between different regions. That dependency seems to be ancient. The thalamus is part of the diencephalon, a developmental structure between the forebrain and midbrain that appears to go back to early vertebrates. I suspect the type of work the forebrain does has always required the network hub functionality of the diencephalon.

            I think what a lot of people struggle with is in assuming that consciousness must be part of the foundational support functionality offered by sub-cortical structures. It’s a very intuitive notion. But the evidence for it just doesn’t seem to be there. Of course, that could change at any time.

            Global workspace theory has always struck me as obviously true. My only beef with it is that it avoids getting into the details, so people tend to see all kinds of possible sites for it, including the prefrontal cortex, the precuneus, the hippocampus, and a lot of other structures, or spread between multiple regions. GWT seems compatible with just about any conception of how the brain works, a slipperiness I don’t regard as a strength in a theory.


  5. I suppose that I can challenge the idea of smell leading to consciousness the same way that I’ve challenged Feinberg and Mallatt’s visual image map speculation here. Just as it’s conceptually possible for one of our non-conscious robots to use light information (“vision”) as input to process for output function, or chemical analysis of the air (“smell”) for this, it should also be possible for a non-conscious creature to use such information for output function. If sense input information is able to exist before there is any consciousness (as our robots plainly demonstrate), then it may not be productive to say that consciousness was set up to facilitate a conscious variety of smell, regardless of how various brains happen to be wired. It might be better to look for something unique to conscious function, and so something that our robots do not have.

    A more fertile potential might thus be “sentience”, or a trait that I consider to “fuel” the conscious form of function. The theory is that the non-conscious brain uses sensory input information, and then outputs things like “smells” and “images” to exist as input to a tiny conscious form of function each moment.

    How could consciousness be a tiny output of a vast non-conscious brain, when it’s all that we know of existence? But that’s exactly why. Because consciousness is all that we know of existence, we are anthropocentrically led to believe that consciousness is a far more prominent aspect of brain function than it happens to be. Instead, the non-conscious brain that produces it should be massive. Consciousness itself, however, may be effectively considered a tiny outputted form of function that does less than one thousandth of one percent as many calculations as the machine which creates it.


    1. Eric,
      So, should I take it that you’re relegating amphibians and reptiles to non-conscious status? It’s a conclusion many biologists reach. In their view, only mammals and birds are conscious. Of course, we can’t know what kind of experience, if any, amphibians and reptiles have in their forebrain, only that what happens there seems very different from what happens in ours.

      “A more fertile potential might thus be “sentience””
      I actually addressed sentience in the post. Of course, in my mind, sentience requires imaginative simulations (whether of the past or future). Given your model, I would think you’d agree with that point, but I know you consider feelings to be something that can exist “nakedly” by themselves.

      “How could consciousness be a tiny output of a vast non-conscious brain, when it’s all that we know of existence? But that’s exactly why.”
      I don’t really understand your point here. But if I do understand how you currently conceive of the tiny computer, it’s a production of the whole brain. From a certain point of view, I can see that matching up with a view of consciousness being produced by the whole brain, although I’m still not sure how productive calling the result a “tiny computer” really is. It implies that there’s a separate piece of machinery there, when you seem to actually be talking about either a virtual or logical machine, or maybe some emergent phenomenon. Almost a form of naturalistic dualism?


      1. Mike,
        I’m not sure what about my comment implied that I presumed entirely non-conscious function to amphibians and reptiles.

        On sentience I’m actually more square with what you said in the post rather than with your reply to me now. There you said “And as I’ve written before, sentience, the ability to feel, is only adaptive if it can be used for something.”

        Right. I consider sentience adaptive as a crucial variety of input to the conscious form of function. But before functional sentience evolved, there must have been non-functional sentience from which it evolved. Here existence might feel horrible/ wonderful, though without an adaptive path.

        In your reply to me now however you said, “Of course, in my mind, sentience requires imaginative simulations (whether of the past or future).”

        So here you and I have things backwards. While I have sentience motivating imaginative simulations, you seem to have imaginative simulations outputting sentience. (Actually I think the implications of imagination can incite valence too, though I consider valence to be the true “fuel” for the process.)

        On consciousness as a “tiny computer”, in a discussion with a mutual friend who’s also opposed to this perspective, I may have unwittingly outmaneuvered him somewhat about that. I said “Here you’ve taken the “computer” term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.” To this he said, “No. Let me be clear about what I mean: A “computer” is something that “calculates.” Modern use implies a device, but the term dates back to the 1600s where it meant a person who “calculates.” During WWII, Bletchley Park employed many such “computers,” and NASA employed them into at least the 1970s. The important definition is “calculate,” and it’s well-defined in computer science. Which, as I said, predates actual computers by centuries.”

        Isn’t this something, Mike? Each of you has been in opposition to my “consciousness as a computer” analogy, and yet apparently the term originated way back in the 1600s as a form of what I’m proposing now: conscious function! Another good quote from him. “There’s a common phrase for new CS students: “Computer Science isn’t about computers any more than astronomy is about telescopes.” (https://logosconcarne.com/2015/11/10/transcendental-territory/#comment-28203 )

        People compute by means of consciousness and so figure things out. Here I theorize informational inputs such as vision and hearing, as well as memory inputs where past consciousness is somewhat recalled, as well as valence inputs such as pain and shame. Apparently they’re interpreted and scenarios are constructed in the quest to promote valence-based interests. I call this “thought”, with its only output being “muscle operation”.

        I don’t mind saying that my consciousness is a “logical machine”. Does it help if I refer to consciousness that way? No device there. But I cannot possibly (to me) be a “virtual machine” (as in “virtual reality”). My consciousness is real.

        Naturalistic dualism? You might just as well have said “monistic dualism”. Or a leftistic rightism. Or openistic closeism. I believe that it’s possible for something that is not conscious to produce a punishment/ reward dynamic for a thusly created entity to experience. I am such an entity, and this will be the case even if my monistic metaphysics happens to be wrong. “Computation” occurs either way.


        1. Eric,
          On amphibians and reptiles, I think it was your remark that a non-conscious computer could produce the same outputs in a pre-conscious manifestation. But I must have misunderstood what you meant.

          On sentience, sorry, I could have worded my statement more carefully. I didn’t mean that sentience is output by imagination, I meant that it exists only as part of the imagination mechanism, as input from lower level survival circuits, used as valenced input for assessing the simulations. It actually sounds like we agree on this point.

          Where we disagree, I think, is in whether it is plausible that sentience ever existed independent of those imaginative simulations. To me, the feeling without the imaginative functionality is simply the lower level survival circuits, the reflexes. Imagination is what decouples the reflexes, separating stimulus from action, changing it to a propensity for action rather than a mechanism for automatic action.

          Put another way, to me, a feeling is simply the reaction of a reflex used as input into the simulation engine to decide whether that reflex should be allowed or inhibited.

          I think we agree about cognition being computation. I’m just not sure how constructive it is to regard consciousness as being some sort of separate computational system. I don’t think that’s an accurate picture. In truth, I think there is just cognition, with no distinction in the brain between the conscious and non-conscious variety.

          The distinction arises because our introspective mechanism only has access to some of that cognition. We label what it does have access to as “conscious” and come up with labels like “unconscious”, “subconscious”, “non-conscious” for everything else.

          Naturalistic dualism is a valid philosophical distinction. David Chalmers considers himself one. Frankly, I think any naturalist who considers consciousness as something that objectively exists is one, whether intentionally or unwittingly. (This is a lot of people.)

          For me, consciousness is an interpretation and exists only relative to that interpretation: https://selfawarepatterns.com/2019/01/27/consciousness-lies-in-the-eye-of-the-beholder/


      2. Mike,
        I’m pleased with our consistencies here! Especially that you consider imagination to use sentience rather than to produce it as I’d thought. And at least as good is that you consider consciousness/ cognition to perform “computation” as the term was originally used. This seems like a reasonably good mutual foundation from which to work.

        On disagreement about whether or not sentience has existed, or can exist, outside of the imaginative simulations framework, at least know that I try not to reference what ultimately exists (ontology), in favor of useful humanly defined terms such as “gravity” and “sentience” (epistemology). Neither of these “really” exist, though may be useful as constructs. And sure, in an evolutionary sense I don’t understand why the sentience construct would only have a potential to exist under a fully functional simulation engine. I have no idea how to answer “the hard problem of consciousness”, so I certainly won’t say you’re wrong about that. The following seems to be your conceptual framework from which to answer that question:

        “To me, the feeling without the imaginative functionality is simply the lower level survival circuits, the reflexes. Imagination is what decouples the reflexes, separating stimulus from action, changing it to a propensity for action rather than a mechanism for automatic action.

        “Put another way, to me, a feeling is simply the reaction of a reflex used as input into the simulation engine to decide whether that reflex should be allowed or inhibited.”

        The following statement seems very ontological:

        “In truth, I think there is just cognition, with no distinction in the brain between the conscious and non-conscious variety.”

        Okay, but if we take this over to epistemology, might there be a useful conscious/ non-conscious distinction between “I’m going to make a sandwich”, and “I’m going to make my heart beat faster now that I’ve become nervous”? So of course it can be useful to distinguish what we consciously do differently from what the brain does beyond us.

        On Chalmers, note that there is plenty of reason for him to want to be perceived as a naturalist. Science depends upon causality, and therefore the most distinguished people in academia should tend to be naturalists. Nevertheless the premise of pure causality invalidates his theory. Therefore he seems to have used his skills and credentials to join two opposing positions in order to remain somewhat under the title that he prefers. Essentially he’s got “good game”. If science and academia in general were in better shape than they are today however, then I don’t think that such a person would be able to plow through such personally useful contradictions.

        On your “Consciousness lies in the eye of the beholder” post, at that time I was in other discussions, but did keep my eye on it. Now that you’re posting so often it’s probably not best for me to engage in almost daily conversations with you. So whenever I have other good discussion partners (and I’m always looking) I do consider it healthy to lighten up over here. But since you’ve now mentioned it, yes let’s open that post up as well.

        On there being no evidence for dualism, that’s a bit too subjective a claim for my taste. You and I don’t see any good evidence for it, though many people do. I’m as strong a naturalist as you’ll ever meet, and yet still concede that causality might fail in the end. I can’t ever Know that my metaphysics is solid in this regard.

        Then as for evidence on our side, this generally seems subjective as well. The only quite good piece I know of is that Hogan twins story, or history’s only documented case of two people who can somewhat feel/ sense what the other does, and so overcome “the problem of other minds”. This supports naturalism of course because a shared thalamus seems responsible. We’d surely never hear the end of it if supernaturalists had such evidence on their side. Instead the story seems buried as a human interest freak show thing.

        To your point that when Bob ponders whether Alice is conscious, he’s basically thinking about how much Bob-ness she has, well it could go that way, or even commonly does go that way, but to me that’s not generally a useful definition for the “consciousness” term. Just as physics needed a useful definition for “force”, our mental and behavioral sciences will surely require a definition far more useful than “similar to us”.

        “All of which is to say, I think asking whether a system is conscious, as though consciousness is a quality it either possesses or doesn’t, is meaningless. Such a question is really about whether it has a soul, an inherently dualistic notion. Our judgment on this will come down to how much like us it is, how human it is.”

        Agreed. What you’ve done here is help demonstrate why defining consciousness as “what’s similar to the human” doesn’t give us a very useful idea to work with. And the Turing Test only reinforces this.

        I instead define consciousness as “sentience”. Something might be extremely different from us and yet be sentient. And if naturalism does hold, then it should even be possible to build one of our computers this way — different indeed! While that should pretty much end “ghost in the machine” speculation in science at least, I can’t say whether or not the human would ever get such a machine built. What we teleologically build seems many, many orders of magnitude below what evolution is able to build non-teleologically.


        1. Eric,
          “Okay, but if we take this over to epistemology, might there be a useful conscious/ non-conscious distinction between “I’m going to make a sandwich”, and “I’m going to make my heart beat faster now that I’ve become nervous”? ”

          Sure, but when we’re actually talking about consciousness, I think it matters why something falls on one side of that divide vs another. In your model, the contemplation of the sandwich happens in the tiny computer while the heart beat change is in the larger computer. In my understanding, they’re both just brain processes with nothing particularly special about the sandwich one, except that it’s accessible by the introspection mechanisms.

          “Nevertheless the premise of pure causality invalidates his theory.”
          I think you’re making assumptions about his ideas that might not be accurate. I don’t agree with his view, but the idea that there is something produced by the brain with its own operations seems, to me, to be similar to the idea of the brain producing a tiny computer. If those aren’t similar, what would you say is different about them?

          “Now that you’re posting so often it’s probably not best for me to engage in almost daily conversations with you.”

          No worries. You’re under no obligation to comment on any post. I just linked to it because it seemed relevant.

          “On there being no evidence for dualism, that’s a bit too subjective a claim for my taste”

          I find this an interesting statement. If I say there’s no evidence for the luminiferous aether, phlogiston, humors, or geocentrism, are those subjective statements? If not, what makes them more objective than my statement about no evidence existing for substance dualism? What would be necessary to make it more objective?

          “I instead define consciousness as “sentience”.”

          I actually think everything I wrote about consciousness also applies to sentience. Can you define sentience without using the word “feeling”? Or define “feeling”? I think no matter what you come up with, there will be people who dispute it. And whether another system is sentient will always be a matter of judgment, a judgment that ultimately depends on how much like us it is.

          If you can think of a way that either sentience or consciousness could exist objectively, aside from some form of substance dualism, I’m very interested to hear it.


      3. Mike you said:
        “In my understanding, they’re both just brain processes with nothing particularly special about the sandwich one, except that it’s accessible by the introspection mechanisms.”

        Apparently you’ve been under the impression that I don’t consider “the sandwich one” to be a product of “brain processes”. Well I do. And apparently each of us considers there to be something “special” about that. (I say this given that you’ve mentioned this as an exception.) What you’re calling “introspection mechanisms” seems to be what I’m calling “the conscious form of computer”. Furthermore apparently this is the very kind of machine which gave the “computer” term its name as early as the 1600s. So here I’m simply trying to revive the form of the term which was used at its inception. I’m pleased that you agree about the computational nature of consciousness, and I share your concern about implying a mechanism beyond the brain.

        Let’s consider a scenario. Countless inputs are processed in the brain for output (such as for heart function) which occur obliviously to introspection mechanisms (i.e. consciousness). But sometimes there will be brain output such as “toe pain” as well that should effectively be taken as input for conscious function. “Hurting” should motivate this entity to figure out why the pain exists so that something might be done to relieve it. Or this might even be a general lesson to not do the kinds of things which cause toe pain. Either way a neuron based computer will be producing a valence based one. Therefore in correspondence with our metaphysics, this is all entirely a product of brain processes.

        The pertinent presumption I’m making about Chalmers here, is that he’s a dualist. This is to say that he believes brain processes are not entirely a product of “causal dynamics of this world”. If you consider him differently however then I’d be happy to consider what you have to say about that.

        I agree that if I believed that there was something produced in the brain “with its own operation”, then yes I myself would be a dualist. I don’t believe that at all however. I merely believe that one type of causal computer (the brain) can and does produce another type of causal computer (consciousness).

        I’m going to revise my critique of your position that there is “no evidence for dualism”. Rather than say that this is too subjective a thing to state, I’ll say something that you’ll probably agree with. It’s that supernatural dynamics may be perceived, though upon further inspection such evidence doesn’t seem to hold up to scrutiny. Thus instead of saying that “no” evidence exists for dualism, I’m anally retentive enough to appreciate when a person says “no good” evidence exists. After all, this does just require one extra word.

        I personally am not fond of using the “feelings” term to convey the nature of sentience, though I certainly do need to use at least some terms to describe what I mean. Furthermore I depend upon mutual understandings for terms to communicate effectively. When I say that sentience concerns existing as something that feels anywhere from “horrible to wonderful”, I suspect that people generally understand what I mean. And when I define anything which displays sentience to also be “conscious” at that point in time, this should also be understandable. But who out there is able to understand what I mean, and honestly state that existence has never been positively or negatively valuable to them? Furthermore notice that gravity is not “like us”, though we can comprehend the idea of it. Sentience/ consciousness needn’t be defined as “that which is like us” for us to effectively use the idea either.

        I can’t prove that sentience/consciousness as I define the terms exist to people other than myself (that is, should anyone other than myself exist). But I can do so for myself. Furthermore if you or others exist and are conscious/ sentient as I define the terms, then I’d think that you and others could prove this to yourselves as well. Furthermore if we had a respected community of people with such a common understanding, then I presume that this community could develop various useful associated models. There will always be those who doubt the institution of science, though this is the institution that concerns me here.

        Instead of one person proving the existence of something to others, couldn’t we have rational people proving sensible things about themselves to themselves? If some refuse to say that existence can be horrible/ wonderful for themselves, and so refute my “proof” that sentience/ consciousness as I define the terms exist, then there isn’t much I can do in defense. But who shall do so? Your posts suggest that you for one are indeed sentient.

        By definition, all of reality must exist objectively. I however am a product of reality rather than “a god”. This is to say that while I can know with perfect certainty that I exist, everything that I believe will be subjective rather than objective. It’s the nature of the beast.

        One thing more Mike. Do you acknowledge that above you’ve provided your own answer to David Chalmers’ truly hard problem of consciousness? (And I’d say that you did so in a far more coherent way than F&M did, not that I think anyone today presents more than guesses.)


        1. Eric,

          “Apparently you’ve been under the impression that I don’t consider “the sandwich one” to be a product of “brain processes”.”

          I wasn’t under the impression that you saw the sandwich contemplation as completely separate from the brain. But your recent descriptions of the tiny computer as being “produced” by the brain seem to imply that it is a sort of emergent phenomenon, that it isn’t a physically identifiable component of the nervous system. Is that not an accurate understanding?

          “What you’re calling “introspection mechanisms” seems to be what I’m calling “the conscious form of computer”. ”

          Either I’ve lost track of what the conscious computer is supposed to be (a very real possibility) or you don’t understand what I mean by introspection mechanisms, because, based on your descriptions of the tiny computer, I don’t really see those concepts as the same. Introspection is the brain examining some of its processing in a metacognitive feedback mechanism. There are identifiable brain regions involved in it (anterior prefrontal cortex, anterior cingulate cortex, temporal-parietal junction, etc). But it, in and of itself, doesn’t feel good or bad, although it can reveal those feelings, and what it shows can certainly lead to other feelings.

          And as we’ve discussed before, what is commonly called primary consciousness doesn’t require introspection. The vast majority of animals don’t seem to have it. Neither do human babies. Older children appear to have a more limited version of it. It doesn’t seem to come into full bloom until after puberty. And the ability of even adult humans to do it seems to vary considerably.

          “I can’t prove that sentience/consciousness as I define the terms exist to people other than myself ”

          I think this is the key fact. It seems reasonable for each of us to assume the other has sentience/consciousness similar to our own since we’re both adult humans. But the further we move away from that category, the shakier the assumption becomes. Mammals seem like a better bet in this regard than birds, reptiles, fish, or insects.

          Do octopuses feel good or bad in any manner that we’d recognize if we could access their cognitive processes? Or, like colors, might their experience be so radically different from ours as to be incomprehensible to us? Their behavior seems to imply that they have something at least analogous to our own, but the architecture of their brains is radically different from ours.

          “One thing more Mike. Do you acknowledge that above you’ve provided your own answer to David Chalmers’ truly hard problem of consciousness? ”

          I actually perceive that I’ve provided numerous answers over the years, all of them, in my view, compatible with F&M’s descriptions of the subjective-objective divide, but I’m curious what in particular I said that strikes you as an answer.


      4. Mike, you said:
        “But your recent descriptions of the tiny computer as being “produced” by the brain seem to imply that it is a sort of emergent phenomenon, that it isn’t a physically identifiable component of the nervous system. Is that not an accurate understanding?”

        No, that sounds more like a “Chalmers” type of perspective. I’m instead proposing an entirely causal product of the brain for what’s generally associated with “consciousness”. What might be odd to you here is that I have absolutely no clue about how the brain actually gets this accomplished. I do believe I’m able to provide a useful account of the “what” of consciousness, as well as the “why”, though the “how” isn’t something I have any grasp of in an engineering capacity.

        I went back to an old discussion between us where I asked you what F&M propose as an answer for the hard problem of consciousness. Your reply (https://selfawarepatterns.com/2017/01/17/two-brain-science-podcasts-worth-checking-out/#comment-16080) mentioned something about how they thought that an answer would not seem like an answer. Quite prophetic, since I still have no idea what their answer happens to be! 🙂 Given all the talk about F&M here for over two years, with me still wondering what they propose, I suppose that’s why I’ve come to presume that they must have “hand waved” something or other. If they do have an answer, however, and you understand it, then could you try to explain it? (By the way, back then I believe we were under the impression that they thought insects weren’t conscious, or something else inconsistent with their “distance senses” theory, though I’ve since gathered that they do consider insects conscious at a primary level.)

        Regardless of their theory, you’ve provided such a framework of your own, five comments above. I don’t know if it’s more than a guess, but at least it’s understandable. There you said:

        Where we disagree, I think, is in whether it is plausible that sentience ever existed independent of those imaginative simulations. To me, the feeling without the imaginative functionality is simply the lower level survival circuits, the reflexes. Imagination is what decouples the reflexes, separating stimulus from action, changing it to a propensity for action rather than a mechanism for automatic action.

        Put another way, to me, a feeling is simply the reaction of a reflex used as input into the simulation engine to decide whether that reflex should be allowed or inhibited.

        If this is indeed your account of how phenomenal experience gets created, then I wonder if you’d like to go into more detail? What brings you to believe this? If you have enough convictions here I suppose that at some point a full post might be in order.

        On introspection mechanisms, that was definitely my own mix-up. And since the concept has been discussed here plenty, I really don’t have a good excuse for it. I suppose that “introspection” hasn’t resonated with me given that my own consciousness model exists at a far more basic level, back at primary consciousness. Furthermore my model provides no special category for the many manifestations of “thinking about thought”. Instead I simply address thought, though something with thought and a conceptual understanding of it theoretically could think about thought. The closest I get to something like metacognition is a second mode of conscious processing known as “natural language”. (Actually, as I recall F&M theorize the nature of primary consciousness just as I do, which might put their consciousness model closer to mine than to yours?)

        If an octopus or a fly happens to be sentient as I define the term, and thus has a valence-based form of function (or the “little computer” that might decide something), then I believe that if you or I were to experience what it does for a moment, from memory of that experience we’d later be able to say things like “that felt good” or “that felt bad”. This should be the case for existing as anything that’s sentient, and nothing that’s not. Does that sound right to you?


        1. Eric,
          On F&M and the hard problem, I’m not sure how much else I can add to the explanation you linked to. Maybe the best thing I can do is let them speak for themselves.

          This subjective-objective divide is expressed in the idea of auto-ontological and allo-ontological irreducibilities (figure 10.9). Auto-ontological irreducibility means that the subject cannot experience the workings of his or her own neurons, and allo-ontological irreducibility means that an outsider cannot access the subject’s experiences.

          Let’s consider auto-ontological irreducibility first. This conundrum stems from the fact that the neural processes that create sensory consciousness refer all feeling states away from the brain itself to something else. This forms a gap between being a brain that is in a feeling state and observing or examining a brain in that feeling state.

          Allo-ontological irreducibility is the opposite of the auto-ontological. Just as the subject lacks objective access to his or her brain that generates experiences, the outside observer lacks access to these experiences as they are felt by the subject (see “4” in figure 10.9). Only the subject has that access. In summary, from the outside viewpoint the brain is observable but not the experience, whereas from the inside the experience is observable but not how the brain constructs that experience.

          Feinberg, Todd E., and Jon Mallatt. The Ancient Origins of Consciousness: How the Brain Created Experience. The MIT Press, Kindle Edition.

          To give you more I’d have to dump that entire section of the book out, which would probably not fall within the “fair use” of copyrighted material. But hopefully that gives a clearer picture?

          “If you have enough convictions here I suppose that at some point a full post might be in order.”

          Thanks for clarifying which bit you were discussing. I actually have done a post on this.
          https://selfawarepatterns.com/2018/10/28/the-construction-of-feelings/
          Although I’ll almost certainly do more in the future since this seems to be a point of contention for many people.

          On the Octopus, I suspect we wouldn’t be able to even interpret anything we got from them. To “experience” their feelings, they would have to first be translated into the primate/mammalian equivalent. But such a translation would inevitably be an interpretation of the original, an interpretation that would have to make assumptions about what was equivalent, assumptions that would render our access to their experience a charade, a lie.

          In short, I don’t think we could ever do it, not even in principle.


      5. Mike,
        On F&M, this does seem to be the same answer that you provided before. It’s just that I didn’t consider it to address the question as I understood that question. So no problem, surely I must have had the question wrong. A simple Google search, however, brought up this:

        The hard problem of consciousness is the problem of explaining how and why sentient organisms have qualia or phenomenal experiences—how and why it is that some internal states are felt states, such as heat or pain, rather than unfelt states, as in a thermostat or a toaster.

        Yeah, that was my impression as well (though in truth I consider the “why” question to be quite easy, since I believe that I’ve developed a very effective answer for it). I do agree with what F&M said — I can only ever experience what I experience rather than the function of my neurons, and no one else can ever experience what I experience. But I don’t understand how that addresses what needs to occur in order for “feeling” itself to actually happen. A true answer, as I see it, would be in the form of “Build a robot [this way], and it will thus become sentient”. And that’s indeed the sort of thing that I consider you to have proposed above.

        On your “Construction of feelings” post, yes that was a good one. I suppose that without a formal declaration of “Here is my own account of the [truly] hard problem of consciousness”, it didn’t quite sink in how bold you were being. Well okay then, yes, we’ll continue assessing your [truly] hard problem answer, just as you do.

        On feeling what an octopus or fly feels, I should have been more clear that I wasn’t talking about something possible. This was instead nothing more than a conceptual thought experiment. If by means of magic (and of course you and I don’t consider magic possible) you could exist as another conscious entity, then afterwards, if the experience registered in your memory somewhat, I’d think you could say that it felt “good”, “bad”, or whatever. Obviously the more non-human the entity, the stranger we’d expect such an experience to be. Do you agree?


        1. Eric,
          On F&M and the hard problem, I’m afraid I’ve confused things. Sorry, my bad.

          I think the discrepancy here is that F&M are addressing in this sequence why we perceive that there is a hard problem. Their overall solution to the hard problem would be the framework they describe involving exteroceptive, interoceptive, and affective consciousness, which they spend much of their book discussing and building a case for.

          The case I make in my post on feelings is largely inspired by their analyses, although I think F&M make a mistake by not having a more thorough discussion of how affects and motor systems interrelate. F&M treat an affect as a sort of sensory perception, but I think that’s wrong. It’s communication from the reflexive parts of the brain to the reasoning parts, to allow the reasoning parts a chance to intervene in action selection.

          On the octopus and using a magic means to have its experience, okay, I’ll accept that for discussion. But suppose I take this magic mechanism and try to have the experience of my laptop? You might say that’s meaningless since my laptop doesn’t have experiences. Suppose then I point the magic mechanism at a self driving car, which arguably has a form of exteroceptive perception. Or the Google DeepMind system, some versions of which include an incipient form of imagination. Would I perceive any experiences there?

          Or do we say that experience can only come from a system using carbon-based compounds arranged in a nervous system? So I successively point it at a C. elegans worm, an amphioxus, a garden snail, a crab, an ant, a fly, a bee, and so on up the chain of complexity ( https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons ) until I get to an octopus.

          At what point do I start to receive good and bad experiences? Are there in-between states between non-experience and experience? What if the experiences of invertebrates are as alien as the experience of DeepMind?


      6. Thanks for that clarification, Mike. So I guess, since they spend so much of their book discussing and building a case for their hard problem answer, I shouldn’t expect a concise account of it? Well okay. Of course that does nothing to temper my previously expressed skepticism that they have anything useful to say about the matter.

        It’s interesting to me that you consider it wrong of them to treat affect as sensory perception. If one happens to be in “pain” for example, couldn’t it be useful to assess this as a sensory perception? Or an input for the conscious entity to deal with? That’s certainly how I see it so I can’t fault them for this. Furthermore it seems to me that your provided answer could be interpreted this way as well. You said “It’s communication from the reflexive parts of the brain to the reasoning parts, to allow the reasoning parts a chance to intervene in action selection”. Isn’t that about like the non-conscious brain outputting pain for the conscious part to potentially deal with as input somehow?

        On the magic device thought experiment from which to experience what something else does, let’s see if we can spice this up further. I’d like you to be able to predict my response, and me yours, so that “working level” rather than simply “lecture level” understandings could be demonstrated. So here’s my impression of what you’d say about this:

        A standard laptop computer? Nothing. A self driving car? I suspect you’d say that such function does produce a faint touch of phenomenal experience, though less than a DeepMind system with incipient imagination. Only carbon-based systems with neurons? I don’t think you have reason to restrict phenomenal experience to only this sort of thing. I doubt you’d think the C. elegans worm or amphioxus feels much of anything given their neural simplicity. But from the garden snail and higher I suspect that you consider at least trace levels of phenomenal experience to exist, though on a sliding scale. Here more advanced life is more conscious than less advanced life, and this should reflect how extensively existence can be experienced.

        I’ll now provide my own account, but first try to imagine what I’d say:

        Existing as a laptop computer, or indeed anything that we build, should feel the same as it feels to be under perfect anesthesia. This is to say that nothing should be felt at all. Still, I do believe that it would be possible for a humanly fabricated computer to output phenomenal experience for a conscious entity to experience. My great respect for what evolution creates, compared against what we do, however, leads me to somewhat doubt that we’ll ever knowingly be able to build such a thing.

        On basic forms of life, a central organism processor would be a minimum, and thus plants and fungi should have no phenomenal experience. Furthermore I suspect that the C. elegans worm and amphioxus lack the conscious form of function. But from maybe garden snails up, my suspicion is that the second form of function exists in some capacity, and thus there is “something it is like” to exist in this way, or sentience. And though a given conscious entity may be quite primitive, this will not mean it’s “less conscious”. As I define the term, consciousness exists with its level of valence at any given moment. So if you could feel the pain that a fly feels, for example, it might be amazingly horrible, while existing as yourself at a given moment might not feel like much at all.


        1. Eric,
          On F&M’s attempt to answer the hard problem, the most concise description I can provide is the series of posts I did on their book a few years ago, particularly this post: https://selfawarepatterns.com/2016/09/16/types-of-sensory-consciousness/

          On affects and sensory perception, I think we have to be careful not to conflate interoception with affects. Interoception, such as a feeling of emptiness in the stomach, is a perception. But hunger, the affect, is a valenced interpretation of the perception of emptiness.

          Pain is a special case since it has dedicated pathways in the peripheral nervous system. Nociception is, I think, only interpreted negatively. Still, for it to be pain, it has to be interpreted. If the connection between the insula cortex and the anterior cingulate cortex is severed, a person reportedly doesn’t feel pain.

          Crucially, this interpretation happens in the motor and premotor systems, not the sensory ones. So it’s not information from the senses to the reasoning center, but from the reflexive motor centers to the action planning ones, all on the action side of the brain.

          On the magic device, your response more or less matches what I would have thought, but I think mine is going to surprise you. If we have a magic device that translates the information processing of whatever systems we’re examining into the equivalent information processing of our system, then I think we get something in every case.

          Of course, what we get from the laptop is utterly alien and incomprehensibly limited, but I think we do get some kind of perceptions from the self driving car, just without any feeling or sense of self attached to them. I’m not sure what we’d get from DeepMind, but it would remain very alien.

          So how would that compare to the invertebrates? Remember that we and the invertebrates diverged from each other before the evolution of brains, or even central nervous systems. So they use neurons, but the circuitry would be utterly different from ours. Since our magic device is translating, we do get something, although like the laptop, the feed from a C. elegans would be incomprehensibly limited. I agree that feelings don’t come in until at least the garden snail stage, and possibly not until higher. But due to how long ago our evolutionary lines diverged, I suspect we’re still talking about something very alien.

          It’s not even clear that we’d get any kind of unified experience from the octopus. It appears to delegate a lot more to its peripheral ganglia than we do to our peripheral nervous system. Is it like our delegation of movement to the cerebellum, or does it delegate actual cognition? How does our device translate that? Do we receive separate feeds for the ganglia in each of its arms?

          We may have a similar conundrum with ants. I read not too long ago that their nervous system is arranged in a series of independent circuit clusters which do not appear to integrate with each other. How does that translate into our own integrated awareness? Does it translate? What does our magic device have to do, just at a level of deciding what to include or not include, to make that translation successful?


      7. Mike,
        Apparently there was a bit of a mix-up with that thought experiment. Even I might answer as you did if the magic were set up to translate the function of something else into “human”. (I say “might” given the potential that it’s translated over to the non-conscious side entirely.) So let me try to be more clear.

        I don’t believe that any part of my body, or even my whole body, experiences any valences. I do however believe that my body somehow creates something which does experience valences, or “me”. (If you believe that any of your body parts experience valence, then just go with that as “you”.) Furthermore I don’t believe that any of our computers produce anything that experiences anything in a phenomenal capacity. Therefore if I were to magically feel what a human made computer feels, I should feel nothing. This is to say that the magic would render me not conscious. Conversely if I were to magically feel what you do, then you should know exactly what I’d be feeling.

        So from this perspective, of being able to feel what something else feels if it feels anything, I was speculating that you believe some of our technological systems have phenomenal experiences of a sort, as does life that harbors central organism processors, to the degree of their complexity. Therefore this magic would impart what they feel over to you. Correct me if I’m wrong about this.

        I agree that we need to try to separate senses from affects. Notice that “smell” is commonly taken as something to classify in a single category. I instead call it “affect” to the extent that good/bad feelings are felt, and “sense” to the extent that information is provided, such as what the type of thing the smell is associated with. So yes an “empty” stomach feeling would be sense, and “hunger” would be valence to the extent it feels bad, as well as sense to the extent that it provides information.

        I’d wondered if pain wasn’t special. This leads me to suspect that affect consciousness began with it.

        On F&M, I’ve gone through your posts again as well as Jon Mallatt’s Brain Science interview. I do like that they look for basic consciousness in more primitive life, or “the ox cart” rather than “the Enterprise” starship. That’s my approach as well.

        I guess my problem with them is that they didn’t leave well enough alone with just affect consciousness. This is to say that they decided to interpret image maps as “inner worlds”, or to add interoceptive and exteroceptive forms of consciousness. If that’s the case then there shouldn’t be anything “hard” about consciousness in this regard. That goes along with the “there was never a first consciousness but only more and less consciousness” perception of how you see the matter. Then I guess with these forms of consciousness so easy (not that they say it’s easy, and Jon certainly didn’t associate it with an autonomous car in the interview), affect should become easy as well. So I guess that’s how they effectively answer the hard problem of consciousness. People like myself, who believe there is something quite amazing about phenomenal experience, will not be convinced.


        1. Eric,
          I don’t know that there was a mix-up. I think we just have different understandings of consciousness. You see it as something separate and apart from all the information processing that goes on in the brain (the tiny computer vs the larger one). So in your view, if technological computers don’t generate their own tiny computer, there’s no consciousness there to translate from.

          I don’t see it that way. In my view, what we call “consciousness” is an amorphous package composed of exteroceptive, interoceptive, affective, metacognitive, and other cognitive processes. There’s no sharp distinction in the brain between what is or isn’t conscious. This is true to the extent that we can often consciously retrieve memories that may originally have been acquired below the level of consciousness.

          So if we have a robot, such as a self driving car, that builds exteroceptive models of the environment, then we have a vehicle that has part of the package. Of course, it doesn’t have the whole package, certainly not enough to trigger our intuition of a fellow conscious entity. But if our magic device can translate, we still receive exteroceptive information. This information, built from lidar, radar, GPS, and cameras, remains unimaginably alien, and comes without any feelings. From our perspective, it would be a starkly numb empty kind of experience. But if our device is translating, we’d get something.

          All of which is to say, I disagree with your critique of F&M’s breakdown of consciousness into exteroceptive, interoceptive, and affective processing. As I’ve mentioned before, I do think calling each of these “consciousness” in and of themselves may be a bit provocative, but I have no problem seeing them as components of what we mean by “consciousness.”

          I do have a lot of issues with their evaluation of affective consciousness (sentience). I think they fail to make an adequate distinction between reflexive survival circuits and actual feeling ones. And they largely seem to miss why affects are affects, that is, what distinguishes them from complex survival circuits.

          “I’d wondered if pain wasn’t special. This leads me to suspect that affect consciousness began with it.”

          This is a common sentiment. It was also one F&M held when they first started investigating. But the evidence for it is more limited than you might expect. Fish only seem to have the nociceptive fibers associated with sharp pain, not the kind associated with long burning pain, the kind that leads to suffering. Teleost fish do have some c-fibers (the burning pain variety), but at only about 5% of the axons in the nerves, which is far below the proportion (about 25%) found in people with an abnormally low number of these c-fibers, people who are so insensitive to pain that they are in constant danger of hurting themselves without realizing it. In other words, pain as we understand it may be a relatively late development.


      8. Mike,
        You don’t seem to be backing down from the picture that I’ve painted above. Fair enough. Here there isn’t anything particularly special about being conscious as far as I can tell, other than having a “computational” nature, which doesn’t actually seem very special since even the idiot human is able to build such machines. So the more involved that a given computer happens to be, the more conscious it should tend to be as I understand it. Conversely, to me it’s more effective to consider consciousness as a purpose-driven (or teleological) form of computer that’s produced and facilitated by a non-conscious form of computer, which thus lacks an inner purpose.

        If they didn’t mention that any of our technological systems qualify for “exteroceptive consciousness”, then I doubt that F&M would endorse your position. This one could be yours, however. I think you’ll find that the “singularity” folks will be most pleased by it, which may be disconcerting given that this includes people like the zany futurist Ray Kurzweil. Furthermore, if there is no “hard” distinction between something that is and isn’t conscious, you seem to slide far closer to the panpsychist side of things than you may be comfortable with.

        Then as for my own “two computers” model, lately you’ve been implying dualism here, though it’s a difficult association to make stick given how strong a naturalist I happen to be. What is “naturalistic dualism”? Similar to “upistic downism” or “bigistic littleism” I suppose, or any other self-contradicting pair of terms. Regardless, most in science (and so beyond the small contingent of dualists) seem to think that there is a pretty “hard problem” associated with creating something sentient, which is to say, creating something that it’s like to be. So in this regard you seem to have a steeper hill to climb than I do.

        Regardless, may the most functional of our theories prevail in the end, and may each of us remain objective enough to acknowledge where competing theory also happens to be superior theory.


        1. Eric,

          “So the more involved that a given computer happens to be, the more conscious it should tend to be as I understand it.”

          That’s too general in my view. To trigger our intuition of consciousness, a computer has to process information similarly to the way we do. But regardless of whether it triggers that intuition, it can have individual capabilities, such as exteroception, that are similar to ours.

          “If they didn’t mention that any of our technological systems qualify for “exteroceptive consciousness”, then I doubt that F&M would endorse your position.”

          Actually, based on comments they make in Consciousness Demystified, I’m certain that they would not endorse it. They see consciousness as inextricably linked to biology. And I think I’ve mentioned before that I think they give far too much credence to the idea of an objective consciousness. I think their analysis is weakest when they’re trying to explain that objective existence, particularly when it doesn’t line up with the appearance of consciousness in creatures that don’t have the traits they identify with that objective existence, such as invertebrates.

          On giving comfort to zany futurists, I would hope I wouldn’t let whether my conclusions line up with particular people’s views have any influence on those conclusions. If a conclusion I had lined up with, say, Deepak Chopra’s, it might give me pause, but I hope I’d stick to my guns if I had good reasons for that conclusion.

          I’ve said before that naturalistic panpsychism isn’t outright wrong. Panpsychists are right that there’s no sharp distinction between conscious and non-conscious systems. But that doesn’t make their outlook a productive one. It’s like saying that because there’s no sharp distinction between the winds of a strong tropical depression and a storm, a mild breeze is therefore a storm. An outlook that calls a system “conscious” when it has no discernible perceptions, attention, memory, or emotions is not one I find interesting.

          On dualism, I actually wrote a paragraph about that in my previous reply but deleted it because I wasn’t sure how you’d respond if I mentioned it again. But when you say that no part of the body, including the body as a whole, has valences, I do find the sense of dualism to be pretty strong.

          I should note that I think anyone who posits an objective consciousness is a dualist of one sort or another. It might not be Cartesian substance dualism. It could be a form of purely physical dualism, such as the people who point to a particular spot in the brain and say it is conscious. Or it could be a type of emergent dualism, which is where I perceive you to be.

          If it makes you feel any better, as someone who accepts the possibility of mind copying, I’ve been accused of, and accepted, a certain form of software / hardware dualism.
          https://selfawarepatterns.com/2014/05/26/the-dualism-of-mind-uploading/


      1. I’m not quite sure what maneuver you have in mind, but the ability of humans to use arithmetic algorithms does not in any way suggest or hint at “consciousness as a computer.” As I have said, I find the idea without merit.

        Keep in mind people need to learn math, and many claim they are incapable of it. Few have any real facility with it. It’s a learned skill. Those “calculator” people were highly trained.


