Consciousness and intelligence

The other day, when discussing Mark Solms’ book, I noted that he is working to create an artificial consciousness, but emphasizes that he isn’t aiming for intelligence, just the conscious part, as though consciousness and intelligence are unrelated. This seems to fit with his affect-centered theory of consciousness, and it matches a lot of people’s intuition.

But I’ve often wondered why this intuition is so prevalent. I suspect it comes from the impressions we have about our own experience. Subjective phenomenal experience seems like something that just happens to us. We don’t appear to have to work for it. On the other hand, doing things that are typically considered intelligent, such as working with mathematical equations, handling complex navigation, or dealing with complex social situations, typically requires effort, notably conscious effort, at least until we learn them well enough for them to “be natural”.

From this, it seems easy to reach the conclusion that these are very different things. It doesn’t help that the definitions for both concepts are controversial. I’ve talked about the definitional issues for consciousness many times. Intelligence faces similar issues, although its definitions don’t seem to vary across nearly as wide a range as the consciousness ones. It’s worth noting that some of the definitions in its Wikipedia article overlap with particular definitions of consciousness. But intelligence is usually regarded as something that can exist to highly varying degrees, including far below human level intelligence.

The problem is that the impression of separate phenomena is based on introspection. And as I’ve discussed before, introspection, while adaptive and effective enough for many day-to-day purposes, isn’t a reliable source of information about the architecture of the mind. There is extensive psychological evidence showing that our knowledge of our own mind is limited.

We don’t have access to the vast amount of unconscious and preconscious processing that happens in the brain. For example, the firing patterns in the early visual cortex are topographically mapped from the retina. The retina has a small central area of high visual acuity (resolution), the fovea, but that acuity falls off dramatically toward the periphery of the retina, as does the number of color receptors. And each retina has a blind spot, with no receptors at all, where the optic nerve exits. On top of that, the eyes are constantly moving reflexively.

But we don’t perceive the world through a constantly shifting acuity tunnel with color only at the center. Our visual system does an extensive amount of work to produce the experience of a stable colorful field of vision. And that’s before we get into detecting things in motion as well as object categorization and discrimination. When we recognize something like a red apple, it seems like something that happens effortlessly to us. In reality, our nervous system has extensive layers of functionality, functionality hidden from introspective awareness. A lot of processing has to take place for us to recognize that apple.

This is also true for the initial quick evaluations the nervous system makes about the results of all that perceptual processing, the initial evaluations we usually call “affects” or “emotional feelings.” Again, all of this leads to an impression that sentience simply happens to us, rather than being something our brain puts in a lot of work to produce.

Artificial neural networks that recognize images or make evaluations are considered artificial intelligence, which implies that the natural versions are also a form of intelligence. In other words, conscious experience is built on intelligence. Most of it is unconscious intelligence, but intelligence just the same.
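
To make the layered-processing point concrete, here’s a toy sketch of the kind of network involved (purely illustrative: the weights are random rather than learned, and the “image” and labels are made up):

```python
import numpy as np

# A toy feedforward network. A stand-in "image" passes through hidden
# layers before any recognition emerges. In a trained recognizer the
# weights would be learned from many examples; here they're random, so
# the output is meaningless. The point is the structure: layers of work
# that never appear in the final label.

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    """One layer: weighted sums followed by a nonlinearity."""
    weights = rng.normal(size=(inputs.size, n_out))
    return np.maximum(0, inputs @ weights)  # ReLU activation

image = rng.random(64)      # stand-in for an 8x8 patch of retinal input
h1 = layer(image, 32)       # early features (edges, contrasts)
h2 = layer(h1, 16)          # intermediate features (shapes, regions)
scores = layer(h2, 3)       # evidence for each candidate category

labels = ["apple", "leaf", "background"]
print("recognized:", labels[int(np.argmax(scores))])
```

None of the intermediate activity in h1 or h2 shows up in the output, much as the layers of our own perceptual processing are hidden from introspection.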

That doesn’t mean that consciousness and intelligence are equivalent. There are plenty of systems that meet many definitions of intelligence but show no signs of meeting typical definitions of consciousness. Often intelligent technological systems involve encyclopedic information, not the world or self models more likely to trigger our intuition of a conscious being. And plants and simple organisms like slime mold often exhibit intelligent behavior that few regard as conscious.

But it does imply that consciousness is a type of intelligence, one that’s built on lower levels of intelligence and enables more complex forms of it. As most commonly understood, it seems to be a form of intelligence involving the use of predictive models of the environment, the self, and the relationship between the two.

This is a conclusion many seem loath to accept. Insisting that consciousness is something separate and apart from functional intelligence inevitably makes it more mysterious, more difficult to explain. It’s notable that in the 1995 paper in which he coined “the hard problem of consciousness”, David Chalmers explicitly noted the functional areas of intelligent processing, labeling them “the easy problems”, but then declared that they weren’t what he was talking about, in essence excluding them as possible answers to the problem he was identifying.

The result is a phenomenon many seem to think science can’t solve. These are some of the factors that I think lead to outlooks like panpsychism, biopsychism, or property dualism, as well as appeals to theories involving quantum physics, electrical fields, or other highly speculative notions. But if consciousness is just a type of intelligence, then studying its mechanisms is possible, and everything we need should be in the mainstream cognitive sciences, including cognitive neuroscience.

What do you think? Are there reasons I’m overlooking to regard consciousness and intelligence as separate phenomena? If so, what distinguishes them from each other? What properties does consciousness possess that intelligence lacks, or vice-versa?


185 thoughts on “Consciousness and intelligence”

  1. Yes, I see consciousness and intelligence as related. But the relation isn’t all that obvious.

    Intelligence has been misunderstood since forever. It is still misunderstood.

    Somebody solves a problem using logic. And people decide to equate intelligence with logic. That’s a huge mistake. Logic itself is just mechanical rule following, as we see with our logic machines. There really isn’t anything intelligent about that. It is just mechanics.

    Somebody sees a problem. So he comes up with a logical model for the problem, and then solves that with logic. But the intelligence is not in the logic. The intelligence is in the ability to construct useful models.


    1. Interesting point about intelligence. Although I wonder if it’s actually a distinction between a narrow and more general intelligence.

      If intelligence is the ability to construct useful models, what is happening during that construction? What’s needed beyond value and logic? Creativity? But what is creativity other than the use of unexpected logical models? (“Unexpected” meaning it’s not in the observer’s current model of solutions.)

      Consider when I use Google Maps on my phone. I’m essentially giving it my destination and letting it figure out the optimal path to that destination, taking into account known construction, traffic issues, etc. Often, when I use it for a destination I already know how to get to, I’m surprised by the shortcuts it comes up with. (Although sometimes I’m also underwhelmed by its blind spots.)

      The question is, when I figure out a path to a destination, am I doing something substantially different than Google Maps? Certainly I may take into account additional factors (and my own blind spots), but beyond that, what am I bringing that the navigation system lacks?
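
      For what it’s worth, at its core the navigation system is doing something like a weighted shortest-path search. Here’s a minimal sketch (the road graph and travel times are invented; real systems layer enormous amounts of data on top of this):

      ```python
      import heapq

      def shortest_path(graph, start, goal):
          """Dijkstra's algorithm over edge weights (travel minutes)."""
          queue = [(0, start, [start])]
          visited = set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == goal:
                  return cost, path
              if node in visited:
                  continue
              visited.add(node)
              for neighbor, minutes in graph.get(node, {}).items():
                  if neighbor not in visited:
                      heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
          return None

      # Invented road graph; the weights could reflect current traffic.
      roads = {
          "home":    {"main_st": 5, "back_rd": 9},
          "main_st": {"downtown": 12, "back_rd": 3},
          "back_rd": {"downtown": 7},
      }
      print(shortest_path(roads, "home", "downtown"))
      # (15, ['home', 'main_st', 'back_rd', 'downtown']) -- the back-road
      # shortcut beats the direct route, like those surprising suggestions.
      ```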


      1. What’s needed beyond value and logic?

        Perception, experience, pragmatics.

        Our models are derived from our understanding of the world around us. Computer-based attempts at perception are quite poor, and computers don’t really do pragmatics, though they may attempt to emulate it.

        Your Google Maps example is interesting. My wife recently printed out the map to a destination a few miles away. The suggested route required turning at a difficult-to-find intersection. So I just went by my more normal driving instincts instead. In retrospect, the Google route would have been shorter, but probably slower.

        Yes, I think you are doing something different from Google. Presumably, Google is using metrics. Perhaps you are too, but your metrics are likely biased toward what is familiar to you.


        1. The question is what’s involved in perception, experience, and pragmatics aside from value and logic.

          The story you describe resembles my experience in the early days of navigation systems. I remember my Garmin doing things like that to me. But in recent years, I’ve found those kinds of occurrences increasingly rare, although they definitely still happen. Of course, I sometimes make a mess of things when I just do it myself.


          1. What’s mostly involved in perception is information. And I’ll get back to that shortly.

            Yes, pragmatics depends on value. Our biology gives us a natural value system, which we call “emotion” and we often decry it. That’s a mistake. My current assumption is that emotions are biological, and are the way that our biology tells us about the state of our life support systems — but, of course, that’s surely an oversimplification. In any case, emotions provide us with feedback from our biology.

            Back to information. We make a huge mistake by assuming that information is a natural kind. It is all human-constructed. And sure, other biological organisms are constructing information too. If you want to understand consciousness, then start by trying to understand perception. And if you want to understand perception, then look to how we construct information.

            To construct information, we categorize the world (divide it into categories). Information amounts to information about categories. And the more finely we categorize the world, the more detailed the information that we can have.

            Newton’s laws worked so well because of the way that he reconceptualized motion. With the Newtonian conception, we could now have friction as a force, air resistance as a force. These are new categories which weren’t available to Aristotle. And those new categories gave us access to new information and better predictions.


          2. I agree that information is a crucial part of all this. But I think it is a natural kind. Talk to any physicist and you’ll hear a story about physical information, which is everywhere. (The black hole information paradox is meaningless without it.) The key thing is, any physical information has the potential to become conscious or useful information, which is what I think you’re using “information” to refer to. But the physical information exists before we make it useful info and can continue to exist after we cease using it.

            Emotions are biological, but they’re also something our brain constructs based on what it senses in that biology, an automatic evaluation based on what is being sensed about the system’s homeostatic state, which we only know as conscious feelings. It’s all information processing, although it doesn’t seem like that from the inside.


          3. What physicists mean by “information” is not the same as what most people mean. Physicists are typically talking about the transmission of signals. Most people are concerned with semantics rather than with raw signals.


  2. While reading your explanation and the examples you presented, I was struck by the ambiguity of the word intelligence.

    Much of what appears to be ascribed to intelligence might be more akin to the label “awareness”. AI, currently, is not intelligent. Perhaps we should be calling it AA, artificial awareness. Linking awareness to behavioral response might look like intelligence, but it’s not, right? Slime mold might react given some biological awareness it possesses, but it’s not “smart”. I would think that to apply the word intelligent, some intentional decision making would be involved. Choices to be made.

    Maybe there’s a hierarchy here:
    • Awareness
    • Sentience
    • Intelligence
    • Consciousness
    and an overlapping spectrum within each level, each one blending into the next?


    1. There is definitely a lot of ambiguity attached to “intelligence”. When we use the word, it’s often context specific. Is a bee intelligent? The question only makes sense in comparison to something else. Compared to a dog? No. But compared to an ant it’s a super mind. On the other hand, “intelligence” seems less ambiguous than “consciousness”; I don’t see anyone claiming a rock or proton has intelligence.

      I like the way you’re thinking with the hierarchy! Unfortunately the other terms are all pretty ambiguous themselves. It makes having productive conversations challenging. We always have to be on guard that we’re not talking past each other with different definitions. (I’m convinced most philosophical disputes are definition disputes.)

      For example, does it make sense to say a self-driving car that brakes because of a pedestrian in the road is “aware” of that pedestrian? Or to say, in the Tesla crash that killed a driver, that the car was “not aware” of the truck it slammed into?

      Sentience we could at least say is awareness of an initial primal evaluation. But many people would say that’s also consciousness. And quite a few would insist it’s more primal than awareness. 😛


        1. Thanks, I have. And a number of people have pointed it out to me. There are some fans of EM theories here, but I’m not enthusiastic. Strictly speaking they’re not impossible, but they’re just disconnected from where most of the data in neuroscience is pointing.

          Anil Seth did a brief Twitter thread on them a while back.


        2. Hey Anonymole, it sounds like you may be interested in McFadden’s proposal! I certainly am. Note that his is falsifiable, while the standard sort of theory favored by prominent people like Anil Seth (which you might notice always depends upon certain information being processed into other information) is not. So while these people must stomach all sorts of funky thought experiments from the likes of Searle, Block, Schwitzgebel, and me, the fate of McFadden’s theory will instead depend upon scientific experimentation.

          What sort of experiment would support or refute his theory? I emailed McFadden a proposal on Saturday, and now that it’s Monday I’m hoping that he’ll get back to me. But who knows since he’s a prominent person while I’m just a fan. Anyway here’s what I sent:

          Hello again professor,

          Lately I’ve been thinking about how to practically test your cemi, and well beyond how it’s already observed that more synchronous neuron firing tends to occur when a person “recognizes” something in their field of view, such as their glasses on a cluttered desk. It seems to me that the following approach should be relatively irrefutable, though let me know if I’m not thinking about this properly:

          If either a wired or wireless transmitter were implanted in the head, could we not simulate the synchronous neuron firing associated with standard neuron function, and so, from your theory, affect that person’s consciousness if an applicable frequency were achieved? I’d think that a compensated test subject who is fully aware of your theory would sit down with researchers and tell them if he/she were to notice anything strange while various combinations of simulated neuron firing were tried, of course with various physiological forms of monitoring as well.

          Regarding the matter of highly invasive surgery to get this done, I’d think that many people who are already having brain surgery wouldn’t mind being compensated for having an otherwise benign transmitter left in their heads before they’re patched up. If wired perhaps these would be like hairs? Then subjects would be further compensated for lab study.

          There are so many ridiculous and unfalsifiable consciousness theories out there that I think it would be refreshing to finally have one theory be validated or refuted. Surely if validated this would be on the order of what Newton or Einstein achieved. That’s why I’m writing. Given what you’ve set up over the past couple of decades, could it really come down to what I currently perceive as a relatively simple experiment?

          Eric
          ….


          1. Two thoughts: 1) Running interference on a signal source should be doable from outside the brain. If true, then one might believe we could “turn off” consciousness this way.
            2) As I lay in bed, my internal monologue running rampant, I thought, is there an EM field being generated that’s allowing me to have these thoughts?


          2. Actually Anonymole, one of the major charges against McFadden’s cemi is that if it were true, then consciousness should be altered by all sorts of exogenous EM fields, though this isn’t observed. When the associated physics happens to be calculated out, however, this rebuttal may be dismissed. He addresses this in his 2002 paper entitled “Synchronous Firing and Its Influence on the Brain’s Electromagnetic Field”:

            Prediction 6. The high conductivity of the cerebral fluid and fluid within the brain ventricles creates an effective ‘Faraday cage’ that insulates the brain from most natural exogenous electric fields. A constant external electric field will thereby induce almost no field at all in the brain (Adair, 1991). Alternating currents from technological devices (power lines, mobile phones, etc.) will generate an alternating induced field, but its magnitude will be very weak. For example, a 60 Hz electrical field of 1000 V/m (typical of a powerline) will generate a tissue field of only 40 μV/m inside the head (Adair, 1991), clearly much weaker than either the endogenous em field or the field caused by thermal noise in cell membranes. Magnetic fields do penetrate tissue much more readily than electric fields but most naturally encountered magnetic fields, and also those experienced during nuclear magnetic resonance (NMR) scanning, are static (changing only the direction of moving charges) and are thereby unlikely to have physiological effects. Changing magnetic fields will penetrate the skull and induce electric currents in the brain. However, there is abundant evidence (from, e.g., TMS studies as outlined above) that these do modify brain activity. Indeed, repetitive TMS is subject to strict safety guidelines to prevent inducing seizures in normal subjects (Hallett, 2000) through field effects.
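
            Just to spell out the attenuation Adair’s numbers imply (my arithmetic, not from the paper):

            ```latex
            \frac{E_{\text{outside}}}{E_{\text{inside}}}
              = \frac{1000\ \text{V/m}}{40\ \mu\text{V/m}}
              = \frac{10^{3}}{4 \times 10^{-5}}
              = 2.5 \times 10^{7}
            ```

            So the powerline field arrives at brain tissue reduced by a factor of about twenty-five million, which is why exogenous fields of that sort shouldn’t be expected to alter consciousness under the theory.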

            My take is that the brain seems to be relatively insulated from standard EM radiation, whereas the static nature of the MRI variety leaves that potential relatively benign, and then the non-static fields associated with Transcranial Magnetic Stimulation will of course have physiological effects, since here electric current becomes induced in specific parts of the brain. I guess it’s used therapeutically to stimulate neuron firing, though it must be used carefully given its neural effects. My embedded transmitter proposal seems to get around all this by doing the same sorts of things that groups of neurons do right in the head, though unlike neurons, which are wired-up parts of the brain, this transmitter would only produce EM radiation, to see if it could alter a theorized EM field that exists as consciousness. If this isn’t a good way to test his theory, then I’d love to know why.

            Regarding your curiosity that perhaps there’s an EM field which allows you to think, technically the theory is that there’s an amazingly complex EM field which actually constitutes all of your thoughts, memories, perceptions, and qualia in general. So theoretically as this wave changes, your consciousness changes in associated ways.

            I’m certainly here for more if you (or others) have questions or comments. The following link goes to a spot on McFadden’s homepage where he gives a general overview and offers his two 2002 papers on the subject, a book chapter he wrote for a 2006 consciousness book, two 2013 papers, and most recently his 2020 paper, all accessible from PhilPapers. https://johnjoemcfadden.co.uk/popular-science/consciousness/


  3. My two cents.

    I wonder if anybody has tried to look at similarities and differences between consciousness and intelligence ONLY from a definition perspective, without diving into their own understanding of what those things are? That could be a different option for how to view the problem.

    How do the most accepted definitions evolve? Are they becoming stricter or broader? Do they overlap more or less? Do we have more clarity or more ambiguity with more recent definitions? The list of questions goes on and on. How does this approach relate to our typical discussions on the topic? Is it helpful, neutral, or misleading?

    I don’t have data, but I feel that nobody has tried to apply such an approach to any subject.
    On a cheerful note – AI could probably use this approach much better than we can.


    1. I don’t know if there have been many studies along those lines. I actually googled around yesterday to see if anyone had addressed it. I came across a page or two which had fairly mystical takes on consciousness. There were a few philosophical papers, but not as much as you might expect, and nothing from neuroscience that immediately jumped out. It seems like something that should be explicitly studied.

      I suspect most people who see access consciousness as consciousness would have a take similar to mine. On the other hand, people who see phenomenal consciousness as something separate and apart from access consciousness I tend to think would see them as very different.


  4. It depends rather strictly on how one defines those terms.

    With regard to Chalmers, the functions of the mind — its intelligence, so to speak — are a task that does seem within sight (“easy”). That aspect of mental function seems clearly to fall within the rules of physics. But why there is “something it is like” to be conscious has no physical explanation that we know of. Which is why some resort to some form of panpsychism; it provides an explanation where there is none. Others (yourself, I believe) feel a complex enough function just has an aspect of there being something it is like to be that mechanism. You may turn out to be right, but there is currently no physics that explains it.

    (BTW: second paragraph, first two sentences. “…why people this intuition…” and “… impressions we have have about…” Looks like a “have” did a runner!)


    1. I definitely agree it depends on the definitions. Unfortunately, this is an area where ambiguities are pervasive.

      Along those lines, the question I would have is, what does that very common phrase, “something it is like”, mean? (Other than its synonyms: subjective experience or phenomenal consciousness.) Until that can be clarified, it feels like saying there’s no physics to explain it is equivalent to saying there’s no physics to explain the color of 5.

      If we mean having a point of view, a perspective from a certain physical location, a certain way of processing information, or due to a particular history, then it doesn’t seem like anything a robot couldn’t have and so nothing particularly mysterious.

      (Thanks for the catch! Last minute edits always bite me. Updated!)


      1. Yep, and those ambiguities make the discussion rather fruitless. FWIW, my own definition of “intelligence” — the ability to intentionally figure out a problem — makes it quite different from consciousness — for which I do think Nagel’s famous phrase makes a good litmus test.

        With regard to that, comparing it to “the color of 5” says more about your disdain than anything else. I see it as quite simple and clear-cut. For humans it’s the sum total of everything it means to be human — something that is well recorded in our stories and our art. For a dog, it’s something different and, to us, so far quite as elusive as it is for Nagel’s bat.

        To return to the overall point, the “hard problem”, it’s something that no physics we know would account for in any robot we’re capable of constructing. As far as we know, there is nothing “it is like” to be any kind of machine.


        1. It’s true that I don’t have much regard for the phrase. But I’m open to the possibility that’s a hasty judgment. I could see it as a possible tag for something more complex, similar to terms like “natural selection” or “supply and demand”. But if I ask what those mean, answers that allow a deeper dive are readily available, with each successive explanation being in ever more basic terms.

          What is it about the sum total of what it means to be human that implies no possible physical explanation? Certainly there remain plenty of gaps in knowledge, but I can’t see anything that can’t at least plausibly be explained in terms of psychology, neurology, biology, chemistry, or physics.

          I know we won’t settle this here, but this failure to be able to drill down to exactly what is unexplainable is why I don’t see the hard problem as a real problem (other than as the sum total of the “easy” problems).


          1. What physics accounts for phenomenal experience? Can you drill down on what physics, what mathematical equation, accounts for it?

            What you’re missing here is that terms such as “natural selection” and “supply and demand” represent well-understood and well-defined concepts. There is no debate over their meanings. But consciousness is currently so opaque we don’t have an agreed upon definition, let alone an understanding of it. That alone speaks to it being a “hard” problem.

            Nagel’s phrase, as I said, makes a pretty good litmus test in my eyes. Is there something it is like to be a rock? How about an insect? How about our laptops? Is there something it is like to be a laptop?

            What about a fish? My fishing buddy and I have often discussed whether fish have mind. Does the apparent distress and terror of a landed fish mean what it seems to mean, or is that just reflexes trying to return to the water? A telling point, I think, is that one could program a robot fish to act indistinguishably.

            But what about a dog? Can anyone who has looked into the eyes of their dog doubt there is something it is like to be that dog? Can a robot dog come anywhere close to the real thing?

            When we finally achieve some real understanding of consciousness, then we’ll be able to drill down on the something it is like to be a conscious entity, but until then it remains a challenging mystery.


          2. The problem with asking what physics accounts for phenomenal experience is that’s just another way of asking what physics applies to “something it is like”. We’re just going in circles. Getting more specific might give us a chance of addressing it.

            For example, do we mean being able to discriminate between a red flower and a green leaf? Or telling between hot and cold surfaces? Or what emotions the red flower triggers? There are cognitive neuroscience insights on these more specific questions. The picture is far from complete, but there’s nothing in principle indicating the answers aren’t available.

            The problem I see with Nagel’s phrase is it seems like just a front for our intuitions and biases, just another way to say we think X is conscious but not Y. As I’ve noted before, you could just say you think a dog is like us but a fish or a laptop isn’t, and you’d be expressing pretty much the same sentiment. But there’s no obvious metaphysical mystery in degrees of similarity or difference.


          3. The interesting thing about intuitions and biases is that everyone has them. (It’s part of the something it is like to be human.) The real question is how well they’re grounded and what informs them.

            I quite agree asking what physics accounts for the something it is like and for phenomenal experience is the same question. It’s the question I’m asking; there’s nothing circular about it; it’s a direct question. (I used the latter phrase because you earlier mentioned it was a synonym, and I completely agree.)

            Red flowers and green leaves, along with hot and cold surfaces, have myriad distinguishing features, but what has that to do with what it is like for humans to experience them? The question about the emotions a red flower triggers is closer to the mark, but only repeats my question.

            You keep saying it isn’t mysterious, but it does seem currently a big mystery.

            I’ve said twice that I see Nagel’s phrase as a litmus test, so, yes, it is a way to get at whether X or Y is conscious (with the proviso that we equate consciousness with there being something it is like — a proviso I’m comfortable with).

            I did not say a dog is like us; I said there is something it is like to be a dog. I’m not sure there is something it is like to be a fish — there might be, but I’m inclined to doubt it. I’m quite certain there isn’t anything it is like to be a laptop or rock. (A big point Nagel was making is that the something it is like for a bat is beyond our ability to understand. I suspect that’s true of dogs as well.)


          4. I would say recognizing a red flower or a green leaf is an example of an experience. Of course, the evidence is that any perception in the brain that can be conscious can also be unconscious, so it’s only a conscious experience if the system is focused on it at that point. In the case of the emotions they might trigger, we know that evaluative circuits running from the brainstem to the amygdala, nucleus accumbens, ventromedial PFC, and other regions are involved. All indications are it’s all neural processing (supported by glia).


          5. Well, as you said above “there remain plenty of gaps in knowledge” and our understanding is “far from complete” but I have always agreed that “there’s nothing in principle indicating the answers aren’t available.”

            To take this back to my original comment, Nagel’s notion that there is something it is like to be a brain — be it human, dog, or bat — speaks to an important distinction in science: It’s a notion that only applies to brains. Further, it may only apply to sufficiently advanced brains. It may not apply to fish or bugs, for instance. (Or it may; we don’t really know.)

            Unless one ascribes to panpsychism, there isn’t any question whether there is something it is like to be an atom, a tectonic plate, a thunderstorm, or a clock. The question only arises with regard to brains… and putative systems we might create to emulate them.

            To me the value of Nagel’s phrase is in, firstly, pointing out the differences between those two distinct classes of system, and secondly, in pointing out the differences between different kinds of brains. (I’ve spent considerable time wondering about the something it is like to be a dog, but to quote W.G. Sebald: “Men and animals regard each other across a gulf of mutual incomprehension.” Nagel’s main point was that it’s just not possible.)

            And I think this divide between systems for which there is something it is like to be that system, and those for which there isn’t, is what Chalmers’s “hard problem” is getting at. As I’ve said before, that there is currently no consensus on what “consciousness” even means speaks to the difficulty of the problem.

            In neither case is anything mystical meant. Only that brains have an important distinction between them and all other systems we study, and that the something it is like offers a challenge unlike most challenges in science. I think that, in such light, they are simple and clear-cut and that your disdain for them is indeed misguided.


      2. “Along those lines, the question I would have is, what does that very common phrase, “something it is like”, mean? (Other than its synonyms: subjective experience or phenomenal consciousness.)”

        I’ll try to clarify it with the example of seeing colors. “What is it like to see the color red?” means “How does the color red look?”. When a robot detects a color with a camera, that color doesn’t have any look to it. For people, colors have a look. By the same token, for you there is something it is like to feel pain, and so on. The whole of these things constitutes what it is like to be you. It is your internal movie.


        1. Hi Konrad,
          I appreciate the effort at clarification. You’re basically talking about qualia. The thing about qualia is that they’re composed of both sensory processing and affective reactions. When a color is “a look” to us, it’s the visual discrimination coupled with all the associations triggered by that discrimination. So when you see red, your brain is processing a pattern of firing from your retina, which in turn triggers a vast galaxy of concepts and feelings associated with that pattern. This is our experience of “the look”. It’s why the experience of redness feels rich.

          On qualia and the movie, I’ve done some posts on these.
          For qualia: https://selfawarepatterns.com/2020/02/11/do-qualia-exist-depends-on-what-we-mean-by-exist/
          For the movie: https://selfawarepatterns.com/2020/11/28/the-problem-with-the-theater-of-the-mind-metaphor/

          The TL;DR is that the movie metaphor is problematic.


          1. “When a color is “a look” to us, it’s the visual discrimination coupled with all the associations triggered by that discrimination.”
            It is also something else. There is also a non-functional property of your color experience (how the color looks to you). This thing is known by Mary when she sees the color red for the first time. I can’t convince myself that this is illusory. Even if it is conceivable that the external world does not exist, the non-existence of the non-functional property of color experience is as inconceivable as my experience not occurring.

            Even if qualia are illusory, there is an internal perception of oneself. It is difficult to speak about self-consciousness without this internal perception. Mere functional organisation does not cause somebody to realize that they exist.


          2. I would say the look of a color is functional, in the sense that the look of red distinguishes it from green, or any other color. Colors are an evaluation our brain makes based on the pattern of signaling from opponency cells in the retina, which in turn are stimulated by the pattern of Long, Medium, and Short wavelength receptor cones. (These are often described as red, green, and blue cones, but the matches are much less precise than that implies.)

            Each color is essentially a categorization, the triggering of which is extremely complex. Which colors we see are often the result of an enormous range of factors. It very much is an evaluation of our nervous system, one that different brains don’t always reach the same conclusions on, or even the same brain at different times.
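
            A crude way to picture the opponent signaling (a toy sketch with made-up linear weights; actual retinal processing is nonlinear and far messier):

            ```python
            import numpy as np

            # Toy opponent-process model. The weights are illustrative,
            # not physiological measurements.

            def opponent_channels(lms):
                """Map L, M, S cone activations to three opponent signals."""
                L, M, S = lms
                red_green = L - M              # positive: reddish, negative: greenish
                blue_yellow = S - (L + M) / 2  # positive: bluish, negative: yellowish
                luminance = L + M              # overall brightness signal
                return red_green, blue_yellow, luminance

            # Made-up cone activations for two stimuli.
            for name, lms in {"reddish": (0.9, 0.4, 0.1), "greenish": (0.4, 0.9, 0.1)}.items():
                rg, by, lum = opponent_channels(np.array(lms))
                print(f"{name}: red-green={rg:+.2f}, blue-yellow={by:+.2f}, luminance={lum:.2f}")
            ```

            The point is that “red” arrives in the brain not as a raw wavelength but as a pattern of differences, which is then subject to all the contextual factors in the examples below.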

            For a particularly famous example, see: https://en.wikipedia.org/wiki/The_dress
            For another example, the two circles at this url are the same color: https://www.sciencealert.com/here-s-why-you-re-fooled-by-this-classic-visual-illusion

            Introspection is definitely real. It’s also been shown through extensive psychological research to be unreliable. This is powerfully counter-intuitive, but I think understanding that unreliability, and coming to terms with it, is the first step in understanding consciousness.


          1. “I would say the look of a color is functional, in the sense that the look of red distinguishes it from green, or any other color”
            The look of red is not merely functional, in the sense that the look is not reduced to the causal role of the color.

            “Introspection is definitely real. It’s also been shown through extensive psychological research to be unreliable. This is powerfully counter-intuitive, but I think understanding that unreliability, and coming to terms with it, is the first step in understanding consciousness.”
            Introspection is not reliable in every situation, but some things are not possible. It is not possible that introspection indicates a feeling of pain when I don’t feel anything. It is not possible that introspection indicates that I see red when I see green. Optical illusions are not introspection errors. In the case that you see a green object as red, you have the quale of red. Introspection reliably indicates this red quale.


  5. I think I can make a reasonable case that Solms at least has a roughly correct perspective here Mike, as well as that the argument you’ve provided may be considered a bit anthropocentric.

    Consider the following hypothetical situation: A human baby is born, immediately immobilized, and then hooked up to a machine which keeps it alive in isolation. Furthermore, this being is also caused to feel horrible pain perpetually, though possibly permitted to sleep from time to time before it’s woken up for more such treatment. Holy shit!

    The question for you here is, because it’s at least biologically a normal human, would you say that it could display any intelligence? If so, then as such a non-lingual and paralyzed experiencer of pain, what might an example of this intelligence be?

    I’d say that this being would merely be a receptacle of suffering rather than anything intelligent. From this perspective affect exists as an input from which to potentially drive intelligence, though intelligence will not be inherent.

    Theoretically before functional subjectivity existed there were simply non-conscious brains. At some point however certain brains should have done some things which caused epiphenomenal experiencers of affect, probably in the form of the right kinds of neuron produced electromagnetic fields. Furthermore at some point such experiencers must have been given the opportunity to affect organism function given their inherent desire to feel good rather than bad. Thus the evolution of a second form of computer which could display “intelligence” as we know it, and so effectively deal with more “open” circumstances that it didn’t need to specifically be programmed for. This is the serial mode of function by which existence is experienced by us, unlike the parallel mode of brain function which harbors no subjectivity.


    1. Eric,
      I would say whether the baby could display intelligence is no more relevant than whether it could display its suffering. Given that its nervous system would be receiving sensory input of some kind and performing an evaluation of that input, its system would still be doing a lot of work, work we’d consider “intelligent” if it were in an ANN. It has to do that work in order for it to be aware of that evaluation, that is, to feel pain. It would be incapable of making use of that information, but only due to artificial constraints.

      But I think your description of the scenario, and what you try to derive from it, is a demonstration of the intuition I discussed in the post. You’re taking at face value the introspective impression that pain is something that happens to us, rather than a conclusion our nervous system reaches. At best it’s something that happens to the conscious us. But only after an enormous amount of preconscious processing.


      1. Given that its nervous system would be receiving sensory input of some kind and performing an evaluation of that input, its system would still be doing a lot of work, work we’d consider “intelligent” if it were in an ANN.

        I guess that’s one issue here Mike. Even if one attempts to clarify that they’re referring to “subjective intelligence”, standard watered-down usage of the term may open it up to a non-subjective idea.

        What I’ve done here is demonstrate that affect consciousness should exist before the sort of “intelligence” that you and I display subjectively. Yes that baby’s non-conscious brain will be what creates affective states, which you might thus want to call “intelligence”, though this should be considered quite different from the “subjective intelligence” which it’s prevented from displaying in my scenario.

        So maybe a compromise is possible? Here we could say that there is non-subjective intelligence associated with ANNs and so on, as well as a subjective kind which may or may not occur by means of affective states of existence as I explained above, and so what I presume Mark Solms was referring to? And note that I didn’t even exclude the possibility for subjectivity to exist when certain information is processed into other information. As you know, I consider an instantiation mechanism required for all events in a natural world, including subjectivity.


        1. Eric,
          I actually alluded to what you’re calling “subjective intelligence” and “non-subjective intelligence” in the post.

          But it does imply that consciousness is a type of intelligence, one that’s built on lower levels of intelligence and enables more complex forms of it.

          I’m not wild about the adjective “subjective”. It implies maybe the intelligence isn’t real. But I think we can agree that consciously enabled intelligence depends on non-conscious intelligence. It’s the non-conscious intelligence that even the unfortunate baby in your scenario would have. Although remember that I see consciousness itself as a type of intelligence. So it would still have at least glimmers of that type of intelligence, just be unable to do anything with it.


  6. These are some of the factors that I think lead to outlooks like panpsychism, biopsychism, or property dualism, as well as appeals to theories involving quantum physics, electrical fields, or other highly speculative notions.

    Which of these things is not like the others? Which of these things just doesn’t belong? (H/t: Sesame Street) Actually it’s not quite Sesame Street, as there isn’t just one answer; three of them don’t belong: biopsychism, quantum physics, and electrical fields.

    Consciousness requires intelligence (of the sort that some AI researchers have worked very hard to develop for visual information processing, for example). Ok, sure, but the brain is chock full of biology, quantum fields, and electrical fields. The jury is out on the latter two doing serious intelligence heavy lifting, and definitely in on biology: it’s absolutely involved in everything the brain does. So if you grant that intelligence is vital to consciousness, then all three of these things have to be in the running. And if you don’t grant that intelligence is vital to consciousness, then that doesn’t make these three ideas more attractive.


    1. It seems like maybe you’re replying to the literal meanings of the labels rather than the theories they refer to. There’s no doubt that biology is involved in animal consciousness, since all animals are biological. Likewise, there’s also no doubt that quantum physics is involved, because it’s involved in everything. And of course an electrical field is involved in moving ions across the neural membrane in an action potential.

      But those aren’t what the referred theories are asserting. They’re asserting either that all life is conscious, or that only life, in principle, can ever be conscious. Or that quantum effects are involved in a way they’re not involved in every other physical process (and in a manner that requires new physics). Or that neurons communicate with each other through EM fields (other than through electrical synapses). Any of these could conceivably be true, but they’re not propositions driven by the data, just speculative guessing people try to fit into the current gaps in the data.


      1. Take the idea that all life is conscious. Well, all life does a fair amount of computing, in some fairly straightforward sense of the term. Proteins, enzymes, RNA strands are directed where they need to go. So this kind of biopsychist could take your discussion here as evidence in favor of their hypothesis.

        Quantum entanglement could serve computational functions, at least in some broader definition of computing than the Church-Turing one, and probably in that narrow sense as well. And electrical fields on a many-neuron scale, likewise. Indeed, advocates of the larger-scale electrical fields claim as much.


  7. Okay, here’s my take:

    “But it does imply that consciousness is a type of intelligence”

    This is backwards, cart before horse. I say consciousness is information processing (specifically defined), and intelligence is a measure of sophistication of information processing. Thus, intelligence is a measure of the sophistication (phi?) of consciousness.

    I appreciate that you, Mike, will invoke “the eye of the beholder”, and that you require a level of sophisticated information processing of which humans are certainly capable and other entities might be capable. But then your project becomes determining the specific features of that information processing which is necessary and sufficient, and then deciding what other entities may have that. I suggest that this project is like trying to figure out what a “computer” is by starting with Watson, and then trying to decide if my old Apple IIc is a “computer”, allowing the conclusion that the IIc is not a computer because it has no means to connect to the internet and thus cannot consult Wikipedia. Whether something is a “computer” is in the eye of the beholder.

    *
    [so there]


    1. Well, we could substitute “information processing” everywhere I used “intelligence” in my statement about the relationship between the two. I’m not sure we’re doing much other than semantic chair rearranging though.

      Identifying intelligence with phi is interesting. But I think Christof Koch would strongly disagree. He’d point out that a computer system can show considerable intelligence while having zero (or very low) phi. I think he’s a strong advocate for consciousness being separate from intelligence.

      It is worth noting that the visual cortices that perform all the work that eventually results in our visual awareness do have a substantial phi associated with them. But the actual IIT has its exclusion postulate that requires we focus on phi-max. (There’s a new paper exploring problems with phi, which I haven’t had a chance to read. Don’t know if I will. I’m not sure if IIT is worth the time anymore.)

      On your “computer” analogy, I take it you think I’m saddling consciousness with too many requirements. Maybe I am. It depends on our definitions. When someone proposes a very liberal definition of consciousness, all I can do is point out what’s missing and what it includes in club consciousness. When they propose a very sophisticated version, all I can do is point out what gets excluded. We all have to decide where on that spectrum our intuition of a conscious system is satisfied. (“Eye of the beholder” indeed!)

      [so there back 😉 ]


      1. Hmmm. To clarify:

        First, referring to phi was being a bit flippant. In no way do I think phi is identified with intelligence. Although I will say that higher intelligence implies higher phi, but not vice versa. [While writing the previous post, I was this close to talking about intelligence as more soPHIsticated info. proc.]

        Second, I don’t really mean to equate consciousness with information processing except as to say that information processing is the basic unit of consciousness. When people ascribe some small amount of consciousness to bacteria, they are more or less referring to information processing. When people refer to the “what it’s like-ness” of human consciousness, they’re basically referring to results of certain very sophisticated information processing. Ditto for predictive processing, higher order thought, global workspace, what have you.

        *
        [still trying to figure out exactly how info. processing can be done via electronic brain waves]

        [and while I have your attention, don’t know if you’re interested in the Neuralink stuff, but here is a good (~ 30 min.) video explaining where it’s at now: https://www.youtube.com/embed/rzNOuJIzk2E]


        1. Ah, thanks for the clarification. I guess I took it much too seriously.

          I agree that information processing is the meat and potatoes of consciousness. (I think the same thing of intelligence.) And that a lot of what inclines people to say a bacterium is conscious is its information processing.

          The “what it’s like-ness” point is interesting. I’m not sure too many people who use that phrase would necessarily agree with you, at least not without elaboration. For example, Chalmers might agree, but he’d likely stipulate what he sees as the dual aspect nature of information, implying that information has inherent experiential qualities. But I think that’s different from what cognitive theories (GWT, HOTT, etc) propose, that experiential qualities are composed of information.

          Thanks for the Neuralink video! I’ll try to watch it. I did see the news release and a shorter video about the monkey playing pong with its mind.


          1. Just so ya know, the “what it’s like” explanation involves unitrackers, aka, pattern recognition units. I’m still trying to work up the best way to ‘splain it, but I keep wanting to start with “Okay, you’re in the Chinese room …” and then I think better of it.

            Real short version: If you have a system with X unitrackers, then to that system, everything is “like” one or some combination of those unitrackers turning “on”, or it’s not like anything. When Mary sees red, she learns which unitracker she has for red, but can only reference it by saying “the unitracker that turns on when I see red.” [That is, until we put in the Neuralinks and track down the specific one. :) ]
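
            Here’s a toy sketch of what I mean by a unitracker turning “on” (entirely hypothetical; just a pattern matcher with a threshold, not a claim about cortical wiring):

            ```python
            import numpy as np

            # A "unitracker" as a minimal pattern-recognition unit: it turns
            # "on" when its input resembles its preferred pattern closely
            # enough. Patterns and threshold are invented for illustration.

            class Unitracker:
                def __init__(self, pattern, threshold=0.8):
                    self.pattern = pattern / np.linalg.norm(pattern)
                    self.threshold = threshold

                def on(self, signal):
                    """Cosine similarity against the preferred pattern."""
                    similarity = self.pattern @ (signal / np.linalg.norm(signal))
                    return similarity >= self.threshold

            red_tracker = Unitracker(np.array([1.0, 0.1, 0.1]))  # tuned to "red-ish" input
            print(red_tracker.on(np.array([0.9, 0.2, 0.1])))     # True: red-ish input
            print(red_tracker.on(np.array([0.1, 0.9, 0.2])))     # False: green-ish input
            ```

            To the system, seeing red just is this unit (or units like it) turning on.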

            *


          2. Yes, please just walk away from the Chinese room temptation. 🙂

            I just finished discussing redness with someone. I think red has to be far more than just one unitracker. It seems much more likely to be a vast galaxy of unitrackers, all triggered by certain patterns of activation coming in from the retina.


          3. I didn’t mean to suggest that unitrackers were independent, and I didn’t mean to suggest you have only one for red, except you probably do have one for red which is connected to fire-engine red and cherry red and rose red. I expect there is feed forward and feedback among them. And you repurpose them when you learn. You can go from having one for the taste of “red wine” to having ones for “smoky tannins”, “grassy finish”, etc. And you can repurpose the ones for colors to get more nuance from sounds (if the color ones aren’t being used).

            I’m really getting convinced that cortical minicolumn = unitracker. Keeping my eyes out for evidence one way or another.

            *


          4. I tend to think minicolumns are more of an ontogeny thing than a functional one. It seems like there might be numerous unitrackers within a column, and that some might have “territory” in multiple columns. But maybe the physical topology imposes a functional topology.

            Looking for evidence (or counter-evidence) is the way to do it!


  8. I’ve always felt the two terms to be different but somewhat related. However, much discussion goes on with both terms without precise definitions. I’ve noted elsewhere that it is difficult, for example, to find a definition of “intelligence” in Nick Bostrom’s book Superintelligence :).

    I’ve considered (probably in contrast to many people) that intelligence can arise in nature without consciousness. I don’t consider slime molds conscious, but I do think they are intelligent. I think evolution itself is somewhat of an “intelligent” process. The intuitions of intelligent design believers are right. I think they are just wrong about the need for a Designer. Darwinism itself produces quasi-optimal solutions and complexity through evolutionary processes. A foot in a mammal evolving to live in the water gradually becomes webbed and eventually a fin. Slime molds, through means not totally understood, can form themselves to “solve” mazes and find shortest paths to nutrients.

    Consciousness is an extension of evolutionary intelligence. To create quasi-optimal solutions and complexity in living organisms in real time, complex learning is required. Hence, consciousness is highly associated with the learning process, as posited by Simona Ginsburg and Eva Jablonka and others.


    1. James, I agree with everything you write here. (Well, I’ve never read Bostrom’s book, but based on his presentations, I can see him never actually defining intelligence.)

      I’ve always been struck by the seeming creativity that evolution displays in solving adaptation problems. Biologists often talk about species “coming up with innovations” when referring to capabilities they evolved. It’s a natural way of describing things, although I’ve encountered creationists and IDers who misunderstood what they were saying, so it requires understanding natural selection and the level that the innovations are taking place at. But avoiding that kind of language takes so much pedantic work that it’s better to just periodically clarify.


  9. Animals have the capacity to assess a posteriori evidence or they would not survive as a species. This skill set could be considered intelligence, which it is, but that intelligence is restrained and limited by the five senses. I would say that meaningful intelligence is twofold: First, it has the capacity to overcome the self-imposed limitations of our own confirmation biases; and second, it is intelligence that transcends empirical limitations, an intelligence that would be a measure of one’s ability to draw “correct conclusions” based solely upon a priori and synthetic a priori analysis.

    A priori and synthetic a priori judgements are almost like “spooky action at a distance”. They are long range skill sets, capabilities that are acute; and those skill sets are much more reliable than a posteriori judgements. They are skill sets that go beyond the necessity of mere survival alone and open the door to the underlying nature of reality and ourselves. As a species, Homo sapiens have not evolved to the point where this type of intelligence is the norm. It does exist within the species, but it is very rare indeed.


    1. I think I understand what you’re saying here about animals Lee. I would just note that I think animals do have limited reasoning abilities, so some limited non-symbolic a priori reasoning. It’s just that humans do have symbolic thought, allowing our reasoning to go much further. Michio Kaku, in a recent interview, remarked that we can’t teach our pets the meaning of tomorrow, while humans often live in our tomorrows.

      On being able to see the underlying nature of reality and it being very rare, maybe. The problem is that many people have concluded they are the ones that have that. But others have reached similar conclusions with different answers. How do we figure out who is right and who is wrong? Ultimately, our a priori explanations need to predict empirical observation. In other words, the a priori must reconcile with the a posteriori. The ones whose models do that may be closer to the truth, or at least less wrong.

      Like

      1. “How do we figure out who is right and who is wrong?”

        There is no “we” in that assessment, Mike; there is only the one who knows, and that a priori state of “knowing” is not transferable from one person to another. So you can rest assured that any and all of the new age gurus, the likes of Rupert Spira and Deepak Chopra, are just modern day snake oil salesmen.

        Liked by 1 person

        1. I’m with you on Chopra. I don’t know enough about Spira to say one way or another. Googling him gets a lot of New Agey stuff, which is not my cup of tea, but it wasn’t immediately clear he’s the shyster Chopra is. I’ll admit I didn’t spend much effort on it.

          Like

      2. “Ultimately, our a priori explanations need to predict empirical observation. In other words, the a priori must reconcile with the a posteriori.”

        Ultimately is the right context to frame your compelling assessment, Mike. But we seem to forget, simply do not understand, or conveniently choose to dismiss that “ultimately” only a priori knowledge can lead to the type of understanding that will resolve the hard problem of matter and the hard problem of consciousness. And that “ultimate criterion” is “intelligent” logical consistency, where a predictive model based solely upon a priori intuition is developed, one that can be intellectually verified through thought experiments alone to be 100% inclusive, one that has no exceptions or contradictions for any and all systems throughout all of time.

        A posteriori analysis does not have that capability, nor does it have the reach to accomplish that goal, because it is limited to the five senses; and due to that constraint alone, there will always be a possibility that an exception or contradiction will be found in any scientifically developed empirical model. A further evolutionary advancement of the species is necessary to achieve this capability. And when that species arrives on the world stage it will encounter the same restraint that exists between species in general: communication. This species will be able to effectively communicate with each other, but they will not be able to convey what they “know” to the Cartesian Me species any more than a Cartesian Me can convey the idea of tomorrow to a dog.

        Maybe the Cartesian Me species are all philosophical zombies??????? 😟

        Like

        1. Lee, this seems like the old rationalism vs. empiricism debate. History seems to show that we need both. Any attempt to use only empirical data (which gets us close to logical positivism) or only armchair rationalism is too narrow, giving up too many tools. It seems like a circle: making observations, theorizing about those observations, testing the theories with new observations, adjusting theories, new observations, etc. We have to look at the world to learn about it, but all observation is theory laden.

          The possibility that we’re all philosophical zombies that compute we’re something more, is one that doesn’t get discussed often enough. 🙂

          Like

          1. “The possibility that we’re all philosophical zombies that compute we’re something more, is one that doesn’t get discussed often enough.”

            It is somewhat amusing to think about it that way. I mean, after all, we are all just “dead men walking”. I can’t help but think of Jordan Peterson, who views our existential existence as a tragedy. Jordan is a relatively recent inspirational speaker to arrive on the world stage. His views catch a lot of flack from the more trendy new agers.

            Like

          2. Peterson is another one I’m not that familiar with. Most of what I’ve heard is from liberals unhappy with something he said. Although someone did comment a while back about his theory of truth, which sounded pragmatic, along the lines of William James.

            Like

          3. Jordan Peterson is very pragmatic; I’m sure you would like him. His book “12 Rules for Life” has rule number one as: “Stand up straight with your shoulders back.”

            Liked by 1 person

    2. Lee,

      I’m a little dubious of this: “based solely upon a priori and synthetic a priori analysis.”

      That is to say, I am not convinced that it actually exists.

      What is the evolutionary origin of this? Isn’t it just something else acquired bottom-up and possibly subject to different but similar limitations?

      I’m not an expert on Kant but I’m guessing this is coming from him.

      Like

      1. James,

        You’ve spent enough time on Kastrup’s blog site to know that idealism is a dead horse that continues to be flagellated. So if you are looking for an evolutionary origin for a priori and synthetic a priori representations, one must be compelled to consider what matter is in and of itself. Not only what it is in and of itself, but its origin; fair enough…..

        You are right, it does come from Kant. Kant was a genius in his own time, but his ontology is incomplete. What is not incomplete is his fundamental assumption that “although we are denied complete understanding of the ‘thing-in-itself’ we can look to it as the ’cause’ beyond ourselves for the representations that occur within us”, regardless of whether those representations are empirically acquired through the five senses or are a priori representations acquired through intuitional insights. So if you are dubious that a priori and synthetic a priori representations exist, you must be equally dubious that empirical representations exist. That is of course, unless you think Kant was full of shit.😎

        Liked by 1 person

        1. I’m not sure why you brought the Kastrup blog into the discussion.

          I’m okay with the idea that our sensory representations are just that: representations. They have arisen for their ease of use and their utility for survival and acting in the world, not for their fidelity to some out there reality. But why would the a priori representations be different? They might be if they really were a priori, but that is precisely what I am doubting. I’m suggesting they arose as second order abstractions for the same evolutionary utilitarian reasons that sensory representations have arisen. In other words, they really are a posteriori. Nothing is acquired independently from experience, although by “experience” we need to include the entire evolutionary history of the organism.

          Where that puts us in regard to Kant I don’t know. 🙂

          Like

          1. “They have arisen for their ease of use and their utility for survival and acting in the world, not for their fidelity to some out there reality.”

            Absolutely…..

            “I’m suggesting they arose as second order abstractions for the same evolutionary utilitarian reasons that sensory representations have arisen.”

            I do not dispute that aspect either. Nevertheless, are we, as the top predatory species on the planet, arrogant enough to believe that the evolutionary process has reached the apex of its capability? And if not, what would the next evolutionary level of experience look like? Would it be that the new species is equipped with an intelligence that enhances the utility of logic to explore that underlying reality which drives the species? Now you personally may not be interested in that quest, and that position is perfectly acceptable.

            “Nothing is acquired independently from experience,”

            That’s the point I tried to express in my original reply to Mike: a priori representations are themselves experiences, very personal and private experiences, experiences that convey meaning beyond the scope of the five senses. It is the meaning of those experiences that is not transferable. The utility of meaning obtained through a priori representations can only be shared when another individual has the exact same experience.

            I am in agreement with Kant that a priori representations and a posteriori representations are not different versions of the same thing. One can reach out and touch, see, hear, smell and taste a posteriori representations, and one can equally weigh, measure, test and place in a hadron collider a posteriori representations. A priori representations only exist in the quantum system we call mind. In that light, one is once again compelled to consider: Is the fundamental nature of reality matter, or is it mind?

            Liked by 1 person

          2. “Nevertheless, are we, as the top predatory species on the planet, arrogant enough to believe that the evolutionary process has reached the apex of its capability? And if not, what would the next evolutionary level of experience look like? Would it be that the new species is equipped with an intelligence that enhances the utility of logic to explore that underlying reality which drives the species? Now you personally may not be interested in that quest, and that position is perfectly acceptable.”

            Definitely interested in that question.

            Like

  10. Although I am not a property dualist (I think I’d rather be a full epistemological or metaphysical idealist), Chalmers was not wrong in separating p-consciousness from the “easy problems,” which are related to a-consciousness.
    This distinction was made by Ned Block when he stated that a-consciousness is the functional aspect of the mind while p-consciousness is its experiential or phenomenal aspect. One can conceive of a completely functional mind lacking phenomenality the same way one observes an ANN as being completely functional without having phenomenal experiences.
    Whatever the answer might be to the HPOC, I think it will probably require abandoning metaphysical functionalism.
    As long as talk of minds is done in terms of functions, states, and events, disregarding its phenomenal aspect, we will probably never get rid of this problem.

    Liked by 2 people

    1. You could be right, but obviously I disagree. I think Block’s move, treating phenomenal consciousness as something separate and apart from access consciousness, actually turns a complex galaxy of scientific problems into an intractable metaphysical mystery.

      For me, it’s like trying to understand how a car works, but ruling out any discussion of engines, fuel lines, axles, transmissions, brake pads, etc. If we insist none of that has anything to do with how the car allows the driver to get places, then its function becomes a deep, perhaps intractable mystery. But it’s a mystery we’ve manufactured.

      I think it’s much more productive to accept that phenomenal consciousness is access consciousness. They are one and the same, just viewed from different perspectives, one from the inside, the other from the outside. The access mechanics are the internal mechanisms of phenomenality. The only reason we have the impression they’re different is due to the unreliable nature of introspection.

      But maybe I’m wrong. If so, then we might be looking at panpsychism or idealism to explain phenomenal consciousness, because the science doesn’t currently indicate there’s anything exotic about the physics in the brain.

      Liked by 1 person

  11. I have a horrible and uncomfortable suspicion that you might be right: that consciousness is an illusion generated from bodily processes and that we are prediction machines with memory. That seems to be partly what even the dear old Buddhists have told us for 2,500 years.

    I don’t know whether you have ever read, watched or listened to Robert Sapolsky, the biologist and neuroscientist from Stanford. I find him fascinating, convincing and immensely likeable, partly because I agree with his view on society and our shocking capitalist economy.

    In any event, he is a pure determinist and admits to finding his conclusions depressing. Reading between the lines, I believe he also suffers from depression.

    The lack of free will makes it difficult, in a way, to understand change in society and human behaviour, and animal behaviour in general. Nevertheless, I can appreciate the argument.

    I think, therefore I am. Or perhaps not. Who or what is writing this response to you? A jumble of atoms and subatomic particles, it seems.

    Liked by 1 person

    1. I’m not on board with saying consciousness is an illusion. I prefer to describe it as emergent. It’s really the same ontology, but “illusion” is too eliminativist, and makes too many people view the proposition with the same horror you do. Regarding it as emergent seems less cold and spartan.

      I’m a fan of Sapolsky. I actually started to read his book a while back, but got distracted. At some point I need to swing back around to it. He’s pretty hardcore, particularly in being a free will skeptic. I agree he manages to convey it in a likable manner. Thanks for reminding me!

      Liked by 1 person

  12. Just a few thoughts Mike.

    “The definitions for both concepts are controversial.”

    We can’t say clearly what the word consciousness means and we can’t say clearly what the word intelligence means, so the two are surely related. Which reminds me: consciousness and quantum physics are both mysterious, so consciousness must be produced by quantum effects. Impressive logic, yes?

    If IQ tests are measuring what we call intelligence then intelligence must be “pattern recognition and pattern matching.” Higher intelligence would be more skillful pattern recognition/matching across multiple domains. By the way, that pattern matching is done unconsciously and we have no conscious access to it.

    You ask, “… what distinguishes [consciousness and intelligence] from each other?”

    I’d say that intelligence as pattern recognition/matching contributes mightily to the contents of consciousness but since consciousness is a simulation in feelings of an organism centered in a world, consciousness is not a type of intelligence.

    BTW, I don’t know why biopsychism appears in your sentence with panpsychism and dualism. The first Google hit for a biopsychism article explains that:

    “In 1892 Ernst Haeckel coined the term ‘biopsychism’ to refer to the position that feeling is ‘a vital activity of all organisms.’ … In contemporary terms, biopsychism can be described as the thesis that all and only living systems are sentient.”

    Wikipedia’s Sentience article says:

    “Sentience is the capacity to be aware of feelings and sensations. The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling) … .”

    While I disagree with the ‘all’ in Haeckel’s “all organisms” it seems obvious that feelings are a biological phenomenon and biopsychism is a reasonable position, unlike panpsychism and the like.

    Liked by 1 person

    1. Stephen,
      That’s not the logic I use in the post. If you think it is, I hope you’ll consider looking at it again.

      Interesting idea on consciousness being pattern recognition. I think that’s part of it, particularly for the preconscious portions of perception. But I’m not sure it fits for simulating action scenarios, or figuring out a complex navigation. I guess you could stretch it to fit those cases, but it feels too simple.

      I know we’ve discussed this before, but I’m still not clear what “simulation in feelings” means.

      I listed biopsychism because it asserts either that 1) all life is conscious, or 2) only life can ever be conscious. 1) seems true only if we take a very liberal conception of consciousness. It’s not as liberal as panpsychism, but it’s still extremely liberal. 2) strikes me as a very strong statement, one I think isn’t warranted, unless we make it true by definition.

      Like

      1. I realize that’s not the logic you used. I was trying to draw attention to the great amount of theorizing that takes place in the absence of shared precise definitions of the terminology being discussed. It’s no wonder that confusion reigns.

        I didn’t claim that consciousness is pattern recognition but that intelligence is fundamentally rooted in pattern recognition/matching. That’s also what underlies the ‘prediction’ (I prefer ‘expectation’) you talk about. Incoming sensory information is continuously processed as a story and an ongoing comparison with remembered stories results in the formulation of expectations. We generally experience what we expect to experience based on this mechanism. All of this, of course, is strictly unconscious processing—we do not consciously simulate action scenarios or resolve complex navigation as you sometimes seem to imply.

        As regards biopsychism, I also disagreed with the ‘all’ in Haeckel’s “all organisms.” I believe he would agree that biological feelings are the only kind of feelings that exist. That’s consistent with all of our observations, which is why I believe that ‘biological’ is definitional. A single counterinstance would be sufficient to establish that feelings are not necessarily biological, but no counterinstance has ever been confidently observed. We can only properly define both sensory and emotional feelings by example, so I’m curious how those who want them to be non-biological would define a single non-biological feeling. Would you (or Eric if he reads this) care to propose a definition of a single non-biological feeling?

        I’m still not clear what ‘simulation in feelings’ means.

        You’re referring to my definition of consciousness as a streaming biological simulation in feelings of an organism centered in a world. To explain, I’ll start by saying that a simulation is a representation. A simulation of weather, for instance, is a computerized mathematical representation of weather, but no one would confuse it with the weather itself.

        The contents of consciousness are feelings which are representations of sensory events (physical feelings) and neurobiological states (emotional feelings). The two easiest to understand are sight and sound. From my “The Consequences of Eternalism”:

        “The external world contains an ocean of energetic photons, some of which enter our eyeballs and are absorbed by molecules which then change in shape and ultimately create signals transmitted to the brain by neurons. But we don’t see photons or differing wavelengths of light—photons and wavelengths don’t look like anything. Instead, the brain’s processing creates and ‘displays’ the familiar colorful visual world. Sound, too, doesn’t exist in the world. The external world contains waves of pressure propagating through gases, liquids and solids, some of which enter the ear, vibrating the eardrum and other structures that ultimately create neuron-transmitted signals sent to the brain. But, as with vision and photons, we don’t hear sound waves. The external world, including that room you believe is pulsating with the sound of your favorite music, is completely silent.”

        So, what we see and what we hear are representations of photons and differing wavelengths and representations of compression waves. These representations are feelings—you know the feeling of seeing the color green and you know the feeling of hearing a singing voice. “Representation in feelings” and “simulation in feelings” are the same.

        But, unlike a simulation of weather, which no one would confuse with the weather itself, the simulation in feelings of our experience is completely and literally taken as the actuality of ourselves centered in the world. We externalize it all, from believing that a rose is actually red to believing that the world is a noisy place. We further externalize the streaming nature of consciousness itself as a flowing time in the world—a flowing time that doesn’t exist.

        Have I cleared up what I mean by “simulation in feelings”?

        Liked by 1 person

        1. Thanks Stephen. I had forgotten about the way you use “feeling”.

          In terms of your question about examples of non-biological feelings, in the sense you use the word, what would you say distinguishes the sensory representation in a brain from the representations a self-driving car builds from its various sensory apparatus (lidar, cameras, etc.) and uses for navigating the road?
          https://www.businessinsider.com/how-googles-self-driving-cars-see-the-world-2015-10

          Like

          1. Sensory | ˈsensərē |

            adjective
            relating to sensation or the physical senses; transmitted or perceived by the senses: sensory input.

            Mike,
            I’m not sure how Stephen will respond to your post, but the use and application of the word sensory in your self-driving car example is misplaced and therefore should not be used….😎

            Like

          2. I guess it depends on whether you consider that physical senses and sensations can be non-biological. I wouldn’t engage in a semantic argument about it, but I acknowledge that various electromechanical devices can act like senses. That still leaves open whether anything remotely like phenomenal consciousness is happening somewhere in the car.

            Liked by 1 person

          3. That’s why I quoted the dictionary, Mike. Now, do you really believe that the New Oxford American Dictionary was referring to anything other than the five senses of human experience in its definition of “sensory”? If these word games concerning consciousness are all about moving the goal post every time somebody is expressing a point, then it’s just a game. You keep claiming that consciousness is in the “eye of the beholder”, and unfortunately this is a tactic of the game. But if one is genuinely interested in establishing baseline definitions for genuine edification, then we should resort to the KISS principle: “keep it simple stupid”. For example:

            Consciousness: noun
            Together to know us; a state of being. Consciousness is derived from the Latin root words Con, (together); Scio, (to know); Us, (us) and Ness, ( a state of being). Conscious means; together to know us; Ness means; a state of being;

            Notes:
            The meaning of consciousness is derived from its Greek origin, a philosophy based upon the anthropocentric principle of Protagoras of Abdera, a manifesto which declares that “Man is the measure of all things.” Since the time of the Greeks, man and man’s experience of consciousness has been the standard by which everything else is adjudicated to be conscious or not. This discrete definition of consciousness is limited and restrained; therefore, the term cannot be used for anything other than a comparison to the human experience. If the human experience itself is the standard by which all other systems are judged to be conscious, then those systems will be limited and few.

            Nothing wrong with discussing what that experience entails and Stephen’s point is narrowly focused on that experience.

            Like

          4. James,

            Since consciousness by definition is limited to the system of mind, then no, nothing remotely like phenomenal consciousness is happening in the car.

            If you read my previous post you should be able to extrapolate that the only things happening in the car are the valences of the nuclear forces, both strong and weak, electromagnetism, and the attraction of gravity, all of which are non-conceptual “representations” of value, representations that do not require any form of awareness or cognitive function in order to be felt.

            Grab a magnet and an iron nail and play with them on your kitchen table; then you can decide if the reason the nail moves toward the magnet is because of valences. The nail is not aware of the feeling that causes it to move toward the magnet, because it has no cognitive function or awareness, but the feeling of this thing called electromagnetism exists nevertheless, and that feeling is a valence, a non-conceptual “representation” of value. It’s all about “representations”, James.

            Like

          5. I have no idea what the lexicographer who wrote that entry had in mind, Lee, but I do know a lot of technology fits what they did write. “Keep it simple stupid” has its uses, but not when it’s used in an attempt to shut down any challenges to our biases.

            My question was, given the description of consciousness provided involving representations used in simulations, what is missing in the car? (I definitely think there is a difference. See my hierarchy posts. But this isn’t about my view.)

            Like

          6. Lee,

            I think you are being a little too pedantic in your point.

            We all know that sensors

            sensor
            noun
            a device which detects or measures a physical property and records, indicates, or otherwise responds

            sensor (n.)
            1947, from an adjective (1865), a shortened form of sensory (q.v.).

            have the same derivation as “sensory” but are not the same as biological eyes and ears.

            Like

          7. Mike,

            My reference to “KISS” is not intended to shut down any challenges to your biases or anyone else’s. KISS is simply a technique intended to be more “pedantic”, as James would say. I think being concise in the use of our definitions is necessary, so that when I state to a fellow Cartesian Me that I “sense” the impending doom of American culture I will not get a response such as: “Dude, are you talking about a device which detects or measures a physical property and records, indicates, or otherwise responds…”

            Seriously guys????????????

            Like

          8. Mike, it’s certainly possible for a device to compute itself centered in a world, but that’s completely different from an organism feeling itself centered in a world.

            Consider Commander Data of Star Trek fame. It continuously computes itself centered in a world but repeats several times in the series that it cannot feel anything. The writers of the show demonstrated their bias in understanding feelings as emotional feelings only, neglecting physical feelings (like touch, for instance). Note that when Data acquires the ‘feeling’ upgrade chip it always goes off the wall emotionally and never comments about physical feelings at all, unless I missed an episode.

            It’s interesting to contemplate whether we would have a moral relationship with a device that computes itself centered in a world since damage to itself would be computed rather than felt as pain. Another topic … another day.

            Like

          9. Stephen, Star Trek is something of a mess when it comes to the mind, implying that a purely logical being is a coherent concept. But if we pay attention to Data’s behavior, he has preferences and is motivated to do things all the time. He springs into action when one of his crewmates is threatened. He takes care of his pet cat, Spot. And he clearly prefers survival when an AI specialist wants to disassemble him to figure out how he works. (The AI specialist also refers to him as “it” in the episode “The Measure of a Man”. It’s worth noting that Data apparently has male equipment, having had sexual relations with at least a couple of female crew members.)

            In terms of affects, he has valence and motivational intensity. What he seems to lack is the automatic arousal aspect of emotion, at least until he gets the chip. But with the valence and motivation, coupled with his awareness of himself and his environment, and his ability to flexibly learn, he clearly has something at least like consciousness. Personally, I don’t buy philosophical zombies as a coherent concept, so I’d regard someone like Data as conscious.

            Of course, this is a fictional being, so he’s essentially a thought experiment. For now.

            Like

          10. Mike, everything you mention about Data can be produced by its computational subroutines. Having male sexual equipment doesn’t imply that Data feels anything sexual, from desire to orgasm.

            The American Psychological Association defines ‘affect’ as:

            affect n. any experience of feeling or emotion, ranging from suffering to elation, from the simplest to the most complex sensations of feeling, and from the most normal to the most pathological emotional reactions.

            … none of which Data claims to have. And your “… valence and motivation coupled with [its] awareness of [itself] and [its] environment, and [its] ability to flexibly learn” can all be seen as computational outcomes.

            A Philosophical Zombie is behaviorally identical to a conscious person but lacks consciousness, which doesn’t apply in this case since Data isn’t behaviorally identical to any human. (I agree that Philosophical Zombies only exist in Philosophy).

            Regarding Data as conscious, as you do, means that you are inferring consciousness from behavior, a very questionable inference since both are the product of non-conscious processes. I’ve proposed that the strength of the inference of consciousness in another organism is dependent on bio- and neuro-similarity, which clearly doesn’t exist in Data’s case, so the strength of inference would be zero.

            I see no evidence that Data has something like consciousness. (What is something like consciousness anyway?) If Data claimed that it could feel, surely it could detail the processing and hardware that substantiated its claim.

            Like

        2. “Would you (or Eric if he reads this) care to propose a definition of a single non-biological feeling?”

          Thanks for including me Stephen!

          No, I can’t propose a definition for a non-biological feeling, or at least not one that’s been scientifically validated. But where you seem to take our lack of such an example to presume “…because it’s impossible”, I run through the following observations:

          I think it’s reasonable to define the term “machine” such that none existed before the evolution of life. So I don’t call stars, molecules, or any other such structures “machines”, though I do refer to evolved biological entities like trees and people that way. Richard Dawkins refers to evolution as “a blind watchmaker”, which I appreciate. Evolution is also referred to as “teleonomic” rather than “teleologic” in order to reference that it merely has apparent rather than actual purpose regarding its creations.

          We humans build teleological machines of course, though they’re pathetic when compared against evolution’s teleonomic examples. Does evolution build stuff with feelings by means of magic? In that case we’d also need magic to do so. Naturalists like you and I, however, believe that evolution builds things that have feelings by means of physics/chemistry. Thus theoretically if the human were to use that same physics/chemistry, then we could also build machines that have feelings.

          Did you notice my discussion with Anonymole above? In it I mentioned a letter I wrote to Johnjoe McFadden on Saturday regarding a potential way to test his theory. If his or another such theory were sufficiently verified scientifically, then we should ultimately get the example that you’re asking for. In any case, if scientists were able to alter and/or provide feelings through nothing more than the function of a transmitter in the skull, then this should incite a huge paradigm shift for our still extremely soft mental and behavioral sciences.

          Like

          1. Stephen,
            I don’t think you’ll convince many with rhetoric flavored by “My beliefs are facts while my opposition’s beliefs are imagination”. Note that each of us has tremendous respect for Einstein. He’d of course have amounted to little without his amazing imagination. I doubt that he ever said much about “facts” or “truth”, but rather discussed potential explanations for inconsistencies in human understandings, and also how such proposals might effectively be assessed.

            I’m hearing lots of defensive positions around here lately, though few seem to be proposing ways to empirically validate or refute their proposals. For any that are indeed falsifiable, however, this might be an effective consideration.

            Like

          2. Eric, you wrote, “… theoretically if the human were to use that same physics/chemistry, then we could also build machines that have feelings.”

            Theory and speculation do not have the status of facts. We can imagine a nanotechnological programmable universal assembler but it’s not a reality and may never be. Wishing for something doesn’t make it so and until that something becomes factual it’s incorrect to claim that it is.

            I have not denigrated your imagination. My claim is that imaginary properties don’t belong in the factual definition of a phenomenon, in this case, consciousness.

            BTW Einstein was a scientific realist who greatly valued factual evidence. The inconsistencies between extant theory and confirmed observational facts were what drove his imagination in search of a resolution between theory and fact.

            Liked by 1 person

          3. Stephen,
            You’re the only person that I know of who uses the term “fact” regarding scientific speculation. I think there’s a great reason that the rest of us reserve that term for the realm of metaphysics. Because science is provisional rather than absolute, that term seems not to belong. No theory, nor definition, nor bit of evidence, is considered “fact”, or “truth”, or “noumenon”. In science they’re always provisional. And because the rest of us don’t use the term “fact” for science, we’ll never claim that it’s a fact that something lifeless could be humanly constructed to have feelings. Today we simply observe that if evolution uses mere physics/chemistry to construct things with feelings, then theoretically, if we grasped those dynamics, we might also build something like that. Therefore today we’re not going to define the consciousness term to be inherently biological, since that would be making a grand extra claim that we have no evidence to support.

            While you could now tell me about the need for facts in science, which I would deny, here’s a potential way forward. I have recently proposed a way to test Johnjoe McFadden’s cemi theory: put a transmitter in the head of a test subject that simulates the standard sorts of synchronous firing of thousands or millions of neurons, to see if this affects a theorized standing electromagnetic wave that exists in itself as feelings. Do you think that this would be a valid way to test his theory? If not, then why not? If such a transmitter were to affect conscious experience, and perhaps in later years scientists were able to directly give a person various specific types of feelings by using certain signature simulated firing patterns, do you think that you’d eventually put stock in McFadden’s theory?

            Like

          4. Eric, you seem to be confusing observations with the scientific theories that attempt to account for those observations. Confirmed observations are indeed facts, a ‘fact’ being “something that is known or proved to be true.” Scientific theories are provisional and can be superseded or modified based on facts. If you would deny (as you write) the need for facts (observations) in science and further claim that the word ‘fact’ is reserved for metaphysics, you’re probably alone in your understanding about how science works.

            It is a fact that consciousness is a biological phenomenon. If you have observations of non-biological consciousness that can be confirmed by others you would have documented them long ago.

            EM wave consciousness theory is another evidence-free proposal whose proponents are responsible for demonstrating its validity. As with all such proposals I’m aware of, resolution of the contents of consciousness is being confused with the phenomenon of consciousness—they are not the same thing.

            Liked by 1 person

          5. Okay Stephen, I now see that you’re defining “fact” the way that most people use the term “observation” (with fact generally being associated with “truth”, and observation generally being associated with “experience”). So every time that you use the “fact” term I’ll now instead interpret it as “observation”. Thus — “It is an [observation (rather than fact)] that consciousness is a biological phenomenon.” Agreed!

            Except of course that I’m still confused why any naturalist would assert that consciousness must be biological simply because no human-made machines experience their existence, when naturalism mandates that some day the human might build such a machine. Why would you limit your consciousness definition to biological processes given mere human ignorance and ineptness today? In general, when scientists define their terms (and certainly Einstein did), they make sure to keep things as simple as possible and so don’t add stipulations such as “biological only”, given the potential for technological mechanisms to work as well.

            Regarding EM wave consciousness theory as “another evidence-free proposal”, what I’ve done is propose what I currently consider to be a relatively simple way to validate or refute McFadden’s theory. This would make his proposal quite special scientifically, if not unique. So here’s the question again. If scientists were to verifiably affect subjective experience by means of a properly run EM skull transmitter, or could even cause certain specific subjective experiences this way, would you then put credence in EM consciousness proposals? And if not, then how would you fault such experimental observations? (I believe that you tend to call these “facts”.)

            Like

          6. Eric, I looked up the definition of fact of which several are available on the Internet. Cambridge Dictionary, for instance, defines fact as:

            “[S]omething that is known to have happened or to exist, especially something for which proof exists, or about which there is information”

            The lead definition from Google is “a thing that is known or proved to be true.”

            A fact is not the equivalent of an observation although repeatedly confirmed observations are facts.

            I wouldn’t volunteer to wear your EM headgear but its effects wouldn’t prove or disprove any consciousness theory.

            Liked by 1 person

          7. Stephen,
            Regardless of those definitions, I still dislike the “facts” term in science. Here people could claim that their preferred definition is “factual” even when other definitions happen to be more useful in general. For example, “life” could be defined as an Earth-based dynamic given the “fact” that we only know of Earth-based life. Regardless, when you use “fact” I’ll now interpret it as “highly verified observation”. And yes, I do consider it to be a highly verified observation that what’s conscious seems to always be alive. I just don’t consider it productive to add this clause to a consciousness definition, given that some day we might use the same physics that evolution does to create non-life examples of it.

            Speaking of which… so you doubt that my proposal would be a valid way to test McFadden’s theory? That’s fine. For your doubts to be credible, however, you’ll need to provide some reasoning as well. Why would a transmitter in the head that produces EM waves very similar to the ones that synchronously firing neurons are known to create, and which either would or wouldn’t be observed to affect a given subject’s consciousness, not test the validity of McFadden’s proposal? One way to go here would be to demonstrate that I’ve got the physics wrong.

            I suspect that you haven’t been interested enough in McFadden’s theory to understand how it works, and so can’t say whether or not my proposal would effectively test his theory. That would be fine as well. I probably ought to be discussing this issue with someone who does grasp it, such as James Cross. McFadden unfortunately hasn’t gotten back to me, so I don’t know if he’s read my proposal. If he has and considers it valid, then why wouldn’t he tell me? And if he has and doesn’t consider it valid, why wouldn’t he tell me what he considers wrong with it? Those are some questions I’ve been wondering about lately.

            Like

        3. Stephen,

          Although Eric is unable to come up with a single scientifically verifiable feeling, I certainly can; four (4) to be exact: what we call the nuclear forces, both strong and weak, the electromagnetic force, and the force of attraction we call gravity. All of these forces are valences, feelings that are non-conceptual “representations” of value. This simply means that these feelings do not require the existence of a mind to conceptualize them in order to be felt. This is exactly what we observe, and that is why I’ve developed a revolutionary and novel metaphysical model called pansentientism.

          If you could understand pansentientism properly, within the context in which I’ve developed it (which you cannot, because I have not explicated it on this blog site), you would immediately recognize that pansentientism corresponds to and complements your “simulation in feelings” hypothesis for consciousness.

          Like

          1. Since we’re quibbling over terms, where and in which dictionary are valences “feelings” except in the psychological sense of referring to living organisms?

            Like

          2. There aren’t any, James, except for the definition that is used in physics, which is obviously not the same. Based upon the psychological definition, valences are feelings, expressed as non-conceptual representations of value. In order to build a metaphysical model using this definition, one must be willing to recognize the use of the term non-conceptual in that definition. According to any dictionary, non-conceptual simply means “not conceptual or related to ‘mental’ concepts”.

            Now, one can stop right there and eschew this meaning as useless to physics; however, one must be compelled to consider the mind/matter dichotomy before acting so hastily. If something is not related to mental concepts or mind, then by definition it is not limited to biology either and must include all forms of matter. Although pedantic, that’s a pretty simple and straightforward extrapolation, one that you may not be willing to endorse.

            Like

          3. James,

            I feel compelled to point this out: you cannot go out and purchase a book on Amazon that addresses the concepts I’m coming up with. I’m just a tired old hippie like yourself who’s a little off kilter, so this kind of shit comes naturally to me. It is the actual vocabulary that’s the difficult part. Essentially, my model of pansentientism is grounded in a materialist framework. What makes it different from the existing materialism paradigm is that I substitute sentience for the notion of law, the laws of nature and/or the laws of physics. Furthermore, since sentience is universal, sentience becomes fundamental in explaining motion that results in form, whereby all systems are effectively emergent systems utilizing the same fundamental sentient raw materials. And since sentience is universal, the sentient emergent system that we call mind, with its unique experience of consciousness, is no longer a paradox for materialism.

            Like

          4. Lee, you claim physics forces are feelings. In physics, a force is any interaction that, when unopposed, will change the motion of an object. I’m sure physicists would agree that nothing is felt in such interactions. I’m additionally puzzled by your “… feelings do not require the existence of a mind … in order to be felt,” although the word ‘feel’ certainly implies the existence of “one who feels.” You seem to be using creative definitions and equivocating terminology to substantiate your pansentientism metaphysics.

            Regarding your claim that “sentience is universal” would you please provide the evidence that supports that conclusion? I’m referring to the original meaning of the word ‘sentience’ which is “the capacity to feel”—sentient means feeling—and I take your use of the word ‘universal’ to mean “existing everywhere.”

            Like

          5. Stephen,

            As a metaphysician, I don’t take my lead from physicists; I think for myself.

            “You seem to be using creative definitions..”

            Nope, I’m using standard agreed upon definitions for my model. Our current metaphysical model of physics is incomplete and has gaping holes in it big enough to drive a semi-truck through. First and foremost, it does not and cannot account for causation, and second, it cannot account for the hard problem of consciousness itself.

            Physics does a pretty good job at providing descriptions, but that’s about it. As a metaphysical model, pansentientism not only provides descriptions, it provides reasonable, pragmatic explanations for what we observe, explanations that do not rely upon the magic of so-called laws, explanations that are 100% inclusive, explanations that contain zero contradictions and no paradoxes.

            Don’t get carried away now, Stephen, and start adjudicating my theory based upon evidence that I have not provided.

            Like

          6. Stephen,

            “Regarding your claim that “sentience is universal” would you please provide the evidence that supports that conclusion? I’m referring to the original meaning of the word ‘sentience’ which is “the capacity to feel”—sentient means feeling—…”

            Conduct your own experiment, Stephen: grab a magnet and an iron nail and play with them on your own kitchen table. Then you can decide for yourself if the reason that the nail moves toward the magnet is because of valences, which are “non-conceptual” representations of value on a gradient from the “feeling” of something positive (+) to the “feeling” of something negative (-), like an electromagnetic charge. Or you can choose to buy into the hocus pocus bullshit we’ve all been spoon-fed since kindergarten: that the reason the nail moves toward the magnet is because of some law that COMMANDS unwavering, unquestioning obedience from its unknowing, unsuspecting subject.

            Is the nail aware of the “feeling” of something positive (+) or negative (-)? Damn straight. Does it have to internally process that information before it responds, like panpsychism asserts? Absolutely not. Because those “feelings” are non-conceptual, and according to the dictionary non-conceptual means “not of or relating to ideas or concepts”. A valence response is an immediate action, just like any involuntary reflex we experience ourselves.

            Have fun kids……

            Like

          7. Lee, the nail moves in relation to a magnetic field because magnetism is a force which, in physics by definition, changes the motion of the nail. No sentience is either observed or required. No law is commanding unwavering obedience. The consistent behavior of a nail vis-à-vis a magnetic field is an observation. I’m not a physics guy, but I expect that current efforts at a Theory of Everything are seeking an explanation for these kinds of observations.

            There’s zero evidence in your nail-magnet experiment to substantiate your sentience claim that the nail is “non-conceptually aware” of anything, but evidence is customarily irrelevant in metaphysics, so carry on! 😉

            Like

    2. Stephen –

      I’ve been working slowly on my review of Solms’ book, so I’ve been thinking some about the consciousness-as-feelings approach. I have some issues with it and wonder what your take would be.

      If we use the notion of “feelings” as in Nagel’s “something it is like to be”, then saying that consciousness comes from feelings seems to be just restating the definition of consciousness. It doesn’t provide us with any more insight.

      On the other hand, if feelings are something more, I can’t see Solms’ point that feelings are always conscious. Coming from a Freudian background, Solms, I would think, would realize that feelings can be unconscious. Is there a conscious feeling of sexual desire for the mother and fear of the father in the Oedipus complex? It would seem that unconscious feelings abound throughout the various complexes and neuroses of Freudianism.

      In addition, I can’t see how feelings can be so easily detached from cognition and other mental processes. My feelings on seeing a snake might be curiosity with some degree of avoidance. For another person, it could be dread or fear. For a herpetologist, it might be a desire to approach and examine. So the framing of feelings comes from cognition and other processes.

      Liked by 1 person

      1. James, I wouldn’t say that “consciousness comes from feelings” but that all of the contents of consciousness are feelings. If you’ve read my response to Mike, just above, you’ll see that I define consciousness as a simulation in feelings of an organism centered in a world. The contents of the simulation/representation are all feelings.

        Feelings, particularly emotional feelings, do not always become conscious as Damasio and others point out.

        Feelings are not detached from cognition, if by ‘cognition’ you mean ‘thought’. Thinking is itself a feeling. Curiosity, dread and trepidation when a snake is recognized (or imagined) are all feelings.

        Like

        1. So there is a difference between consciousness and the contents of consciousness?

          Solms, I think, also seems to try to make that kind of distinction.

          At any rate, with such an all-inclusive definition of feelings, it seems you are close to Nagel’s definition of consciousness. And, if feelings are not always conscious, then what is the difference between conscious and unconscious ones?

          Like

          1. To clarify somewhat: I don’t think there is a difference between consciousness and its contents. I think part of the contents is the self, and it is the self’s sense of “knowing” that is what consciousness “feels” like. But the sense of “knowing” is also part of the contents. In other words, the whole thing is bootstrapped up by itself, and in the end the parts of the whole system are indistinguishable.

            Like

          2. James, yes there’s a difference: consciousness is the phenomenon of experiencing and the contents of consciousness are what is experienced—the ‘qualia’ if you will. I’ve been slowly rereading Edelman and Tononi’s A Universe of Consciousness and they discuss the contents of consciousness early on but then end up discussing consciousness and its contents as if they were interchangeable. I suspect that’s because a lot is known about how the brain goes about resolving the contents, but we know absolutely nothing about the actual generation of a feeling or where in the brain that takes place. Eliminating a discussion of the contents of consciousness would make for a very short book … 😉

            My understanding from what I’ve read is that both conscious and unconscious feelings are resolved as pre-conscious contents but only the conscious ones are felt.

            Invoking the self and knowing is a bit too anthro in my opinion. Our discussions of consciousness shouldn’t neglect the consciousness of other organisms.

            Like

          3. “I suspect that’s because a lot is known about how the brain goes about resolving the contents, but we know absolutely nothing about the actual generation of a feeling or where in the brain that takes place.”

            Maybe because it is the same? The experiencing and experienced aren’t really different. We just have an illusion that they are.

            “Invoking the self and knowing is a bit too anthro in my opinion. Our discussions of consciousness shouldn’t neglect the consciousness of other organisms”.

            Nothing in self or knowing precludes consciousness in other organisms.

            https://broadspeculations.com/2020/08/29/origins-of-qualia-and-self/

            Like

  13. Hello, Mike. I’m back again, although this time I just wanted to share with you a quote that I just read and that reminded me of you and your position on the HPOC:
    “If by any chance a way to a deeper penetration into this matter should present itself, surely, considering the significance of memory for all mental phenomena, it should be our wish to enter that path at once. For at the very worst we should prefer to see resignation arise from the failure of earnest investigation rather than from persistent, helpless astonishment in the face of their difficulties.”
    – Ebbinghaus, H. (1885).
    Hermann Ebbinghaus wrote this against the pessimism of experimental psychologists of his day, who thought of memory as an intractable problem for psychology. Authors like W. Wundt usually focused on perception and sensation, but Ebbinghaus went a step further, toward higher-order cognition, with his experiments using nonsense syllables.
    This could be applied to the HPOC too. Perhaps, one day, some clever H. Ebbinghaus will shed light on how to proceed in scientific research about qualia, showing that the HPOC is not as intractable as some people (like me, honestly speaking) believe it to be.

    Liked by 1 person

    1. Hi Fred,
      Definitely, anything that is scientifically intractable today could become tractable in the future. I think about Einstein’s 1935 paper on quantum entanglement, which was scathingly criticized at the time as metaphysical navel gazing. Three decades later, John Stewart Bell found a way to test it.

      But the thing is, I think we have modern day Ebbinghauses already in cognitive neuroscience. They are making steady progress. Often these folks won’t say they’re working on the hard problem, just to keep the philosophers off their back. But if you look at the details of what they’re studying, it heavily pertains to consciousness.

      And that’s before we get to scientists explicitly studying consciousness, such as Stanislas Dehaene, Michael Graziano, Todd Feinberg, Jon Mallatt, Simona Ginsburg, Eva Jablonka, Antonio Damasio, Joseph LeDoux, Hakwan Lau, Victor Lamme, or Christof Koch. If consciousness seems hopeless for science, the work of these folks is worth checking out. (Search my blog for their names for introductory posts.)

      Liked by 1 person

  14. “Along those lines, the question I would have is, what does that very common phrase, ‘something it is like’, mean? (Other than its synonyms: subjective experience or phenomenal consciousness.)”

    I will try to explain it in a different way. It is worth paying attention to the difference between beliefs and seeing experiences. You have a standing belief that New York is a city in the USA. You have this belief even at times when you don’t think about it. It is plausible that this belief reduces to a functional role, to your dispositions: for example, the disposition to say “USA” when somebody asks you in which country New York is located.

    By contrast, when you see something there is something more than certain dispositions: the disposition to say what you see, the disposition to walk toward the seen object, et cetera. There is also an image in your self, your visual field. Do you notice it? Of course, there are significant functional differences between seeing and believing, but there is also the difference indicated above.

    Like

    1. I appreciate the effort. I don’t dispute that there is visual imagery. I dispute that it’s something other than a functional conclusion (or more accurately a collection of conclusions, predictions, inferences). To illustrate, consider the two optical illusions below.

      Is this a young woman looking away? Or an old woman looking to the left? Which is the internal image?

      The surfaces of A and B are the same color. You can verify this by covering the center with your finger. Are A and B the same color in your internal image when looking at them without your finger?

      http://brainden.com/

      The point of these examples is that visual imagery is not a movie playing in your head. It is predictive modeling by your nervous system. Visual illusions can disrupt that modeling, making clear that it’s a neural evaluation, one that can be affected by all kinds of factors.
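
      (If you want to check the color claim for yourself, a few lines of Python with Pillow will do it. The file name and pixel coordinates below are hypothetical placeholders for a saved copy of the illusion and for points inside squares A and B.)

      ```python
      # Sample one pixel from each square and compare the raw RGB values.
      from PIL import Image

      img = Image.open("illusion.png").convert("RGB")  # hypothetical local copy
      color_a = img.getpixel((120, 80))    # assumed point inside square A
      color_b = img.getpixel((200, 160))   # assumed point inside square B
      print(color_a, color_b, color_a == color_b)
      ```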

      Like

      1. “Is this a young woman looking away? Or an old woman looking to the left? Which is the internal image?”
        Neither. This is a matter of interpretation. The internal image contains merely color spots.

        “The surfaces of A and B are the same color. You can verify by covering the center with your finger. Is A and B the same color in your internal image when looking at them without your finger?”
        No. When I have the illusion that the colors are different, I have a mental image with different colors.

        Like

        1. Right. Which woman you see is an interpretation.

          But the point I’m trying to make is which color you see is also an interpretation. To drive it home, consider these other color illusion examples.

          Both dogs are the same objective color.

          You can see more at: http://brainden.com/color-illusions.htm

          The dresses are the same objective color.

          https://xkcd.com/1492/

          Mental imagery, including colors and shapes, is a functional interpretation, an inference, a predictive model used for making behavioral decisions.

          Like

          1. But we have that effective mental imagery in the first place. Interpretation and prediction come later, or higher in the cognitive chain. For instance, in your first example, the young and old woman, everyone sees one or the other, or both. We don’t see a spaceship or a turtle.

            Like

          2. “But the point I’m trying to make is which color you see is also an interpretation. To drive it home, consider these other color illusion examples.”
            But I also have color qualia. I haven’t got merely the disposition to say that these colors are different. I also have the different colors in a private image in my self, unobservable from the outside.

            Like

          3. Wyrd and Konrad,
            The thing is, the effective mental imagery, the qualia, are interpretations, just at an earlier level than the interpretation of the shapes and objects. The brain doesn’t capture an image from the eye and then interpret it. There is no pre-interpretation image. You might think such an image could reside in the visual cortex, but as I noted in the post, what hits the retina and is mapped to the early visual cortex is very different from what we actually perceive.

            Even what’s in the early cortex is an interpretation. The interpretation begins immediately, in the retina, with three layers of neurons before it’s transmitted up the optic nerve.

            This makes sense if you think about it. The brain does all the work of transducing the pattern that hits the retina into meaningful models. Why would it reverse that process to produce a raw image, which it would then need to re-transduce to interpret again? That work is done already, and the brain takes it and immediately moves on to further analysis.

            Our impression of a baseline image comes from the fact that we only have access to certain levels of the interpretation. No matter how hard we introspect, we can’t access the early patterns. The connections don’t seem to be there for it, likely because it wouldn’t be adaptive. What we get has already been pre-interpreted. And most of what we recognize is interpreted automatically and unconsciously. It just appears in our awareness as a complete perception. But it’s a result, a complex constellation of conclusions.

            Like

          4. Mike,

            Correct me if I’m wrong on this, but I think I’ve read that the amount of information that actually makes it from the retina to the brain is a small fraction of the available information.

            Like

          5. James,
            That’s my understanding. I don’t remember the number for the bandwidth of the optic nerve, but it’s surprisingly low.

            This raises the interesting point (which might be what you’re getting at) that our perceptions are mostly internal, from the inside out, with sensory data providing feedforward error correction for the feedback predictive processing, rather than solely driving all the perceptual processing.

            Like

          6. I went back and looked at We Know It When We See It by Richard Masland.

            Apparently every point in a visual scene gets handled by thirty different types of ganglion cells. Some of these just track light movement – left-right, right-left, up-down, down-up. Others track intensity. Others edges. The most salient features get reported up the chain and the image gets assembled from the fragments. Nothing like a simple stream of pixels.

            Like

          7. I hadn’t seen that book on vision before. Looks interesting. Do you recommend it?

            That description matches what I’ve read, although I’m not sure about “assembled from the fragments”. It might be more accurate to say that numerous associations get triggered, some of which give us a “gist” of what we’re seeing. If we check out any of the details, that is, if the system dedicates resources to processing one of the fragments, it’s only then that the information from that fragment is accessed. Since all the fragments are there when we (the system) check, it feels like the whole image is always there in its wholeness, but that’s only true in the sense that all its conceptual associations are triggered and available for access in the brain overall.

            Like

          8. Recommend it, yes. It gives a great and pretty easy-to-read overview of current knowledge about how vision works, and also of how much is still not known about it.

            Like

          9. It also really gave me a somewhat different perspective on the whole brain and nervous system. He recounts how skeptical some scientists were when they kept finding different types of ganglion cells in the eye. There seemed to be an expectation that there would just be a few types and that they could be interchanged, but the evidence is that there are thirty or more highly specialized types. Apparently the whole brain and nervous system may be that way. Instead of a few types of neurons/processors, there might be hundreds or thousands of different types with highly specialized roles.

            Favorite quote: “The world you think you see is not the world that actually exists”.

            Like

          10. Depends on which illusion we mean. The old woman/young woman is a deliberately ambiguous image we interpret at a high level. The color stuff is a different class, related to how we still see “white” under a variety of color-temperature illuminations. That is, as you say, very low-level.

            A system that analyzed the color illusions wouldn’t be fooled because it would be looking at the actual pixel values. (I’m not aware of any ANN work on optical illusions along the lines of deliberately ambiguous images. I do know they can generally be fooled in ways that wouldn’t give a two-year-old pause.)

            Both types, for us, demonstrate the “something it is like” to be a perceptive brain, so maybe I’m not really seeing the point you’re trying to make?

            Like

          11. Interestingly, the older you are, the more likely you are to see the older woman first.

            We learn to see from our early contact with humans, interpreting many of our social cues from faces. In one study, monkeys were raised without ever seeing faces, and the parts of their brains normally used to visually process faces were instead used to process hands.

            So facial recognition in particular is especially learned and interpretative.

            Like

          12. Indeed. Many westerners, for example, have trouble differentiating Asian faces, and some have trouble with Black faces. On the other end of that, it’s not uncommon to be able to recognize someone you know well from the back or from the way they walk.

            Like

          13. Not having spent a lot of time in large cities, I actually have a hard time recognizing locations. All the locations in the downtown part of a city look very similar to me. Things get easier as soon as we reach more suburban locations.

            Like

          14. The interesting thing about perceptual illusions is that they demonstrate ANNs aren’t alone in being able to be fooled.

            On that point, it might pay to review Konrad’s sub-thread here.

            Like

          15. The flaw is in extrapolating from the pieces to an as yet unknown whole and assuming the extrapolation is the only possible answer. If it were that easy, I think we would have more clear answers by now. Instead, we can’t even agree on a definition of consciousness, let alone how it emerges.

            Like

          16. I’m working with the evidence we have now, which is extensive and constantly growing. If new evidence comes in that changes the picture, I’ll adjust. It’s worth noting that I’m not the one here making an assertion beyond the evidence.

            I think the difficulty in defining consciousness is a philosophical problem, not a scientific one. Cognitive neuroscience makes progress while mostly ignoring the issue.

            Like

        2. “Our impression of a baseline image comes from the fact that we only have access to certain levels of the interpretation. No matter how hard we introspect, we can’t access the early patterns. The connections don’t seem to be there for it, likely because it wouldn’t be adaptive. What we get has already been pre-interpreted. And most of what we recognize is interpreted automatically and unconsciously. It just appears in our awareness as a complete perception. But it’s a result, a complex constellation of conclusions.”

          That would be plausible from an external viewpoint, but I am experiencing this movie directly. I can conceive of the external world not existing, but I can’t conceive of the skeptical hypothesis that I don’t feel pains, don’t see colors, and that I haven’t got a subjective (prima facie invisible from the outside) movie. Everything I get comes through this movie (even my functional states). It’s more conceivable for me that I haven’t got any functional states.

          There are two ways of being me: external, and subjective, accessible only to me. You correctly describe my external being.

          Like

          1. Konrad,
            If you check my points above, I don’t deny the subjective self. My point is that the subjective self and the objective self are one and the same.

            But we seem at an impasse, and won’t resolve it today. Thanks for the discussion!

            Like

          2. Do you think that your subjective being isn’t a mystery? Notice that it is plausible that robots haven’t got any subjective being (merely an external being).

            Like

          3. I don’t know if you’re familiar with the distinction David Chalmers made between “easy” problems and the “hard” problem.

            His examples of the “easy” problems:

            the ability to discriminate, categorize, and react to environmental stimuli;
            the integration of information by a cognitive system;
            the reportability of mental states;
            the ability of a system to access its own internal states;
            the focus of attention;
            the deliberate control of behavior;
            the difference between wakefulness and sleep.

            http://consc.net/papers/facing.html

            These are scientifically tractable (although not easy by any stretch), as compared to the “hard” problem, which he says isn’t. I think the “easy” problems exist, and an enormous amount of work is needed to solve them. It will likely take decades, if not a century. But progress on them is steady.

            The “hard” problem is, to me, just all the “easy” problems combined. Progress on them is progress on it. Chalmers omitted discussion of the neuroscience of affects from his “easy” problems, which is telling, because much of what he points out as missing relates to them.

            Chalmers’ mistake, and a common one, is to treat the whole as something separate from the components. Another philosopher, Gilbert Ryle, writing 45 years before Chalmers, pointed out that this is a category mistake, akin to a tourist at Oxford who, having seen the lecture halls, dorms, and administrative offices, and having met faculty, staff, and students, then asks, “But where is the university?” It’s easy to see the mistake here, but much harder with consciousness, because it’s about us in the most intimate manner, and very difficult to be objective about.

            Like

          4. “The ‘hard’ problem is, to me, just all the ‘easy’ problems combined.”

            Wow. Once again you demonstrate you can’t hear a word I say. I give up.

            Like

          5. I am familiar with David Chalmers, and I think that the easy problems combined can’t give the hard problem. The easy problems are related to my external being (how my behaviour is caused). The hard problem is the problem of why I also have a subjective being. Notice that it is plausible that you have merely an external being at the time of dreamless sleep (everything about you sleeping can be observed from outside).

            Like

          6. The hard problem is a philosophical problem not resolvable by science. You can’t explain subjectivity from the outside because it is inside the experience. I’ve referred to it as a trap for materialists, and ultimately unanswerable even in philosophical terms, because it is similar to the “why is there something rather than nothing” problem.

            Like

          7. External vs internal is a change in perspective. It’s kind of like seeing a building and the world from inside the building vs seeing it from outside the building. But unlike with a building, we can’t have the transition from one perspective to another, cluing us in that they are just different perspectives. We can only see from inside our own building, and see other buildings out there.

            But while we can never have someone else’s perspective, we can account for their (and our) perspective in external terms. And as that accounting is taking place, we have to be careful not to lose track of what’s already been accounted for. A lot has already been broadly accounted for, with many holes left to fill in, but nothing indicating that the enterprise as a whole is infeasible.

            The only thing that is infeasible is for us to have each other’s perspective in all its dimensions. It should be noted that that’s not unique to conscious beings. My laptop can never be in the same informational state as my iPhone. It may be able to contain it in a virtual machine, but that’s not the same as being in its state, processing information the way it does, etc.

            I’ve called this the “hard truth” before rather than the “hard problem”. It’s an epistemic limitation with unavoidable blind spots (which can be compensated for). It just recognizes that any system processes information from its unique perspective and mechanisms.

            Like

          8. “It just recognizes that any system processes information from its unique perspective and mechanisms.”

            I would say that this statement summarizes your own blind spot the best, Mike. Information processing, along with the process itself, does not “feel” like anything; it is absolutely devoid of feeling, benign, inert, and lifeless. This is the very reason researchers think that most information processing is unconscious: because it doesn’t “feel” like anything. But in contrast to the lifelessness of structure, which is the intellect, our experience, the one that everyone values over information processing, is full of “feeling”, the very ground of a life worth living. In spite of this “pure” structure of intellect, we as a species will trust the intuitive “feelings” grounded in sentience any day of the week over the cold hard facts of information alone.

            Only the cold, calculating, lifelessness of structured logic will solve the hard problem, but that type of intellect is rare, very, very, very rare indeed my internet friends.

            Like

          9. Lee,
            I think feeling (in the sense of affects) involves a valenced arousal, a disposition toward certain actions. This is causation, which I recall playing a central role in your own views on sentience. The thing is, I see information as causation. So to say that feelings are information processing is to say they are causal. This makes sense when you remember that all information is physical.

            Like

          10. All of this becomes much simpler (although still not solving the hard problem) if you actually posit some kind of physical substrate for consciousness. That, more than anything else, is what draws me to the EM field theories. Now it could be something other than EM fields, something we don’t know about, or something hidden in something we already know. The something else might even be information or something closely related to it, but, if so, there must be aspects of it we don’t understand.

            At any rate, it seems to me to work in a wave-like manner even if it is not electromagnetic.

            Like

          11. What is it about adding an additional substrate (beyond the neural one) that you think makes it simpler? For me, if EM fields turned out to be causal in some manner (other than as stochastic noise), it seems like they would just be another information processing substrate.

            Like

          12. What does information on a disk drive cause? Nothing by itself.

            Information in a thermostat can cause the heating or cooling to turn on, but there is an implementation mechanism that it trips. Information in the brain might trigger motor actions, and it does, but most of those are automatic, without consciousness. Consciousness adds little to the mix.

            Yes. McFadden’s theory is an information theory. He is clear about that in his latest paper.

            Like

          13. When thinking about information on a hard drive or in a thermostat, we tend to look at it through the lens of their engineered purposes. So we think of the processor as “retrieving” information from the hard drive, which does imply something inert.

            But an alien scientist trying to understand what’s going on might interpret it as the patterns on the hard drive having causal effects (provided other conditions are right) in the processor, or the information in the thermostat having causal effects (again provided other conditions) in its motor mechanisms. (The other conditions requirement is similar to one input to an AND gate being on not being causal unless the other one is on too.)
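
            To make the AND gate analogy concrete, here’s a minimal sketch (my own toy example with invented names, not anything from the literature): the stored pattern produces an effect only in conjunction with the enabling condition, so the causation lives in neither input alone.

            ```python
            def and_gate(stored_pattern: bool, enabling_condition: bool) -> bool:
                """Output fires only when both inputs are on."""
                return stored_pattern and enabling_condition

            # The pattern by itself is inert...
            print(and_gate(True, False))  # False: no effect without the other condition
            # ...but becomes causal once the rest of the system cooperates.
            print(and_gate(True, True))   # True: pattern + condition produce the effect
            ```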

            In technology, we tend to think of information storage as separate from action. Nature doesn’t seem to have that hang-up. Which, I think, is why storage and action are one and the same in the brain.

            McFadden’s theory being an information one makes it more plausible. What’s missing, for me, are reasons why we need it. Of course, new data could change that tomorrow.

            Like

          14. I could more readily accept a purely information theory as plausible if some condition was added that allowed me to distinguish between a human with a brain processing information and my laptop processing information. IIT tries to do that but I know you don’t like IIT. So, do you have something that makes the two different that explains why one is conscious and the other (I’m pretty sure) isn’t?

            Like

          15. Ironically, IIT is more of an identity theory about structure than an information processing one. I’m not a fan of identity theories because the identity relation ends up being a brute fact without explanation. (Some IITers take themselves to have provided an explanation, but I’ve never been able to find it comprehensible.)

            I think the answer to your question is that while consciousness is information processing, not all information processing is conscious. It’s the same relation that blogging has to information processing. Blogging is information processing, but not all information processing is blogging. What makes a particular system a blog? There is a list of attributes, like a web site, reverse chronology, commenting, etc. But we can argue about edge cases; for example, Twitter was once billed as microblogging, but most people today don’t consider it a blog.

            In the case of consciousness, the system needs to include automatic reactions (reflexes), models of the environment and self (perception), prioritization of information to respond to (attention), and predictions of the automatic reactions for use in selectively inhibiting or allowing reactions (feeling and volition). It might also include longer range simulations (imagination), modeling of a selection of its own processes (metacognitive self awareness), etc. Just like with blogging, there are plenty of edge cases we can argue about.

            Your laptop probably shows plenty of automatic activity. It might even be like mine and log you in by looking at your face, but its modeling is very limited. It shows no sign of attention, volition, imagination, or metacognition. (If it does, please let me know the brand and model. 🙂 )
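
            To illustrate the checklist idea (a toy sketch of my own, with made-up names, not a test anyone actually uses), the attribute list works like the blog analogy: tally which capabilities a system shows, and argue about the edge cases.

            ```python
            # Toy checklist of the capabilities listed above. Purely illustrative.
            CAPABILITIES = [
                "reflexes",       # automatic reactions
                "perception",     # models of the environment and self
                "attention",      # prioritization of information to respond to
                "volition",       # predicted reactions, selectively inhibited or allowed
                "imagination",    # longer-range simulations
                "metacognition",  # modeling a selection of its own processes
            ]

            def profile(system: set[str]) -> str:
                """Report which of the capabilities a system displays."""
                shown = [c for c in CAPABILITIES if c in system]
                return ", ".join(shown) if shown else "none"

            print(profile({"reflexes"}))       # a laptop: automatic activity only
            print(profile(set(CAPABILITIES)))  # all six, with edge cases in between
            ```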

            Like

          16. Interesting grab-bag of traits inspired by your hierarchy. In each case, you have just described a trait found in living organisms in information processing terms (implying some theories in the process). You haven’t made any compelling case why information processing in any of these cases requires consciousness, why or how consciousness would emerge from them.

            To use your blogging analogy, unless you are arguing that everything is information processing, blogging involves information processing but is not just information processing. It is more than moving bits from a disk drive to a display on a monitor through the internet. That could be done with random bits without discernible content. So too it strikes me that consciousness involves information processing but it is more than information processing.

            Like

          17. I guess it depends on how you feel about philosophical zombies. If you accept the idea, then no capabilities can ever be sufficient evidence for the presence of consciousness. But I think the ones I list would at least make us wonder if we weren’t dealing with either a conscious entity or a zombie.

            Certainly the information processing can’t involve random patterns. To have a meaningful blog or meaningful consciousness, the information must have correlations with affordances such that the effects they generate are productive.

            Like

          18. It seems like the traits you mentioned, except for maybe the longer range simulations, could easily be traits of a Roomba.

            Your argument to me seems like:

            1- Consciousness is a set of attributes, traits.
            2- We can emulate each of those attributes with an information processing device.
            3- Therefore, consciousness is information processing.

            The only debate would be over what set of attributes, and whether there would need to be some critical mass of sophistication to qualify. It actually seems to reduce us to philosophical zombies, with consciousness a non-essential add-on.

            Like

          19. From a dualist mindset, broadly covering everything from substance dualism to McFadden’s matter / energy dualism, the functionalist view seems to be advocating for us all being zombies who think we’re something more. But from a functionalist perspective, we’re just talking about the functionality of consciousness.

            Like

          20. “The thing is, I see information as causation.”

            This is an erroneous conclusion, Mike, because in isolation, information is inert and absolutely useless unless there is something else to drive it. We need to be looking for a substrate, as James suggested, and that substrate would be sentience.

            The emergent “feelings” correlated with sentient properties are a priori, which means they only exist in one’s mind, or they are quantum systems. This assumption would suggest that sentience is embedded in the structure and therefore cannot be detected by a posteriori methods. Whereas, since structure is a posteriori all the way down, it is a material substance that we can verify utilizing a posteriori methods.

            Since structure is a posteriori, we can utilize science and the instruments of science to analyze that structure. But if sentience is an intrinsic property of structure, one that is quantum, then we cannot verify its existence with a posteriori scientific methods. Only a priori and synthetic a priori assessments can make that determination, and that requires the structure of intellect which is a property of this system we call mind.

            As a side note: I am “convinced” of very little, but I am beginning to be gradually persuaded that mind is a quantum system that emerges from the classical brain.

            Like

          21. Lee, that makes sense in your view, which sees sentience as something fundamental. But that’s not my view. The reason is that sentience, affective feelings, can be reduced or eliminated by various brain injuries. Conditions like akinetic mutism, abulia, or pain asymbolia, which seem like striking reductions in a person’s ability to feel, can result from injuries to various locations in the brain. From this, it seems pretty clear to me that sentience is functionality, functionality that can be impaired if the mechanisms are damaged.

            Of course, you could see it as those mechanisms concentrating sentience, so that when they’re damaged, it’s the concentration that’s impaired rather than the sentience itself. This, it seems to me, ends up being operationally the same as the information processing view. The metaphysics might be different, but it seems like we’re now in the territory of the unknowable.

            What “convinces” you that the mind is a quantum system?

            Like

          22. “What “convinces” you that the mind is a quantum system?”

            There are several things, but I will just list one for now.

            As a system, mind can hold all ideas and/or possibilities in a superposition until a mental measurement is made, such as a decision. This intellectual measurement collapses all of the possibilities into a single, classical entity, a concrete idea.

            Like

          23. Interestingly, you’re describing a dynamic similar to the global workspace one, although in that theory, it isn’t a wavefunction collapse, but collections of circuits in competition with each other, with a particular coalition “winning”, inhibiting the others, and effectively becoming what is held in attention and drives decisions.

            Like

          24. What the rationale of the global workspace seems to forget is that the brain is a united whole, and the concept of a united whole is very important. As a united whole, its individual parts do not compete with each other “with a particular coalition ‘winning’, inhibiting the others, and effectively becoming what is held in attention and drives decisions.” The united whole, comprised of the individual parts, works together, a team effort that gives rise to another separate and distinct emergent system.

            Emergence works that way at every level of the evolutionary process, so why would this system we call the brain somehow be different? 🤨 The answer is simple: because nobody has ever entertained that the system we call mind is emergent, and that the emergent system is quantum. Seriously, I’m surprised that none of the academic whiz kids who work in adult day care have thought of it.

            Like

          25. A united whole that is emergent can be emergent from competition. It’s actually pretty hard to avoid the word “competition” when describing what neurons do to each other. They try to propagate their own signals while laterally inhibiting their peers. In the retina, this actually makes detection of shape edges possible. In the brain, it means that every circuit not getting boosted by attention ends up being suppressed.

            Put another way, competition at one level of description is just the emergent system doing its thing at a higher level.
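
            To see how inhibition alone can pull edges out of a signal, here’s a minimal sketch: a 1-D center-surround toy with made-up numbers, not a model of any actual retina.

            ```python
            import numpy as np

            # Each unit passes its own signal while subtracting a fraction of its
            # neighbors' -- lateral inhibition. A step in brightness produces
            # overshoot and undershoot at the boundary, so the edge "pops out".
            luminance = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
            kernel = np.array([-0.25, 1.0, -0.25])  # suppress left/right neighbors

            response = np.convolve(luminance, kernel, mode="same")
            print(response)
            # Interior units settle near 0.5 and 2.5, but the two units flanking
            # the step report -0.5 and 3.5: the edge is exaggerated, not averaged.
            ```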

            Like

          26. “They try to propagate their own signals while laterally inhibiting their peers”.

            Is that the way it works? I thought inhibition was primarily done by inhibitory interneurons, rather than by neurons able to play both roles. Key roles for inhibition are keeping the brain from running amok with uncontrolled firing, as happens during epileptic fits, and controlling timing so neurons can fire in synchronization.

            Like

          27. Remember that whether a neuron excites or inhibits another neuron is in the type of synapse between them. Based on what I’ve read, most of the inhibitory synapses go to the soma of the downstream neuron. So a single neuron can excite some neurons while inhibiting others. (It’s even possible it might have multiple connections to the same downstream neuron, some excitatory and some inhibitory.)

            Strictly speaking, what happens in the cortex isn’t lateral inhibition, but local inhibition. I’m basing this on this passage from Michael Graziano:

            The neuroscientist Robert Desimone helped to describe this competitive scrimmage in the cortex and aptly named it “biased competition.”24 I think of it as one of the primary organizational truths of the cortex. The local inhibition between neurons, which creates competition, dominates the machinery of the cortex. It’s not an accident that epilepsy is a disease of the cortex. An epileptic seizure occurs when that local inhibition fails.25 Signals that normally keep each other in check suddenly proliferate and turn into a wild surge of activity that spreads indiscriminately through the cortex. The disease shows just how much inhibition is the essence of the cortex and how drastically the system fails when the inhibition is not sufficient.

            In Chapter 2, I described a simple trick in the crab’s eye called lateral inhibition, in which nearby neurons suppress each other.26 The outcome of lateral inhibition is a sharpening of the image. Bright patches register as brighter; dim patches register as darker. Biased competition is the cortex’s version of lateral inhibition, blown up a millionfold, expanded from a local competition inside the eye to a teeming world of elimination rounds arranged in a vast hierarchy.

            Graziano, Michael S A. Rethinking Consciousness: A Scientific Theory of Subjective Experience (pp. 33-34). W. W. Norton & Company. Kindle Edition.

            Like

          28. This seems to be a question and answer on the topic.

            https://neurostars.org/t/can-a-neuron-be-both-excitatory-and-inhibitory/13286

            I think as a general rule neurons are either excitatory or inhibitory but not both. They may release more than one type of neurotransmitter, but, with a few exceptions, the transmitters are of the same type.

            From Wikipedia:

            In a 1976 publication, however, Eccles interpreted the principle in a subtly different way:

            “I proposed that Dale’s Principle be defined as stating that at all the axonal branches of a neurone, there was liberation of the same transmitter substance or substances.”[9]

            The addition of “or substances” is critical. With this change, the principle allows for the possibility of neurons releasing more than one transmitter, and only asserts that the same set are released at all synapses. In this form, it continues to be an important rule of thumb, with only a few known exceptions,[10] including David Sulzer and Stephen Rayport’s finding that dopamine neurons also release glutamate as a neurotransmitter, but at separate release sites.

            https://en.wikipedia.org/wiki/Dale's_principle

            I don’t necessarily interpret what Graziano is writing as saying that a neuron can do both.

            In a broader sense, I’m not even persuaded by the “competition” idea at all. This seems almost like some sort of projection of capitalistic economic principles onto neurons telling us competition is good.

            That inhibition is widely prevalent in the brain I have no doubt. As I indicated it is absolutely critical for stability and coordination. Its purpose is more related to cooperation than competition.

            Liked by 1 person

          29. Hmmm. It looks like I was oversimplifying above. Lateral and local inhibition are definite phenomena, but it appears they take place through intermediate neurons. So for excitatory neuron A to inhibit excitatory neuron C, it must first excite inhibitory neuron B, which in turn inhibits C (probably among others). At least that appears to be the pattern in the retina (with horizontal cells taking the role of the intermediates).

            So, a neuron can have both excitatory and inhibitory downstream effects, but only with the assistance of other intermediate neurons.
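
            In code terms, the pattern looks something like this toy sketch (weights and threshold invented for illustration):

            ```python
            # Excitatory neuron A can't inhibit C directly: it excites the
            # inhibitory interneuron B, and B is what suppresses C.
            def fires(total_input: float, threshold: float = 0.5) -> bool:
                return total_input > threshold

            def circuit(a_fires: bool) -> bool:
                b = fires(1.0 if a_fires else 0.0)   # B is driven by A
                c_input = 1.0 - (1.0 if b else 0.0)  # C's own drive minus B's inhibition
                return fires(c_input)

            print(circuit(a_fires=False))  # True:  C fires when A is silent
            print(circuit(a_fires=True))   # False: A's activity silences C, via B
            ```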

            James, I appreciate the correction. I learned something new this morning. Thank you!

            I doubt capitalist ideology had anything to do with the understanding of neural competition (although who can say what mindset scientists bring to observations). I have seen Darwinian competition mentioned as an analogy. We know competition happens in the nervous system. There are just too many lines of evidence for it from attention research. Or at least for dynamics that are very easy to describe as competitive.

            Both the increased processing of attended information and the suppression activity for unattended information can be explained by a theory of attention known as the biased-competition model (Desimone and Duncan, 1995). This model states that attention works by biasing ongoing neural activity. Such a bias can be induced in a top-down manner by directing attention to a specific spatial location or by biasing attention toward items relevant for a task – for example, you may direct your attention toward locating a spoon rather than a fork if your goal is to eat soup. However, not all biases need be top-down. For example, a bottom-up influence that would bias the competition would be stimulus intensity – a brighter stimulus is more likely to capture attention than a dimmer one. While originally derived from research with monkeys, it is also supported by human neuroimaging studies (Beck and Kastner, 2014). The idea of attention as a biasing mechanism is a powerful concept and has also been used to explain some of the deficits observed in hemineglect, as we discuss in a later portion of this chapter.

            Banich, Marie T.; Compton, Rebecca J.. Cognitive Neuroscience (p. 312). Cambridge University Press. Kindle Edition.

            As I noted to Lee, what is competition at one level, can be seen as cooperation at a higher level. The competition of various coalitions of circuits can be seen as the overall system deciding what it’s going to focus its resources on.
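
            For what it’s worth, the dynamic is easy to caricature in a few lines (all numbers hypothetical; a toy sketch, not the Desimone and Duncan model):

            ```python
            import numpy as np

            salience = np.array([0.60, 0.55, 0.50])  # bottom-up drive of three stimuli
            bias     = np.array([0.00, 0.20, 0.00])  # top-down attention ("find the spoon")

            a = salience + bias
            for _ in range(200):
                inhibition = a.sum() - a             # each unit suppressed by all the others
                a = np.clip(a + 0.1 * (salience + bias - inhibition - a), 0.0, None)

            print(a.round(2))  # ~[0. 0.75 0.]: the biased item ends up with the activity;
                               # drop the bias, and item 0 wins instead.
            ```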

            Liked by 1 person

          30. Competition may be taking place someplace, but I’m not seeing how “competition” between neurons is occurring in what is called the “biased-competition” model.

            Top-down bias actually does describe one clear and possibly causal role for consciousness. It could interact directly with neurons by boosting signals for matching to spoons, as in the example. The bottom-up biases seem somewhat related to the Weber-Fechner law that I wrote about. Groups of neurons don’t signal change until a threshold is reached. So I see biasing, but not competition between neurons per se. It seems to me more like some signals get stronger than others for a variety of reasons, and the stronger signals mask the weaker ones, as a wave of greater magnitude would hide smaller ones.

            Like

          31. It’s worth noting that the word “competition”, in reference to sub-agent processes, is really just a crutch, a way to quickly convey complex dynamics. I don’t doubt if new data later makes that crutch unproductive, it will be quickly tossed.

            Like

          32. “A united whole that is emergent can be emergent from competition.”

            Competition within itself? Are you serious? A united whole that is emergent can emerge from other united systems that are competing amongst themselves, but not the way you or the global workspace rationalize it. Have you ever heard the old proverb, “…a house divided against itself cannot stand”?

            This is your blog site Mike, so I will leave you with the fantasy that you are the smartest guy on the blog.

            Like

          33. No offense Mike, but it boggles my mind that you do not seem to grasp the concept of “united whole”, and I am not the only lone wolf who observes this rationale of yours. Now granted, the functions of the individual systems that make up that united whole may look like competition from the outside looking in, but that is a skewed perspective, one seen through one’s own original assumptions.

            You really do need to consider that, as a united whole, the brain is not a capitalistic system where functions within that system are competing against each other and building coalitions, where one coalition wins that competition and the other coalition loses. A healthy brain is a united whole where all of the individual systems that make it up work together for a common cause. And that common cause is the emergence of this thing we call mind. Mind is not some ad hoc variation of a rationale like the global workspace. Although, if one is convinced that mind is not a separate and distinct system, one that emerges from the brain, then the only explanations left are ad hoc versions of one kind or another.

            Since you have so much difficulty in understanding concepts that do not conform to your original assumption of functionalism, I will recuse myself from engaging with you any further because it is a waste of time.

            Have fun kids…

            Liked by 1 person

          34. “But while we can never have someone else’s perspective, we can account for their (and our) perspective in external terms. And as that accounting is taking place, we have to be careful not to lose track of what’s already been accounted for. A lot has already been broadly accounted for, with many holes left to fill in, but nothing indicating that the enterprise as a whole is infeasible.”
            It is plausible that robots haven’t got a subjective being, and that is the reason why we haven’t got any hard problem with robots. I don’t see any idea of how to account for the subjective perspective of people and animals.

            Liked by 1 person

    1. On the optic nerve, what I was thinking about is that there are about 100 million rods and cones, but only around a million axons in the optic nerve, so obviously there’s a lot of consolidation happening.

      The 50 bps estimate is interesting, but it seems to be excluding all the supporting processes. Consider that to read this comment, your brain has to recognize the shape of the letters and words, then phrases, map them to language semantics, and then produce the voice in your head. (Assuming like most people, you hear a voice while reading. Not everyone does.) That seems like a lot more than 50 bps in total.
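
      Just to put numbers on that consolidation (using the rough counts above; both are approximations):

      ```python
      photoreceptors = 100_000_000   # ~100 million rods and cones
      optic_nerve_axons = 1_000_000  # ~1 million ganglion-cell axons

      print(f"~{photoreceptors // optic_nerve_axons}:1 consolidation")
      # ~100:1 -- most of the raw signal never leaves the eye; what travels
      # up the optic nerve is already a heavily processed summary.
      ```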

      Like

      1. And it answers the question you posed in the 2014 post:

        “Why would an awareness mechanism evolve with no causal influence?”

        Like

      2. Don’t know if you’ve had the time to read the paper, Mike, but I think it makes a lot of sense. Since all of the contents of consciousness are a non-conscious production, how could consciousness have any agency? Even in the case of supposed “free won’t”, the decision to abandon a previous decision is non-conscious before we become aware of it.

        I’m beginning to think that consciousness may initially have been a side-effect of the centralized body control mechanism, situated at the brainstem where all of the body’s sensory nerve inputs coalesce. Perhaps the easiest way to implement the centralized neural control of the body just happens to produce feelings. Interesting thought …

        Like

        1. Sorry Stephen. I only briefly skimmed it prior to my initial response. In general, I don’t see consciousness as fundamental, so the observations that it’s composed of non-conscious processes, or doesn’t have direct control of motor output, aren’t controversial to me. The logical leap from there that it has no causal influence on behavior, except for behavior involving us communicating about our experience, I find pretty dubious.

          Like

    1. My argument against it is an evolutionary one.

      How would something seemingly present across a large number of species come about if it had no ability to cause anything to happen? It’s true that some side effects occur during evolution – spandrels – but consciousness requires a great deal of energy and has some significant downsides. So there must be an upside to it, and if there is, then it would have to be able to cause something to happen.

      What’s more, it seems clear consciousness is required for learning and memory. I don’t know if you follow my blog but I just wrote something on that.

      https://broadspeculations.com/2021/04/20/secret-ingredient/

      Like

      1. To me the strongest evidence is biofeedback.

        “Biofeedback is the process of gaining greater awareness of many physiological functions of one’s own body, commercially by using electronic or other instruments, and with a goal of being able to manipulate the body’s systems at will. Humans conduct biofeedback naturally all the time, at varied levels of consciousness and intentionality. Biofeedback and the biofeedback loop can also be thought of as self-regulation.[1][2] Some of the processes that can be controlled include brainwaves, muscle tone, skin conductance, heart rate and pain perception.[3]

        Biofeedback may be used to improve health, performance, and the physiological changes that often occur in conjunction with changes to thoughts, emotions, and behavior. Recently, technologies have provided assistance with intentional biofeedback. Eventually, these changes may be maintained without the use of extra equipment, for no equipment is necessarily required to practice biofeedback.[2]”

        https://en.wikipedia.org/wiki/Biofeedback

        To explain this away, you need some sort of convoluted explanation in which the feedback is passing through consciousness, and in fact must pass through consciousness, yet consciousness isn’t required at all because it has no effect.

        Placebos might be another example.

        In a broader sense, probably all learning beyond simple associative learning requires mediation of consciousness to integrate feedback from the external world.

        Liked by 1 person

      2. BTW, I started to look for examples of spandrels, and apparently there is a great deal of controversy over the term and argument over what actually counts as a spandrel. For many examples cited as spandrels, others argue that they are not actually spandrels – that they do serve a function.

        Wikipedia: “Critics such as H. Allen Orr argued that Lewontin and Gould’s oversight in this regard illustrates their underestimation of the pervasiveness of adaptations found in nature.[5][6]”

        So if consciousness is an epiphenomenon, then you have to argue it really is a spandrel, and one that is likely present in most species in the animal kingdom.

        Liked by 1 person

  15. James, Oakley/Halligan’s “Chasing the Rainbow” thesis is that consciousness provides “… the capacity to communicate to others the contents of the personal narrative that confers an evolutionary advantage.” I find that difficult to apply to simpler, much less social non-primate conscious organisms. I wish they had addressed that point.

    My latest notion is that when we discover the tissues/structures that produce end-stage conscious feelings we might also discover that the physical production of feelings feeds back to brain structures responsible for non-conscious processing and thereby influences that processing. The bi-directional neural circuitry is certainly in place for that to happen. That wouldn’t mean that consciousness has agency but would mean that the end-stage production of feelings has a feed-back effect. That effect might or might not be independent of the contents of consciousness.

    Just a tidbit: I believe it’s the entire brain that requires a great deal of energy, not consciousness per se.

    Liked by 1 person

    1. I would say that the evolutionary advantage begins much lower down in complexity with learning and memory. The logical way it feeds back into non-conscious processes is precisely that it does something key to learning and memory.

      I tried to address the consciousness as feelings approach in my review of Solms’ book.

      “Frankly, the more I’ve tried to understand this view, the less I am persuaded by it. At one point, Solms explains that feelings are always conscious. He then goes on to quote Freud in support of the idea, but the quote from Freud is about emotion, not feeling. He then states: “For now, let me be absolutely clear about what I mean by the term ‘feeling’: I mean that aspect of an emotion (or any affect) that you feel“. This is hardly any definition at all. It is saying “feeling” is what you feel. If we are equating, even partially, “emotion” and “feeling”, then I can’t see how much of what passes before my daily consciousness is emotion or feeling. Like anyone else, I have feelings and emotions, but they do not dominate my day-to-day consciousness. I can’t see how, as I stare at a computer screen now and gather my words for this sentence, much emotion is involved with it. I do have a “feeling” of being like something in the Nagel sense, but how that relates to emotion isn’t clear. Even more perplexing is why a Freudian would think that feelings must always be conscious. It would seem unconscious feelings are pervasive throughout the Freudian menagerie of complexes and neuroses. Solms himself acknowledges on the same page with his definition that many psychoanalysts disagree with his view that feelings cannot be unconscious.

      Let’s acknowledge that defining consciousness isn’t easy. It is something we all know from our experience, but that experience consists of a great many things: memories, dreams, random thoughts, sights and sounds, pleasures and pains, and occasionally a good idea or two mingled among the bad ideas. The only commonality is a sense that we are something, know something, or are experiencing something, although sometimes even the sense of being something is lost to what we are experiencing. If this is what Solms is saying, it improves little on Nagel’s definition and, by itself, gives no support to the brainstem theory. What I seem to be experiencing at any one point is usually a mixture of cognition, sensual impressions, and general feelings about my well-being. Lumping this together as “feeling” doesn’t seem to help to elucidate its nature. Similarly, lumping it all together as some sort of cognitive image seems to leave out real emotions and feelings”.

      Liked by 1 person

      1. James, consciousness doing “something key to learning and memory” isn’t very specific. Non-conscious processes can fully account for both.

        Regarding feelings, I don’t know why physical feelings, aka sensations, aren’t considered in Solms’ book. I view physical feelings as the earliest contents of consciousness, which likely preceded emotional feelings evolutionarily. Both types of feelings were likely produced in the oldest brain structure, the brainstem. I don’t know if you’ve followed my discussions with Mike Smith over the last year or so, but I’ve discussed the “consciousness as sentience” idea rather thoroughly. Essentially, my position is that all of the contents of consciousness are feelings, including physical feelings like touch and pain, emotional feelings, and all sensory feelings like olfaction, sight, and sound. Additionally, I view thought in both words and pictures as feeling: in words it’s vocalization-inhibited speech, and in pictures thought is sight-inhibited vision, both variations of sensory system outputs.

        With that simplified perspective unifying all contents of consciousness as variations of just one thing—feelings—I disagree that defining consciousness is difficult. I don’t recall if you responded to the definition I’ve proposed, so I’ll repeat it here:

        Consciousness, noun. A biological, embodied, unified streaming simulation in feelings of external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain.

        The simulation is a representation in feelings of an organism centered in a world—the feeling of what happens, to borrow a phrase from Damasio. Only a few questions have been asked about this definition and no objection has been raised to invalidate it, other than Phil Eric’s insistence that biological isn’t a fact of the matter of consciousness when it clearly is. Although Eric Schwitzgebel also wants to leave room for manufactured (AI) consciousness, an imaginative projection that has no place in the definition of a phenomenon, no one has raised any substantive objection to the definition, yet everyone continues to believe consciousness is difficult to impossible to define.

        I probably have left something out in my attempt to summarize my perspective, for instance accounting for dreams and hallucinations, but I’d be interested to learn your take.

        Liked by 1 person
