Anil Seth’s theory of consciousness

I recently completed Anil Seth’s new book, Being You: A New Science of Consciousness. Seth starts out discussing David Chalmers’ hard problem of consciousness, as well as views like physicalism, idealism, panpsychism, and functionalism. Seth is a physicalist, but is suspicious of functionalism.

Seth distinguishes the hard problem, which he characterizes as being about why experience exists in the world at all, from something he calls the “real problem”: explaining why a conscious experience is the way it is, why it has the phenomenological properties it has, in terms of physical mechanisms in the brain and body. Seth asserts that the real problem is distinct from both the hard problem and what Chalmers calls the “easy” problems, such as the ability to discriminate, categorize, and react to sensory stimuli, reportability, attention, etc.

When considering how consciousness might be measured, Seth notes that it could be like temperature or it could be like life. Temperature is a phenomenon that emerges from particle physics, one that can be described with a single equation and measured with a single value. Many theories of consciousness, such as IIT (Integrated Information Theory), seem to envision consciousness as being like temperature. On the other hand, biological life is a complex phenomenon, not subject to being meaningfully described by a single equation or measurement. Seth’s money is on consciousness being more like life than temperature.

Seth’s own view falls along the lines of predictive coding theories of consciousness. These theories see the brain as a Bayesian processing system, one that is constantly making predictions, receiving error correction from the world, and adjusting. In this view, perception happens from the inside out rather than outside in. The system is constantly making predictions about what is there, receiving sensory information, and adjusting.
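
To make the loop concrete, here is a minimal sketch in Python of a single prediction-error cycle. This is purely my illustration, not a model from the book: the system holds a guess about a hidden cause, compares its prediction against noisy sensory evidence, and corrects by a fraction of the error.

    import random

    # Minimal prediction-error loop (illustrative only). The system's
    # "perception" is its running estimate of a hidden cause, continually
    # corrected by the error between prediction and noisy evidence.
    def perceive(sense, steps=100, learning_rate=0.1):
        estimate = 0.0                         # current best guess (the prediction)
        for _ in range(steps):
            observation = sense()              # noisy signal from the world
            error = observation - estimate     # prediction error
            estimate += learning_rate * error  # adjust toward the evidence
        return estimate

    # A hidden cause of 19.0 seen through noisy senses; the estimate converges.
    print(perceive(lambda: 19.0 + random.gauss(0, 1)))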

It’s important to understand that these predictions are generally related to what the system is doing or planning to do. So the predictions, the inferences, should be viewed as active inferences rather than passive ones. This view has a lot of resonance with Karl Friston’s free energy principle, which Seth explores a bit in the book.

Another important aspect of this view is that it doesn’t pertain just to predictions about the outside world, but also to the self. In other words, the self is just another perception, a prediction, or a set of predictions, ones involving the state of the body, the perspective of the system, and the perception of volition (free will). We primarily perceive ourselves in order to control ourselves. This pertains even to emotions, which Seth sees as control-oriented perceptions (predictions) that regulate the body’s essential variables.

All of which comprises something Seth calls the “beast machine” theory of consciousness, which gets at the evolutionary purpose of brains: to make movement decisions. (He described this theory in a TED talk a few years ago.)

Seth finishes the book with discussions of free will and artificial intelligence. On free will, he doesn’t accept any “spooky” kind of libertarian free will, but much of his discussion focuses on our perception of free will. In the discussion on AI, Seth’s skepticism about functionalism resurfaces. He admits he has no evidence for it, but seems to feel that something about biology, perhaps going down to the cellular or molecular levels, will prevent AI from being conscious.

This is a good book. Seth is an excellent and engaging writer and keeps his discussions accessible for the lay reader. And there’s a lot more in it than my brief summary here discusses. I fully recommend it for anyone looking for an introduction to predictive coding theories.

I do have some issues with it, however. I find Seth’s skepticism of functionalism pretty puzzling, since most of what he describes in the book seems thoroughly functionalist in nature.

I also have trouble seeing the distinction he makes between the hard problem and his real problem. The real problem seems like details of the hard problem to me. Given what I’ve read from Chalmers, I suspect he’d see Seth’s characterization of the hard problem as too abstract. Much of what Chalmers discusses seems focused on the same issues as Seth, the relation between the phenomenal and the physical.

Seth also seems loath to admit that any study of the phenomenological has to go through behavior, such as a subject reporting their experience, or through the overall functionality. In that sense, the distinction he makes between studying the phenomenology and studying the functionality seems forced.

I do like Seth’s discussion of the real problem though. Getting into the details can, I think, serve as a bridge between Chalmers’ hard and easy problems. But I’m a functionalist who sees any sharp categorical distinction between the hard and easy problems as artificial anyway.

Interestingly, Seth has a brief discussion about the likelihood that early modern thinkers, such as René Descartes, had to espouse certain ideas they may not have believed in, conceivably including substance dualism, ideas that may have offered some immunization from persecution by the church and given them cover to explore the other intellectual ideas they cared about. He notes that other thinkers, such as Julien Offray de La Mettrie, who didn’t play this game, got into trouble. I thought this was an interesting observation coming from someone who leads a center for consciousness research, a role that almost certainly requires keeping in mind what might offend funding sources.

All that said, I think predictive coding theories have a lot going for them. More broadly than just consciousness, they get at what brains are actually for. And they seem to work at all levels of brain evolution, from explaining why it was adaptive for worm-like creatures to develop the first very limited capabilities beyond stimulus-response mechanisms, to a fish figuring out whether that thing in the distance is food or a predator, to me figuring out what that strange looking snack is at a party.

As I’ve noted before, I don’t see theories like predictive coding as alternatives to ones like global workspace or higher order thought theories, but all of them as potential supplements to each other. There’s a tendency among theorists (which I don’t detect from Seth) to regard their theory as the one truth. But like biological life, I think it’s likely the truth will involve a large collection of theories.

What do you think of predictive coding theories? On the right track? Or hopelessly going in the wrong direction?

194 thoughts on “Anil Seth’s theory of consciousness”

  1. I have recently taken to looking at consciousness from the point of view of coding Deep Reinforcement Learning algos. Which is absurd and the wrong way round – deep RL is inspired by nature. But coding a Q Learning algo recently to trade stocks gave me pause for thought. The way it learns is simple once you get the hang of it – probably like the brain, once you get the extraordinary structural complexities out of the way, or at least understood. Of course no self-awareness arises in these simple AI algos; nonetheless they present an interesting reflection on the possible nature of life and its mechanics. Simple, learnt error correcting mechanisms. Hmmm –
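
    For the curious, a minimal sketch of the kind of tabular Q Learning update being described (illustrative only; the states and actions are hypothetical placeholders, not a real trading setup):

      import random

      alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration
      actions = ["buy", "sell", "hold"]
      Q = {}  # (state, action) -> estimated value

      def choose_action(state):
          if random.random() < epsilon:
              return random.choice(actions)  # explore
          return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

      def update(state, action, reward, next_state):
          # The "simple, learnt error correcting mechanism": nudge the estimate
          # toward the reward plus the discounted best future value.
          best_next = max(Q.get((next_state, a), 0.0) for a in actions)
          td_error = reward + gamma * best_next - Q.get((state, action), 0.0)
          Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td_error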

    Liked by 2 people

    1. After I read Simona Ginsburg and Eva Jablonka’s book, The Evolution of the Sensitive Soul, I got interested and did a little reading on reinforcement learning, particularly on the difference between model-based and model-free learning. It does give you a view into a possible way that what we now call cognition got started in evolution.
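
      To make that distinction concrete, a hedged sketch (the function and table names here are hypothetical): a model-free learner caches action values straight from experience, while a model-based learner learns how the world works and plans by simulating it.

        # Model-free: cache action values directly from experienced transitions.
        def model_free_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.95):
            best_next = max(Q.get((s2, b), 0.0) for b in actions)
            td_error = r + gamma * best_next - Q.get((s, a), 0.0)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error

        # Model-based: evaluate an action with a learned model of the world.
        # T[(s, a)] maps successor states to probabilities, R[(s, a)] is the
        # expected reward, and V holds current state-value estimates.
        def model_based_value(s, a, T, R, V, gamma=0.95):
            return R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items())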

      Liked by 2 people

  2. I partly agree with the “beast machine” idea.

    I have long seen AI as a mistaken idea. Our intelligence is not due to logic. Rather, it is due to our underlying animal nature.

    I’m not so persuaded by predictive coding ideas. Yes, prediction is important. But I don’t think you get there by applying Bayesian methods to what William James described as a “blooming buzzing confusion.” Before you can usefully apply Bayesian methods, you need useful data. For me, the important detail is understanding how the perceptual system manages to get useful data.

    Liked by 3 people

    1. I don’t see animal nature and logic as mutually exclusive. I think the right way to think about it is that the logic is oriented toward the system’s goals, which for a living system are related to homeostasis and reproduction, our underlying animal nature.

      On needing useful data, one thing to consider is that an animal could start off with innate dispositions, essentially instinctive data, that it then adjusts as it learns about the world. Consider all the ways children tend to think about the world (such as thinking there may be monsters under the bed) that they gradually let go of over time. I think the same thing happens with perception, albeit at a more primal level.

      Liked by 3 people

  3. No need for anyone to be completely naive about Seth’s metaphysical position on consciousness either. Seth is under the same restraints as René Descartes was in his day. Today it’s the Church of Scientism and the adult day care institution of academia which is the Church of Reason. So don’t expect anything new from anyone who is an active member of either church. Under those restraints, what is acceptable is the party line, which is just more of the same recycling of stagnant ideas, an original painting placed in a different frame and then sold on the open market as an original.

    Nothing new here: talk about throwing good money after bad…….

    Liked by 2 people

    1. Throw out science and reason? What’s left aside from faith and tradition?

      Anyway, Descartes, if in fact he did compromise on his views, did so to get out the ideas he really wanted to discuss. If Seth is doing the same, then it’s in service of the same goal. I have little doubt predictive coding is what he believes is true.

      Liked by 1 person

      1. Think of it this way Mike. If Anil Seth can’t make it as an adult day care worker maybe he could write a sequel to Hit-Monkey. He could call it “Philosophical Hit-Zombie”: now that’s a story line.

        Liked by 1 person

  4. It does seem prediction is a big part of what our brains do. I’ve come to define sanity as the degree to which one’s mental model of reality corresponds with the real thing. That’s much the same as saying the model accurately predicts the world around it, and that we improve that model over time based on error (and success) feedback.

    But I agree with Neil Rickert’s comment that brains go beyond being prediction engines. The Sebald gap between humans and all animals is significant. Our understanding of consciousness needs to account for our literature, art, music, and much more. Animals don’t write, or comment on, blog posts. 🙂

    Reading your post I was mostly struck once again by all the commotion around the notorious Hard Problem. Chalmers really set a cat among the pigeons with that one. It’s one of those divisive issues, a razor to split opinions. It’s got deniers, detractors, and proponents. It’s led to the meta-Hard Problem. Now we have the Real Problem.

    So much concern about a label, and it seems to dodge the basic question: Why does a machine think it experiences something? As far as we know, no machine other than brains does this. Not even our other organs do it; just brains. Call it what you will, it’s a basic unanswered physics question. Just that brains are, by far, the most complicated system nature has ever produced (all those billions of highly connected neurons); that alone makes them Hard to figure out. Maybe you don’t see that as a Problem, but it seems hard to deny that it’s really Hard.

    (Can we combine Seth and Chalmers and call it a Real Hard Problem?)

    Liked by 4 people

        1. I ask because my current understanding (and what I think others don’t appreciate) involves the role of information which requires a physical substrate but has a description that is independent of that substrate (multiply realizable). The “subjective” lies in that independent description.

          *

          Liked by 1 person

    1. I think it’s possible to relate literature, art, music, and commenting on blog posts to prediction mechanisms, much of it centered on the social dynamics of a social species. Of course, humans have symbolic thought, which allows us to extend those predictions far beyond our immediate surroundings, which is not something we observe in any other species (at least not yet). But it all still seems like predictions. Again, not that I don’t think we need other models to help complete the picture.

      I’ve noted before that Chalmers has a definite skill in naming philosophical concepts. But the hard problem is really something people have talked about for a long time. Joseph Levine called it “the explanatory gap”. A lot of people have historically just called it the mind-body problem. I personally think this problem was solved conceptually in the 1940s by McCulloch and Pitts, as well as by Gilbert Ryle, albeit with lots of details (the “easy” problems, which are far from easy) still to work out.

      In terms of a machine thinking it has experience, as Seth noted, that’s arguably just another perception, another set of predictions, active inferences. Ones we’re often wrong about, although right often enough to get through the day. The reason we have this ability is control: to control, or at least influence, the processing in our own brain.

      Liked by 1 person

      1. I just can’t agree that literature, art, and music are fully explained in terms of predictions. It seems a forced fit to me. Certainly predictions are involved, but the heart of those things is our experience of them. (Which is why I don’t buy P-zombies. Why would they go to movies or listen to music?)

        Our perception and experience are clearly tied together, but I’m not sure prediction is central so much as framework. Fish can predict. Higher animals predict better. Humans take it to a whole new level.

        Liked by 1 person

        1. I think the thing to consider is what experience actually is. The predictive processing answer is that all conscious experience is composed of perceptions, perceptions about the world, the state of our body, or the state of our mind. And all perceptions are predictions, active inferences. It’s a remarkably powerful explanation, but it requires thinking it through to see how it fits with everything we call “experience”.

          Liked by 1 person

          1. But we don’t know what experience actually is, and as you’ve said repeatedly (and I quite agree), consciousness is not likely to be a single theory or explanation. Like life it’ll be myriad, so I don’t think “prediction” can be the only answer.

            Like

          2. I think when it comes to perception, prediction seems like a crucial insight. When it comes to how all those perceptions relate to each other, then we do need other theories like global workspace, attention schema, etc. But that doesn’t seem to spoil the core insight. Maybe the question to ask is, what about experience (“raw experience”) do you not consider addressed by the predictive framework?

            Like

          3. Anything at all! 🙂 In part, I think experience is the flip side of a predictive system. It’s how a predictive system learns to predict. The equivalent would be training mode in an ANN.

            But I also think experience is a form of joy, and I don’t see much role for prediction there. If anything, having my prediction thwarted is something that gives me joy. The best stories (or any art) are the ones that surprise you. I would include the joy of learning, of many sports, listening to music, or just going for a walk.

            Trying to lump all that under prediction seems far too reductive.

            Like

          4. It’s important to remember that a crucial part of prediction is error correction. Consider where learning begins, when we are surprised, or things aren’t going the way we think they will. In other words, when error correction is high.

            Joy is an emotion, an affect, and so can itself be considered a prediction, a quick assessment of the nervous system. And it can certainly result from another prediction being wrong, if being wrong results in a new prediction of improved future circumstances.

            I don’t want to oversell predictive theories. I think they have much going for them, but they’re far from perfect, with valid criticisms. There’s a recent Quanta article on them worth checking out if you haven’t already.
            https://www.quantamagazine.org/to-be-energy-efficient-brains-predict-their-perceptions-20211115/

            Like

          5. Sorry Mike, I think we’re at a dead end on this one. I don’t see much connection between affect or experience and prediction. Totally on board that prediction is a big part of the picture, but I just can’t see it as the total picture.

            As with IIT and others, I think it’s a partial answer.

            Liked by 3 people

        2. Hi Wyrd,
          I bump into you again tonight!

          Earlier this year, I discovered I have Aphantasia i.e. no Mind’s Eye. This has truly blown my mind!
          In a way, I feel like I’m a bit of a zombie myself. *You* have conscious visual experience of what you imagine but *I* just function. I never knew I was missing anything.

          One comment by another Aphant (as we are sometimes called) is ‘why do you people bother going to the cinema if you can just imagine it?’ (it depends upon the sort of films you watch I guess.)
          Any AI/robot could ‘want’ to go to the cinema because it might maximize some Reinforcement Learning cost/reward value – it is epistemic exploratory behaviour for starters.
          So, I see things the *opposite* way around regarding P-Zombies!

          (BTW: Back to Blade Runner – I’m feeling a bit like Deckard at the moment.)

          I briefly email-corresponded with both Seth and Solms regarding Aphantasia a few months ago. Both said it would be very interesting to explore with regard to consciousness. But to me, this *variety* of human conscious experience is fundamental – something you need to understand before you write a book on consciousness! You could probably map Aphant individuals into a 20-dimensional space, say, to account for whether they have visual dreams, if they have audiation, etc. This would be a list of what personal functions ‘show up’ as conscious in each individual. Then you have to explain that *variation*, probably tying this in with fMRI scans for each person.

          And that would be for only 100 million ish people worldwide on one aspect of consciousness – one aspect that was so poorly studied it didn’t even have a name until 2015! What other varieties are there across 8 billion people that are not properly known about? (we know about synaesthesia, hallucinogens etc)

          If you lack the basic ‘this is conscious’ vs ‘this is not conscious’ data on human consciousness across multiple subjects, how can you even *begin* to come up with any scientific theory to discriminate? We are not much further forward with regard to consciousness than the Pre-Socratics were with their speculatory theories of matter: ‘everything is air’, ‘everything is water’.

          Finally, my new (facetious) definition of what is conscious: anything that professes shock at the discovery that something particular is or is not conscious – beyond the trivial print("That's shocking!")

          Liked by 1 person

          1. Hey! Been a minute or two.

            I have a buddy who was long ago diagnosed as color blind (and more recently as having Asperger’s). He’s spoken often about how those blew his mind, so I can at least imagine your shock vicariously. In his case, on the one hand, it set the world right by explaining life-long mysteries. On the other hand, he also found it devastating.

            Not that I know much about Aphantasia, but I’m not sure I’d equate it with p-zombies. You say you have no mind’s eye; do you have an inner voice (or monologue)? I have to ask, does being an Aphant extend to maps? For instance, are you able to visualize any map of your neighborhood? Could you draw one by memory or by thinking it out? (Wow! I guess you could never use a “memory palace”.)

            Zombies would be functional (computational), and assuming there is something more, I think the key to detecting it is something along the lines of a Rich Turing Test — a prolonged (days, weeks) dialog exploring creativity, humor, curiosity, critical thinking, meta-thinking, and openness. As you suggest, humans vary (I’m totally down with large dimension config spaces), and I suspect there is a spectrum from those with little or no mental images to those with vivid ones. Likewise, from what I’ve heard, our inner dialog, that varies, too. And, if mere function can reliably pass a RTT, I think we’d have to call it conscious.

            Ha. My (also facetious) definition of consciousness: anything with a definite opinion. Not too different. (It does make me grin a bit. The serious study of consciousness has no useful definition of its central topic of study. It’s almost as if it was a Hard Problem or something…)

            Like

    2. You ask “Why does a machine think it experiences something?” I would say it’s not the machine, it’s a particular sort of process running on the machine that thinks it experiences something. That happens when a process that detects external things (and modifies its behaviour accordingly) directs its attention to its own operation and detects itself detecting things.

      Liked by 1 person

  5. Predictive coding theories of consciousness are significant achievements of the human mind, but they do not bring much to an understanding of human consciousness.
    As Neil Rickert highlights, “Our intelligence is not due to logic. Rather, it is due to our underlying animal nature”. This points to the importance of looking at our human consciousness as a result of primate evolution. More precisely, we can define our self-consciousness as the capability to represent our own entity as existing in the world, like others are represented as existing. Such performance is the result of an evolution of representations managed by the pre-human brains of our ancestors during evolution (https://philpapers.org/rec/MENCOI). Even the hard problem (“what it is like to…”) needs a self-conscious entity to address the subject. Self-consciousness is at the core of the human mind. It is too bad that most of the research is done on phenomenal consciousness.

    Liked by 2 people

    1. I think you’re definitely right that this all needs to be assessed within the context of evolution. I know Seth’s beast machine theory is very centered on an evolved biological framework.

      In terms of representing our own entity as existing in the world, I agree. But the thing to consider is what that representation fundamentally is. The predictive coding answer is that it’s a prediction framework, a set of active inferences. In short, the idea is that it’s a perception, just like any other, just directed at the body and mind.

      On research, a lot depends on how you define “phenomenal consciousness”. Many philosophers would argue that most of what’s being investigated is not phenomenal consciousness, but access consciousness. Of course, to a functionalist, investigating access consciousness is investigating phenomenal consciousness.

      Like

      1. What a representation is, is an interesting subject. For me a representation exists by the meanings it carries for the agent that hosts it (that creates it or uses it).
        The representation of an entity for an agent is then the network of meanings relative to that entity for the agent. That perspective is detailed in the above link (P. 17: “From meaningful information to meaningful representations”).
        Such a perspective on representations is far from the meaningless symbols of traditional AI.

        Liked by 2 people

        1. Christophe, I agree with this. Consciousness requires an enactable representation that includes a representation of self in relation to the world – what I can sense, how that makes me feel, what I could do about it, and what I might then sense, feel and do. That’s a very rich representation and all about how it feels to be the me, the agent. It also means that it is necessary to study in detail what needs to be in that representation (the contents of consciousness) and what mechanism needs to use that representation (the process of consciousness that steps the mind on to the next time interval, ready for what is likely to happen). For me, once you have all this spelt out such that all that carries meaning for us is represented and enacted, the hard problem is solved or dissolved.

          Liked by 1 person

          1. Peter,
            Thanks for your support. Let me propose a few points in addition.
            We self-conscious humans have representations of our own entities, and we can think about these representations. They are rich and complex. They indeed include feelings and emotions. Access to the content of these representations is very limited, because they are related to self-consciousness (the nature of which is still to be understood), and because they contain unconscious components.
            Also, accessing the content of these representations may not be enough to answer the hard problem, which implicitly includes a self-conscious entity capable of introspecting, of asking the question “what it is like to…”.
            Even if it is not often highlighted, I feel that the hard problem needs to take into account the performance of self-consciousness.

            Liked by 1 person

  6. I’m currently looking into consciousness characterised as control in the context of evolution, and this seems fertile ground for a broader joined-up account. In brief, living entities evolved to be physical control systems to survive and reproduce (this is physical control). Nervous systems evolved to make that control more capable of change within the lifetime of the organism (mental control). Consciousness then evolved by applying that same approach to the nervous system itself (mental control of control). This included, of necessity, a predictive model of self, world, and the interaction between the two, including the definition of what good and bad (pleasure and pain) look like for the organism. That led in humans to then applying consciousness to that model, leading us to ask what we really are, and what good looks like at the deeper level – philosophical and religious considerations (this could be characterised as mental control of (control of control)).

    Liked by 3 people

    1. Hey Peter. Good hearing from you!

      I think I’m onboard with all of that. You might find a discussion Seth has in the book, drawn from cybernetics research, interesting. I’ll just post the first few paragraphs because I’m not sure if Kindle would let me copy the whole thing, but it’ll give you an idea of where he’s going.

      One of these insights comes from a 1970 paper by Roger Conant and W. Ross Ashby, which describes their so-called “Good Regulator Theorem.” The concept is nicely encapsulated by the title of their paper: “Every good regulator of a system must be a model of that system.”

      Think about your central heating system, or—just as good—your air-conditioning system. Let’s say this system is designed to keep the temperature inside your house at a steady 19ºC (about 66ºF). Most central heating systems work using simple feedback control: if the temperature is too low, switch on, otherwise switch off. Call this simple type of system “System A.”

      Imagine now a more advanced system, “System B.” System B is able to predict how the temperature in the house would respond to the heating being on or off. These predictions are based on properties of the house—how big the rooms are, where the radiators are located, what the walls are made of—as well as on what the weather conditions are like outside. System B then adjusts the boiler output accordingly.

      Thanks to these advanced abilities, System B is better at maintaining your house at a steady temperature than System A, especially if you have a complicated house or complicated weather. System B is better because it has a model of the house, which allows it to predict how the temperature inside the house will respond to the actions it can take. A top-end System B might even be able to anticipate upcoming temperature-related challenges—perhaps a cold day on the way—and alter the boiler output in advance, so as to guard against even a temporary drop in warmth. As Conant and Ashby said, every good regulator of a system must be a model of that system.*

      Let’s take this example a step further. Imagine that System B has been fitted with imperfect “noisy” temperature sensors that only indirectly reflect the ambient temperature in the house. This means that the actual temperature cannot be directly “read off” from the sensors; instead, it has to be inferred on the basis of the sensory data and prior expectations. System B now has to have a model of (i) how its sensor readings relate to their hidden causes (the actual temperature in the house), and (ii) how these causes will respond to different actions, such as adjusting the boiler or radiator output.

      Seth, Anil. Being You (pp. 190-191). Penguin Publishing Group. Kindle Edition.
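
      To make the contrast concrete, here’s a minimal sketch of the two systems in code. It’s purely illustrative, with a made-up toy house model and numbers, not anything from the book:

        TARGET = 19.0  # degrees C

        def system_a(temp_now):
            # System A: simple feedback. Heat when too cold, off otherwise.
            return temp_now < TARGET

        class SystemB:
            # System B: model-based control. It predicts how the house will
            # respond to each action before choosing one.
            def __init__(self, heat_gain=0.5, leak_rate=0.1):
                self.heat_gain = heat_gain  # warming per step with the boiler on
                self.leak_rate = leak_rate  # fraction of indoor/outdoor gap lost per step

            def predict(self, temp_now, outside, boiler_on):
                # Toy house model: heat leaks toward the outside temperature,
                # and the boiler adds warmth when on.
                leak = self.leak_rate * (temp_now - outside)
                return temp_now - leak + (self.heat_gain if boiler_on else 0.0)

            def decide(self, temp_now, outside):
                # Pick whichever action the model predicts will land closer to
                # the target, anticipating a cold day rather than reacting to it.
                temp_if_on = self.predict(temp_now, outside, boiler_on=True)
                temp_if_off = self.predict(temp_now, outside, boiler_on=False)
                return abs(temp_if_on - TARGET) < abs(temp_if_off - TARGET)

        # e.g., SystemB().decide(temp_now=18.2, outside=5.0) -> True (boiler on)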

      Like

  7. “All of which comprises something Seth calls the “beast machine” theory of consciousness, which gets at the evolutionary purpose of brains: to make movement decisions.”

    Exactly. I don’t mention that frequently but I think it is correct. Sometimes when we talk about the predictive brain we tend to think of consciousness as something passive. But the ultimate purpose is movement and action.

    Interestingly, anesthetics affect even microscopic organisms and make them immobile. Lack of ability to initiate and carry out movement is a sure sign of unconsciousness if the body is still alive.

    I have sometimes speculated that the peculiar brain rhythms associated with consciousness might have their origins in the chemistry of muscles and their firing/recovery periods. The brain and nervous system possibly developed in a more sophisticated way with worms and the requirements of controlling the muscular system of the gut.

    Liked by 2 people

    1. That’s an interesting idea. Striated muscle fiber cells have their own action potentials, typically set off by a motor neuron. And I know the contractions are powered by local ATP stores. But I’m not familiar enough with the evolution or physiology of muscles to say much more. The Wikipedia article talks about a couple of current theories, but they both seem to have muscles evolving around the same time as neurons.

      Several authors I’ve read have commented that the molecular “toolkits” used by all these cell types are thought to be ancient, predating animal and complex life. Which makes sense. Life was microbial for billions of years before complex life emerged, but it seems like evolution would have been at work that entire time. Of course, the precise use of the toolkit would have shifted and diversified as complex life arose.

      I really need to read a book that covers evolution more globally than nervous systems.

      Liked by 1 person

          1. Also, this:

            “Phylogenetic comparisons showed that one of the crucial structural proteins of striated muscles of vertebrates, a “myosin” motor protein, originated by gene duplication. “As this specific myosin has so far only been found in muscle cells, we expected that its origin coincided with the evolution of muscle cells. We were very surprised to see that the ‘muscle myosin’ evolved probably in unicellular organisms, long before the first animals lived,” explains Ulrich Technau who led the study.

            “In sponges, that all lack muscles, the ‘muscle myosin’ appears to play a role in regulating the water flow,” comments Gert Wörheide (Ludwig-Maximilians-Universität Munich), whose team was investigating muscle proteins in sponges together with Michael Nickel (University of Jena) and Bernard Degnan (University of Queensland).

            https://www.sciencedaily.com/releases/2012/06/120628145626.htm

            Liked by 1 person

  8. I read Seth’s highly touted book and was thoroughly disappointed. There is nothing new here, just a rehashing of original ideas put forth by other physicalists like Dennett and Frankish, who are all honorable members of the illusionist club.

    The one point in which I do agree with Seth is that at a very primal level our experience is fundamentally “all about control”. It’s too bad that he didn’t expound that principle exhaustively because that would be new territory and something useful. At its core, the hard problem is a psychical problem; clearly it’s not a scientific one and it’s not really a philosophical problem either. The psyche drives “the beast” in its insatiable need for control, so any intellectual model that the beast constructs must reinforce that sensation of control.

    “The beast” is a psychologically unstable system and it needs control to function in the world. Prediction is an important aspect of that dynamic. But zeroing in on prediction like it’s something profoundly new and original is misdirected. From the articles published in scientific journals lately, prediction theory seems to be just another trendy fad that academics are quick to exploit.

    Liked by 1 person

    1. Thanks for commenting.

      I’m pretty sure Seth would strenuously deny he’s an illusionist. He actually was interviewed by Frankish on his and Philip Goff’s Youtube channel where they talk about their differences: https://www.youtube.com/watch?v=NkMpFcDwDxM
      Seth does talk about consciousness being a controlled hallucination, which I can see being easy to conflate with illusionism. But as he notes in the book, he isn’t saying phenomenal consciousness doesn’t exist, as Frankish does. He sees it as real, and his “real problem” is aimed at explaining it.

      Prediction theories actually have a long history, going back to Hermann Helmholtz in the 19th century. They did see a revival starting in the 1970s, but have been building ever since. It might still turn out to be wrong, but I don’t think “fad” would be the right word to describe it.

      Like

      1. Thanks for posting my comments. One of the phrases I always find somewhat vague and ambiguous is the commonly used term “subjective experience”. According to Seth consciousness is defined as “any kind of subjective experience whatsoever”.

        But what does subjective experience mean? The term is commonly used as if the meaning were a given, something everybody is supposed to know. But there are two possible explanations that I can think of, one of which is outlined by Wikipedia as:

        “The subjective character of experience is a term in psychology and the philosophy of mind denoting that all subjective phenomena are associated with a single point of view (‘ego’).”

        I get this one because a single point of view reduces back to solipsism. But if it can be inferred that others are having the same “controlled illusion”, then one can escape the trap of solipsism. The second explanation would be derived from subject/object metaphysics itself, which is simply that subjective experience of any kind would be a mental experience of “any kind”. So it’s unclear to me what Seth’s rendition of “subjective experience of any kind” would be. It’s easy to conflate the two renditions of subjective experience, but I think the phrase itself lacks clarity.

        Like

        1. I’m with you on seeing the term “subjective experience” as vague. It’s one of the phrases that many are often unwilling to even try clarifying, and when they do it’s usually with the phrase “something it is like”, which seems outright designed to be ambiguous.

          I think the “subjective phenomena associated with a single point of view (“ego”)” is on the right track, but it still leaves us with having to define “subjective”, “phenomena”, and “ego”.

          Like

    2. I agree that viewing consciousness in terms of control is productive. I would see consciousness as ‘control of control’ in this context. Talking in terms of control shows that action taken (output) is just as important as awareness of what is sensed (input).

      What’s interesting though, in digging a little deeper here, is that ‘control’ is just a 1-sided (egocentric) view of an interaction. Does the driver control the steering wheel or does the steering wheel control the driver? Actually it is a 2 way interaction – a putting into relation between two things…and if you start asking where the self ends it all gets messy again: at the skin? are tools included? what about remote communication? what about managing a team or a company?

      Liked by 1 person

  9. Mike,
    After reading the IEP’s quite graspable entry on “functionalism”, I suspect that you and I should stop calling ourselves either that or its opposite. Apparently these titles are too broad to do much good. The article generally frames this debate in terms of consciousness being more like a lock/key (functional) or more like a diamond (brain-stuff substance). Given how little has been demonstrated in science about this business, to me this seems like a clear and destructive false dichotomy. In retrospect this debate seems only to have hardened positions into a status quo of failure. So I for one am going to stop taking shots at either functionalists or its identity theory opposite in a blanket sense. Consider me both of them as well as neither. Instead I’ll take my shots at each classification in the specific ways that I think they deserve, or effectively where advocates follow their ideologies into non-acknowledged spookiness. And I’m quite serious about the “non-acknowledgment” part. I can only beat up on Chalmers to the extent that he imagines his property dualism isn’t supernatural. It wouldn’t surprise me at all if he both realizes and acknowledges this.

    So what do you think Mike? Would you disavow a “functionalist Mike” identity, or at least in cases where your naturalism could legitimately be questioned?

    Liked by 1 person

    1. Eric, based on what I know about your views, I actually don’t think “functionalist” is the right label for you. I tend to think of your views as being more in the identity theory category, but it wouldn’t be the first label for your model that made sense to me but that you didn’t agree with.

      The danger with labels is that there’s always someone somewhere who espouses a version of it we might not buy into. It might be that there is something buried in that article I’d object to. But based on the introduction, I don’t see anything that conflicts with my view. I definitely sign up for the first couple of sentences:

      Functionalism is a theory about the nature of mental states. According to functionalism, mental states are identified by what they do rather than by what they are made of.

      Is there something specific in the article you think would be problematic for me?

      Liked by 1 person

      1. Actually Mike I’m not suggesting that I should stop calling myself a functionalist since I don’t recall ever having done so. I’m suggesting that I should stop criticizing this position in a blanket capacity. I’m saying that I should mend my ways and say that this position might be naturalistically okay, that is with appropriate caveats that guard it from getting spooky. (And of course it’s fine if someone wants to get spooky, though I know you don’t).

        And no, my ideas don’t support any blanket kind of identity theory. Unlike diamonds, brains are usefully defined as complex machines that sometimes use a physics of some sort to create an entity that phenomenally experiences its existence. So if we put that physics to work in one of our computational devices, then naturalism mandates that a phenomenal entity would emerge from that physics (whether functionally useful or just because we built something that thus felt good/bad). As you know, I suspect that the right electromagnetic radiation should do the trick. So here it’s also not correct to classify me in the identity theory camp. But let’s not let ourselves get silly with this “multiply realizable” business. It’s not like anything goes for phenomenal experience, or anything else. Just as specific causal stuff creates diamonds and EM radiation, phenomenal experience should have its own requirements that can’t be ignored given that specific physics. This remains something for scientists to empirically determine. Hopefully they’ll soon end the aspirations of countless misguided theorists today.

        I don’t think there’s anything buried in that IEP article that would make you object to functionalism. You’ve read about that sort of thing far more than I have. But just because prominent people like Searle and Block haven’t constructed bulletproof arguments that convince you they’re right, this doesn’t mean that they’re backing a generally wrong theme. It could be that they’ve inherited some bad premises from which to argue, such as the highly anthropocentric Turing Test.

        I’ll stop here to see if you agree so far. If it turns out that there are various spooky implications to certain versions of functionalism, would you say “Hold on, I don’t condone any position which has those implications!” Or would you instead say that being a functionalist is more important to you than being a naturalist?

        Liked by 2 people

        1. On seeing functionalism as compatible with naturalism, good to hear. Thanks for letting me know!

          On whether functionalism or naturalism is more important to me, actually I hope I would drop either one if the evidence consistently and reliably went against them. Remember, for me they’re just labels to sum up a set of conclusions, not ideologies I signed up for. I try to be on guard against wearing any “ism” too tightly.

          Honestly, while “functionalist” gets the idea across to someone knowledgeable in the philosophy of mind, it might be more accurate to call me a “processist”, since whether or not a particular process is functional is a matter of interactions with the environment and/or interpretation.

          But process philosophy is broader than just philosophy of mind. In this view, the difference between substances amounts to their constituents being different processes. It’s conceivable there might be some brute layer of reality where we do hit some kind of fundamental substance, but even elementary particles amount to excitations of fields in quantum field theory, aka processes.

          One consequence of this view is that identity theories may eventually reduce to process theories anyway, although I’m sure the average identity theorist would insist that happens at a much lower level than in functionalist theories.

          Liked by 2 people

          1. Processist? I like that Mike. Here you’ve used the great medieval friar, William of Occam’s razor to trim some extraneous crap off “functionalism” (with the “crap” part being the human interpretation of “What’s functional?”). And processist seems very much like causalist, and so naturalist. Furthermore you’ve noted how different substances implement different processes, and so the identity theorist’s position also becomes accounted for here to the extent that phenomenal experience might exist by means of certain substance based processes. And I agree that we should consider the various “isms” that we identify with, provisionally. I suppose that I could bend my devotion to naturalism (which mandates determinism) if I had sufficient evidence that it does fail. For me this might effectively require godlike displays however. In truth I think that I’m more devoted to naturalism than any other “ism”. This is given my observation that nothing exists to discover to the extent that causality fails. So to me naturalism seems like a logical founding premise from which to begin — without causality the entire endeavor of seeking to understand seems pointless.

            Regardless, I’m now able to get into the meat of what I wanted to say, or my observations about how computers operate. Each of us considers the brain to essentially function as a computer does. Therefore it’s possible that this model could tell us some things about how phenomenal experience might arise by means of such a machine.

            Computers may be defined as machines that accept input information, or “algorithms”, and process them into new algorithms which sometimes go on to operate various output mechanisms, such as computer screens and muscles. And indeed, the term “algorithm” seems productive to only exist in terms of a machine that’s able to implement that information. You wouldn’t expect the algorithms of a VHS tape to play movies on your iPhone, and this is because your iPhone isn’t set up to accept such information. So a VHS tape should not be considered algorithmic in respect to an iPhone, though it should be considered algorithmic in respect to a VHS tape player.

            With these observations to work from we can ask ourselves, do brains sometimes create entities which phenomenally experience their existence? Yes of course they do. So if the brain sometimes creates entities which phenomenally experience their existence, what this seems to mandate is that the brain must produce various algorithms that animate phenomenal experience producing mechanisms of some kind.

            This is what I considered so exciting about McFadden’s theory. He, unlike anyone else I know, presents a potential answer that fits this basic model, or a neuron animated mechanistic answer. It could be that the radiation which is produced by certain synchronously fired neurons creates an EM field that itself exists as the phenomenal experiencer. We could even test his theory by wiring up countless properly charged electrodes inside someone’s head to see if we could run certain firing sequences that alter someone’s otherwise brain based experiences. (The premise here is that if someone’s vision, smell, and so on exist as such by means of an incredibly complex neuron produced EM field, then the right exogenous EM radiation should be able to alter that field somewhat given expected wave interference. Thus a test subject might say things like, “Hey, whatever you just did to me seemed funky…”) And indeed, if McFadden’s theory were experimentally verified in this or some other way, a paradigm shift should occur on the order of Einstein’s relativity.

            With this I wonder if you can see what I consider so exciting about McFadden’s theory? Not only does his proposal fit with the way that computers are known to function in general, but it provides an eminently falsifiable situation for us to potentially test. This lies in contrast with theories that propose phenomenal experience by means of brains that use mechanism independent algorithms. For me this is where things get otherworldly, and thus Chinese rooms, China brains, Schwitzgebel’s USA consciousness, and my own thumb pain thought experiment. If we insist upon positing that what actually creates phenomenal experience is the function of mechanism independent algorithms, then we seem to leave our naturalism behind us at the door. It seems to me that algorithms should be considered either mechanism dependent, or to not exist at all.

            Liked by 1 person

          2. “I’m sure the average identity theorist would insist that happens at a much lower level than in functionalist theories.”

            You nailed it. For what it’s worth, I think process talk and object/property/time talk are interchangeable. I find it just dead obvious that there are many many ways to describe something which are all true, and for many pairs of descriptions, they give the same amount of information too.

            Liked by 1 person

        2. Eric,

          Do you see a place for a priori judgements in your naturalistic world view, especially when it comes to physics or do you consider them to be spooky, supernatural and just a bunch of bunk?

          Liked by 1 person

          1. Lee,
            I most certainly do include a priori assessments as part of my naturalistic perspective. Mathematics is a priori, for example, and I’m not about to give that up. Occasionally in discussions I also enjoy making observations that seem true by definition, and so I might tag that observation with an “a priori” label. I guess I should have been more clear about my meaning when I said that you’re a priori and I’m a posteriori. I’ll try to fix that if I can.

            It seems to me that things which are true by definition, and so cannot possibly be false, cannot tell us about the world we live in…in themselves. This is because such truths will inherently be world independent, or true both everywhere as well as nowhere. So to potentially learn about our world I believe that we need at least some level of a posteriori assessments to work from. This is to say that we need phenomenal based measurements of how things are, and even if they’re ultimately false representations of what actually exists. Then with at least some phenomenal beliefs we should also be able to use mathematics and other a priori tools to make various assessments in those terms if we like.

            So yes, with that caveat in mind I may be considered both a posteriori and a priori.

            Like

          2. Eric,

            Thanks for your clarification. Certainly mathematics is a priori, fitting the description that it’s a “necessary truth” because it holds universally. The ability of a necessary truth to hold universally is the arbiter of any assessment, regardless of whether it’s an a priori or an a posteriori judgement.

            Where you and I are in agreement is that the “conscious” mind is a separate and distinct system that emerges from the unconscious brain. You like to use the computer analogy to express that difference, and that’s perfectly acceptable because it gets the point across. And being a naturalist like yourself, I don’t think there’s anything supernatural going on.

            I’ve been working on a theory as of late that mind is a quantum system and not a classical one. As you know, it’s one thing to posit such a theory and it’s another thing to test it. And that’s where a priori comes in I think. So here it is in a nutshell:

            In order for an a priori or even an a posteriori judgement to be a necessary truth it has to hold universally. Currently, the prevailing consensus among naturalists is that mind is a physical system, a consensus that I am in full agreement with. However, in order for this assessment to be a necessary truth it has to hold universally. Here is where this assessment breaks down. The unconscious brain is an objective system just like every other physical system that we know of, but the mind is a subjective system. That’s a problem for a necessary truth…..

            As naturalists, we are now confronted with a clear and succinct contradiction. So, either the mind is not a physical system, a conclusion that leads to some type of dualism, or it is indeed a physical system, but it has to be a quantum system and not a classical one.

            This short a priori analysis is simple enough; so there’s your nutshell…..

            Liked by 1 person

          3. I think I get what you’re saying Lee. I’ll try to put my interpretation into an even smaller nutshell to see if you agree.

            I take your point to be that we have an entire universe of objective reality, and yet one tiny element that represents things like you, and presumably me, that are instead subjective products of brain function. This suggests a dualism, at least in the classical sense of causal function. So I think your point is that if the brain uses quantum weirdness, such as superposition and entanglement, then an objective thing like the brain might very well create something else which is not objective, but rather subjective. Is that it? In any case, what do I think about that? I guess what concerns me is the idea that the subjective might not ultimately be objective.

            Imagine us as outside gods observing this universe as an entirely causal system. We’d see and understand all sorts of chemical and even quantum function. We’d see and understand the emergence of robotic life and its associated genetic material “blueprints”. Furthermore we’d see and understand certain varieties of life essentially “wake up” to perceive their existence in a subjective capacity. But would we outside gods consider there to be anything fishy with the emergence of subjectivity? I don’t think so. I think we’d consider this as just one more fully determined event given certain brain physics that we’d consider obvious. Yes existence could now be perceived subjectively by certain forms of life in the system, though to us gods this should just be another objective circumstance given causal dynamics.

            These observations do not however imply that quantum mechanics isn’t crucial to the emergence of phenomenal experience. That is possible, and indeed beyond yourself various prominent people have proposed this to be the case.

            Apparently Surrey University in the UK offers undergraduate and master’s degrees in quantum biology, mainly given the work of Johnjoe McFadden. For example, they argue that the ridiculous efficiency with which chlorophyll is able to convert light energy into plant energy could be through superposition dynamics that yield ideal paths. In the late 90s McFadden was interested enough in the idea that consciousness might be quantum based to read a book on the topic by Roger Penrose and Stuart Hameroff. As I recall, the book suggested to him that quantum mechanics wasn’t the right tool, though here it occurred to him that classical electromagnetic radiation might do the trick. As you know, this solution now seems quite plausible to me as well. I wonder if you have any reservations about this proposal, or perhaps questions?

            Like

          4. Eric,

            By reading your second paragraph it’s clear you got my point. The only reservation I have about McFadden’s theory of electromagnetic radiation is that it is still a classical system, which makes it objective, following the so-called laws of physics, whereas subjectiveness does not follow those same rules.

            I think the strongest argument for any new idea is logical consistency. From an a priori perspective, logical consistency will reduce to a “necessary truth” or a truth of necessity if and only if the interactive dynamics of a given system under the proposal of a new theory hold universally. I really don’t see the efficiency with which chlorophyll is able to convert light energy into plant energy as quantum or a feature of quantum mechanics because plants are just another objective system following the same so-called laws of physics that other objective systems follow.

            This problem of inconsistency arises from the fact that mind is a subjective system, and there has to be a means of accounting for the causal dynamics of a subjective system, and I just don’t see any other type of classical system that is subjective. By the sheer process of elimination a theorist has to follow the evidence, and that evidence points to quantum mechanics.

            I respect your assessment of my ideas and I appreciate you taking the time out of your busy day to do so. The youtube video link that Mike posted in a response to First Cause is well worth the personal capital. Seth seems to have his head screwed on straight, and I like that he considers himself a pragmatic materialist.

            Liked by 1 person

  10. Assuming robots don’t enslave humanity in 2200 (fingers crossed!) I’d LOVE to see conscious machines and humans living side by side. I think it’s interesting Seth doesn’t envision AI attaining consciousness, but I’d venture to say AI could replicate it to the point of it being indistinguishable to us, imperfect beings as we are. And hey, if they react with joy when we compliment them or sadness upon a tragedy, who cares if it’s just their programming?

    Maybe an AI society alongside a human one could reduce loneliness (humans are social beings) and increase productivity. Although people would still need to do physical/mental things themselves too, otherwise we’d evolve into couch creatures.

    Liked by 2 people

    1. That's a good way of putting it. It actually matches up with Alan Turing's famous test for machine intelligence. If we can't tell the difference, it's intelligent, or conscious, or thinking, or whatever label you want to use to describe systems like us.

      Many people worry about AI enslaving us. But I think the real danger is the one you mention at the end, of us devolving into couch creatures. Imagine a world where all the work is done by robots, and all our social interactions are with Westworld type entities, only they work as designed.

      No reason to stay in shape, or develop social skills, or even to procreate. Why bother when we have manufactured entities to interact with, that are so much easier to deal with than real humans, and that can satisfy every sexual desire we have without complicating things with their own human desires? AI might work exactly as we desire them to, and that might lead to our end. Extinction by paradise.

      Liked by 1 person

      1. ” If we can’t tell the difference, it’s intelligent, or conscious, or thinking, or whatever label you want to use to describe systems like us.”

        Consciousness definitionally relates to an internal subjective state, not whether we can tell the difference externally.

        Liked by 1 person

        1. The question is whether the external behavior can be practically generated in a sustained and consistent manner without having its own version of those internal states. (“Practically” is important here; no lookup tables bigger than the universe.)
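          To put a number on "practically", here's a toy back-of-envelope in Python (the 1,000-character exchange and the 27-symbol alphabet are my own illustrative assumptions, not anything from the thread):

              # Count the entries a lookup table would need to cover every
              # possible 1,000-character exchange over a 27-symbol alphabet.
              entries = 27 ** 1000   # about 2.3e1431 possible inputs
              atoms = 10 ** 80       # rough estimate of atoms in the observable universe
              print(entries > atoms) # True, by more than a thousand orders of magnitude

          Any such table dwarfs the observable universe, which is why behavioral equivalence "by lookup" is ruled out in practice.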

          Liked by 1 person

          1. Maybe, if you could tell which external behavior could only be performed by entities having a conscious internal state.

            Possibly mistakes and inconsistency might be better tests of a conscious state. Something that invariably responds in the same way to the same stimulus would more likely not be conscious.

            Liked by 2 people

          2. As to which external behavior, that gets into the observation that what we call "consciousness" is more like life than temperature. It's a collection of capabilities, capabilities that are present in varying levels in each system. Different combinations of those capabilities allow for different behavior. And that leads to the functional hierarchy of consciousness I periodically write about.

            Mistakes and inconsistencies are an interesting criterion. They point to a certain degree of volition, as well as world and self models with varying levels of accuracy, that is, with varying levels of prediction error.

            The real trick here is to be consistent. If we use certain criteria to say a crab or snail is conscious, we should be prepared to apply the same criteria to a robot.

            Liked by 1 person

          3. I'm surprised you set the criteria for your robot so low, at the crab level. Since the capabilities available for testing with a crab are somewhat limited, it would be nearly impossible to ever know, from the capabilities you can test, which might require "consciousness".

            Of course, we can always gather information on the crab and crab robot, human and human robot. For example, we can run an MRI. If it looks like a brain…

            Liked by 1 person

          4. The crab reference was really just meant as an example, not my standard. I had this recent article in mind noting that the UK has determined that decapods, including crabs and lobsters, are sentient. https://gizmodo.com/octopuses-crabs-and-lobsters-are-sentient-beings-say-1848105287
            Mind you, they don’t plan to actually do much about it, but they have reached that determination, apparently with the help of a team led by Jonathan Birch (the dimensions of consciousness guy). Of course, not everyone would agree with their criteria:

            The team considered eight different criteria for sentience, including the presence of pain receptors and integrative brain regions, the capacity for associative learning (as opposed to habituation and sensitization), and “flexible self-protective tactics used in response to injury and threat,”

            Liked by 1 person

          5. Even if we assume there might be non-biological "pain receptors and integrative brain regions" analogs for robots, their presence wouldn't demonstrate consciousness, since we can't be sure if the robot feels anything or is just acting like it feels something. Learning and self-protective tactics could be done with non-conscious machines.

            None of those criteria work with robots.

            Do you have anything else?

            Liked by 1 person

          6. Again, not my criteria. (When arguments about criteria erupt, my response is the functional hierarchy.)

            My only point is that whatever criteria someone does use, they should be consistent with it, not apply one set of criteria for life and another for other things. Of course, one of their criteria could be that it must be alive, which is fine, but then they should be prepared to provide criteria for what makes a system alive.

            Like

          7. I’m still looking for your criteria since you seem pretty convinced it is just a functional thing. There must be some function that only a conscious robot could do and an unconscious one could not. Anyone who asserts that a robot could be conscious should be prepared to tell what that function is.

            Like

          8. How do we measure distance; what’s our yardstick? A dead robot might be seen as not much better than a rock. We and animals are made from plants and other animals; robots are made from, well, rocks. 😀

            Liked by 1 person

          9. Obviously I'm talking in terms of functionality and behavior. If we start talking about substances, I'm not sure that, if you put a pile of the base elements in a human body next to a pile of the base elements in a robot's body, most people would have any intuition about which one was more like us. Is it easier to relate to a slab of graphite than one of silicon? 🙂

            Like

          10. I can’t speak for others, but I see much more in common with the carbon than the silicon! We’re mostly oxygen (65.0%) and hydrogen (9.5%) — humans are 74.5% just gas. 😉 We’re 18.5% carbon with a hint (3.2%) of nitrogen and traces of other things. So, yeah, mostly “graphite” — certainly organic (we’re 96.2% CHON). My laptop, on the other hand,… 😀

            Of course you meant functionality; I knew you did. That’s why I teased you with a dead robot! 🙂 To continue being silly, I think it points out another difference. If you had a small dead robot (say all its motors and circuits burned out), you might, instead of a rock, use it as a paperweight. But would you so use a dead rabbit? 😮

            Like

          11. Yep, we’re all mostly full of hot air. 😁

            If the rabbit were burned out, but not too much, it might make a good meal. (Provided we got to it soon enough.) Likewise, any parts from the robot could conceivably be salvageable. Now, if we could figure out how to make a robot out of gingerbread… 🙄

            Like

          12. There again, though, we could salvage the rabbit to make more us, and would salvage the robot only to make more robots. The differences seem to pile up here. 🤔

            I suspect gingerbread robots would quickly go extinct, especially around this time of year! 😋

            Like

          13. I think I read somewhere that someone has already deployed armed autonomous drones. They fly off and seek their own targets using pre-programmed parameters and the magic of AI. They have a small charge (all they can carry), so they fly up next to their target and detonate. 💥

            Like

          14. Like the bombs in Dark Star? 😀

            Yeah, that was the article I read. I’ve wondered why we haven’t seen more of that. Those drones are practically the hunter-seekers from Dune.

            Like

          15. Ah, yes, be sure not to get your bomb excited on a false alert too often.

            The hunter-seekers in Dune are interesting, because due to AI being taboo, a human operator had to be nearby. Actually, I just realized something about Dune, one reason that universe disturbs me. That assassin had to be holed up for months on a suicide mission just for a chance to kill the Duke’s son. This is something that pervades the whole series. The Duniverse bans AI, so humans have to become like machines. I wonder if Herbert did that on purpose.

            Liked by 1 person

          16. The hierarchy is a sort of mish-mash of things we can find ways to measure externally (for example, whether an organism can distinguish red) with other things, like imagination and introspection, which are difficult to measure. There isn't any function I can see that can be measured that couldn't be done by machines. In the end, if we can measure a behavior, we can probably develop a machine that can perform it.

            Liked by 1 person

          17. Imagination can be measured by testing whether the system can plan its actions, such as a crow figuring out how to get to a piece of food in a novel apparatus. And we measure introspection all the time, in humans, by having them report their states. Of course, measuring it in any system without language is difficult. There have been tests demonstrating various levels of metacognition in different non-human species, but full-on introspection may be unique to humans, for now.

            Like

        2. Jim,

          Just an FYI for you or anyone else who might be interested. This video offers some pretty compelling empirical evidence of mind being quantum.

          Roger Penrose – Webinar – Day 3 – YouTube
          The Science of Consciousness Conference, Aug 13, 2021

          Liked by 1 person

        3. Not only is consciousness an internal state, but some of it is interoception. In other words, internal states which are *about* other internal states. Some conscious states, like color perception and temperature perception, include both exteroceptive and interoceptive aspects.

          Liked by 1 person

  11. In case those of you interested in these matters haven't seen it already, there's an interesting presentation (video and transcript) on Essentia Foundation by Iain McGilchrist: "Consciousness is the stuff of the cosmos". I think I agree with most of it, until he leans a bit too far toward it all being consciousness and away from matter for my taste, but maybe that's just a matter of degree.

    Liked by 2 people

    1. Interesting talk: https://www.essentiafoundation.org/seeing/iain-mcgilchrist-consciousness-is-the-stuff-of-the-cosmos/

      If I’m unraveling his language correctly (and I didn’t watch the actual talk or Q&A, just skimmed the transcript), I think I agree with much of what he’s saying. But I’m not wild about the language he’s using. It seems to aggressively invite misinterpretation. It’s a bit too Continental in form for my tastes, which often gives the impression of saying something radical, but when carefully unpacked amounts to something more standard.

      Still, food for thought. Thanks for sharing!

      Like

  12. It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

    Liked by 1 person

    1. When talking about a conscious machine, do you mean a machine capable of phenomenal consciousness, capable of feeling?
      As far as we know, the performance of feeling is a performance of life. And the problem is that we do not know what life is. We know partly how it works, but we do not know its nature. We cannot fabricate life. We can only modify some of its characteristics (e.g., modify the genome of an existing cell).
      Can we think of imitating something we do not understand? I don't think so.
      This brings the subject of strong AI back to the unknown nature of life, and it positions artificial life to come before strong AI.
      An option for artificial life would be to merge life with silicon, to somehow bring a true "stay alive" constraint into computers ("meat in the computer"?). (https://philpapers.org/archive/MENTTC-2.pdf)

      Liked by 2 people

      1. Christophe,
        I’ve been wondering how you could be so sure that you’ve developed an evolutionary perspective of how something like a great ape might evolve into something like a human, and yet skip whatever it is that creates something like a great ape. This seems like a substantial omission in terms of evolution. But if you consider phenomenal experience to be an inherent component to everything that lives, then I think I understand. In that case “consciousness” is just “life”, though here the evolution of the human version of it might be a more interesting question. Conversely for me it’s the opposite.

        I wonder however how comfortable you are with the implications of all life phenomenally experiencing its existence? When I go out to spray my weeds with herbicide, I personally suspect that these forms of life feel nothing as they die. But do you consider me to be inflicting any suffering upon them? And from your view would it be possible for someone to be kept alive and yet not experience anything, as is widely presumed with anesthesia for surgery, or even being in a coma? Conversely from my own perspective the brain is a non-conscious machine, and many of them often produce a phenomenal dynamic which constitutes the conscious form of function, or something which is quite different from the purely non-conscious variety of other brain activity.

        Like

        1. Eric,
          What we know about the evolution of our universe follows well accepted steps: Energy => Matter => Life => Human consciousness. My interest in the evolutionary nature of the human mind is with the last step, from life to human consciousness. Pre-human primates are the starting point, as written in the presentations.
          Human consciousness is phenomenal consciousness AND also self-consciousness. Feeling is an inherent component of life, but self-consciousness is not. "Consciousness" is not just "life". (As said earlier, it is too bad that philosophy of mind tends to disregard self-consciousness. "Consciousness" tends more and more to be understood as "phenomenal consciousness". This creates confusion.)
          If we consider that phenomenal experience is feeling, then we have to accept all living entities as subject to phenomenal experience. We agree that mammals have feelings (pain, fear…). But does this imply that a paramecium feels the presence of acid in its vicinity? Not like a mouse feels the bite of a cat, for sure. But I think paramecia have some elementary kind of feeling (like your weeds receiving herbicide). The problem with the concept of phenomenal feeling is that it has been introduced as a human feeling only, without caring about biological theories (again, see S. Cunningham, "Philosophy and the Darwinian Legacy"). Self-consciousness can help: human phenomenal experience is the animal one plus self-consciousness.

          Liked by 1 person

          1. Christophe,
            The thing is that if you begin from the premise that everything living is also sentient, which is to say also feels its existence in a personal or value based capacity, then you also diverge from what the vast majority of us believe. It’s generally believed that life must at least have central information processors (or “brains”) to create a sentience dynamic, and from there only when such machines are incited to do so. So here your proposal of how something like an ape might have evolved into something like a human might be dismissed simply given that what precedes it seems strange to us in general. If something merely biological like a tuft of grass can personally suffer as it’s killed, then what are the mechanisms by which it does this suffering? Don’t many genetic processes associated with “life” continue long after “death” and into its decomposition? And why would the human seem to lose all phenomenal experience when under brain altering anesthesia given that this should merely affect certain brain function rather than end the vast majority of its neural and genetic function in general?

            The suggestion here is that certain brains might create something which sometimes experiences phenomenal dynamics that we humans might refer to as “hunger,” “vision”, “sound”, “pain”, and so on if we were to have such experiences ourselves. So if you dispute that brains do this sort of thing then your larger “ape to human” stance might not get far without explaining why or how it is that life itself constitutes entities which phenomenally experience their existence. I wonder if your model is able to make such a “brain adjustment”, and so be altered to conform with certain brain based conceptions of phenomenal experience?

            Like

          2. Christophe,

            I quickly surveyed your website home page and you have an impressive array of essays to your credit demonstrating your commitment to the study of consciousness. I commend you for the personal capital that you have invested in this endeavor.

            Having written and published two books myself and participated on a few different blog sites over the last two or three years, it is my experience that few individuals have any taste for new ideas that do not conform to their own confirmation biases. As a result, I have dialed back my active participation on these sites, focusing on my own research. I pop in from time to time in an attempt to articulate how my own views have changed and continue to evolve, in case anyone is interested.

            Just an FYI, I am in agreement with your assessment that sentience is a fundamental property of all living systems. I am currently working on a model which significantly expands the concept of sentience to all systems, regardless of whether they are organic or inorganic. Phenomenal experience is unique to the system we call mind, but sentience is a ubiquitous property of matter. What we recognize as the four forces of nature (electromagnetism, the nuclear force both strong and weak, as well as this thing we refer to as gravity) are sentient experiences that physical systems undergo, experiences that are non-conceptual, with no processing required to generate motion.

            It is the experience of non-conceptual sentience that is responsible for motion resulting in form in non-conscious systems, and equally, it is the experience of conceptual sentience that is responsible for motion resulting in form in conscious systems. It does not matter whether that motion is the physical motion of moving an arm or an emotion, which equally stimulates a response.

            Good luck

            Liked by 2 people

          3. Reply to the post from Philosopher Eric, Nov 24, 8:58pm
            Eric,
            There is more and more evidence that plants can feel. Feeling is a basic characteristic of life, as it is directly related to survival. All living entities have to satisfy a "stay alive" constraint, which implies the capability to feel, to receive and interpret information having a relation with that constraint (as you know, I call this "meaning generation for constraint satisfaction").
            The available vocabulary and our tendency to anthropomorphize do not help to clarify the subject.
            Feeling, meaning generation, sentience, awareness, and different types of consciousness, with or without a "self", tend to multiply the possible threads and make the overall picture look quite messy.
            C’est la vie…..

            Liked by 1 person

          4. Christophe,
            I agree that things can get really messy in this corner of academia, often because it's difficult to know how a given term is being used, much less many strung together. I've noticed that quite a few popular people in the softer parts of academia seem to depend upon their unclarity or obfuscation in order to achieve and maintain their popularity. It's far more difficult to criticize a position which is given only vague or shifty parameters. Eric Schwitzgebel once did a post on the topic that I think nailed it. This leads me to compliment you on both your clarity and the fact that you aren't abandoning a position simply because it's unpopular. Like you, I hope to help others grasp my various positions, and yes, to have them offer their criticism wherever they suspect that my models might fail, so that improvements can be made when needed.

            Like

        2. Eric,

          Just an FYI on our other conversation: Roger Penrose has a YouTube video from a "Closer to Truth" episode dated October 8, 2020. It sounds like he is convinced that the system of mind is quantum and has a physics link to the instantiation of those dynamics that he did not have when he wrote his first book.

          Like

          1. Reply to Lee Roetcisoender's Nov 24 post to me (Christophe)

            Lee,
            Thanks for your support.
            Looking at expanding the concept of sentience to inorganic systems is an interesting subject. It looks pretty clear that, in addition to the 4 forces, there is something in our universe that brings particles and atoms to build up more and more complex entities. Can it come from a trend toward increasing complexity, naturally existing in addition to the 4 forces, and coming with an inter-matter kind of sentience? Perhaps you have an idea.
            Please let us know.

            Like

          2. Christophe,

            Yeah, it's pretty clear that complexity is a relational dynamic of sentient systems taking on the values of other sentient systems through emergence. If systems take on the values of other systems through emergence, the sum of those values over time will become logarithmic, resulting in consequential complexity.

            So I really think the answer to understanding the dynamic of complexity lies in examining this thing we refer to as value. According to my models, value might actually be the “universal constant” in our universe that drives motion resulting in form. For unconscious systems, the so-called 4 laws of physics are actually non-conceptual representations of that universal constant. For conscious systems like ourselves, motion resulting in form would be driven by conceptual representations of that same universal constant.

            According to this explication, for any given system that is engaged in a relational dynamic with another system where emergence occurs, the emergence of that system would be a derivative of those values, based upon what feels good for that newly emergent system in contrast to what feels bad. One could posit that sensations or feelings that are positive on this linear scale of value may actually be the impetus for the complexity and the resulting novelty that we observe in our universe.

            Liked by 2 people

  13. Mike, since you have Seth's book at hand, can you repeat for us his definition of consciousness? I'm sure he uses the words conscious and consciousness liberally throughout, so without his definition we cannot know what those statements mean.

    Speaking of definitions, of which you believe there are an uncountable number, leading to your “eye of the beholder” claim, you might recall my proposal:

    Consciousness, noun. A biological, embodied, unified streaming simulation in feelings of external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain.

    Since Chalmers is mentioned above, here’s his definition, from Sam Harris’ Making Sense:

    It’s basically what it feels like, from the first-person point of view, to be thinking and perceiving and judging.

    So Chalmers is in the "Consciousness is Sentience" definition camp, as is Damasio (and his crowd) with his more general "Consciousness is the feeling of what happens". So the "consciousness as sentience (feelings)" definition has quite a few adherents.

    Going forward, Mike, perhaps you could include the author’s definition of consciousness in your posts about consciousness books and articles. That would allow us to track the various definitions, so we can eliminate the non-factual and/or erroneous ones and see if your eye of the beholder viewpoint is factual or simply an impression. As I’ve commented previously, I don’t believe there are many credible definitions to choose from. Heaps and gobs of philosophical metaphoric theories of consciousness, but few definitions.

    As to ‘prediction’, I prefer the term ‘expectation’ because the brain isn’t foretelling anything. Rather, incoming sensory information is continuously processed as a story and an ongoing comparison with remembered stories results in the formulation of expectations. We generally experience what the brain expects to experience based on this mechanism. All of this, of course, is strictly unconscious processing—we do not consciously simulate action scenarios or resolve complex navigation as you sometimes seem to imply.

    Here’s a new thought! Although it’s a trivially obvious fact, perhaps we should consider that all memories have their origin in experience. Nothing unconscious can be recalled or utilized in the brain’s expectation processing. So there’s a three-part cycle to consider:

    expectation→consciousness (content)→memory formation

    Clearly, expectation that conforms closely to reality leads to a more successful organism, and successful actions add to the memory store, which drives expectation. All three components are required. Perhaps there's something here that hints at what consciousness is for. Thoughts?
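    To make the three-part cycle concrete, here's a minimal toy sketch in Python (the set-of-features representation of "stories", and all the names, are my own illustrative assumptions, nothing from Seth or the literature):

        # Toy sketch of the expectation -> consciousness -> memory cycle.
        from collections import deque

        def similarity(a, b):
            # Crude overlap score between two "stories" (sets of features).
            return len(a & b) / max(len(a | b), 1)

        class ExpectationCycle:
            def __init__(self):
                self.memory = deque(maxlen=100)  # remembered stories

            def expect(self, sensory):
                # The remembered story most similar to the incoming one
                # serves as the expectation.
                if not self.memory:
                    return set()
                return max(self.memory, key=lambda m: similarity(m, sensory))

            def step(self, sensory):
                expectation = self.expect(sensory)   # expectation
                content = expectation | sensory      # consciousness (content)
                self.memory.append(content)          # memory formation
                return content

    On this sketch, expectations that track reality enrich the memory store, which in turn sharpens future expectations, the self-reinforcing loop described above.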

    Liked by 2 people

    1. Stephen,
      I’m not really wild about Seth’s explicit definition of consciousness, which isn’t far from Chalmers’. From the opening pages of the book:

      Although the functions and behaviors associated with consciousness are important topics, they are not the best places to look for definitions. Consciousness is first and foremost about subjective experience—it is about phenomenology.

      Seth, Anil. Being You (p. 14). Penguin Publishing Group. Kindle Edition.

      However, as I noted in the post, his effective definition is more interesting. From the end of the book:

      The first challenge was to understand perception as an active, action-oriented construction, rather than as a passive registration of an objective external reality.

      …The second challenge turned this insight inward, to the experience of being a self. We explored how the self is itself a perception, another variety of controlled hallucination.

      …The final challenge was to see that the predictive machinery of conscious perception has its origin and primary function not in representing the world or the body, but in the control and regulation of our physiological condition.

      …Everything in conscious experience is a perception of sorts, and every perception is a kind of controlled—or controlling—hallucination.

      Seth, Anil. Being You (pp. 280-282). Penguin Publishing Group. Kindle Edition.

      If you want more detail, you’ll have to read the book. 🙂

      It’s not clear to me that all memories have their origin in conscious experience. There are unconscious memories. I think they’re what often lead to intuitions which seem to come from nowhere. Certainly a lot of learning is unconscious. But arguably one function deeply tied to consciousness is sophisticated observational learning. You might find this post interesting: https://selfawarepatterns.com/2020/04/07/unlimited-associative-learning/

      Liked by 1 person

      1. Seth’s “Consciousness is … about subjective experience” isn’t really a definition (although he calls that a “folk definition,” see below), but more of a remark about the contents of consciousness (see below again) and perhaps a tautology. That accords with his “Everything [all contents of] conscious experience is a perception of sorts” as well. I suspect his ‘hallucination’ reference can be taken as similar to my “simulation in feelings,” since the content of feelings is completely and vastly different from the physical events they represent.

        I found that Seth was also interviewed by Sam Harris in Making Sense. Seth remarks:

        “There’s a sort of easy, folk definition, which is that consciousness is the presence of any kind of subjective experience whatsoever. There is a phenomenal world of subjective experience that has the character of being private, that’s full of perceptual qualia, or content—colors, shapes, beliefs, emotions, other kinds of feeling states, and this world can go away completely in states of general anesthesia or dreamless sleep. It’s very easy to define consciousness that way. To define it more technically will always be a bit of a challenge.”

        Although he is reluctant to provide a precise “technical” definition, I submit that his “other kinds of feeling states” puts his definition in the sentience camp, leaving us, at this point, with a single credible factual definition of consciousness.

        I find your statement "There are unconscious memories" very surprising. Memories that cannot be accessed? What would their content be? Is there a separate short-term memory storage and conversion to long-term memories in a separate memory region? I've never come across any assertions that would support that. Do you have any references that intuitions aren't simply the result of metaphoric pattern matches, as in "A is like B, so perhaps some B-properties apply to A"?

        I failed to mention that my proposed "expectation→consciousness (content)→memory formation" cycle shows why p-zombies cannot exist. The original person's consciousness will result in memory formation, and the expectation resolution process will possibly have different results than otherwise. Those altered expectations can and will lead to behavior different from that of the p-zombie. I believe that if the production and operation of consciousness changes anything in the brain, then the p-zombie is no longer identical to the original conscious person.

        Liked by 1 person

        1. What you found largely matches Seth's definition at the beginning of the book, albeit without the "folk" qualification. I agree it's largely tautological. That's fine for a Merriam-type definition, but it doesn't really say anything beyond that.

          On unconscious memories, the best thing to do is google "implicit memories". The Wikipedia article in particular discusses the research history, with some citations.

          I actually don’t think classical p-zombies are meaningful under any physicalist theory. The concept is only coherent under some form of non-physicalism (meaning it requires as a premise something it purports to demonstrate), and even then it implies philosophical epiphenomenalism.

          Like

          1. BTW, Seth goes on to say, “To define [consciousness] more technically will always be a bit of a challenge.” What distinguishes a technical definition from one like “a simulation in feelings”? Or was he just dodging the issue because he doesn’t have a fact-based definition of consciousness to talk about?

            So Wikipedia's "Implicit memory" article starts out with "One of its most common forms is procedural memory, which allows people to perform certain tasks without conscious awareness of these previous experiences; for example, remembering how to tie one's shoes or ride a bicycle without consciously thinking about those activities." Uhhhh… memories allow that? A memory of bike riding? Seems to me more likely that learning to ride a bike has established those physical capabilities as almost reflex-like.

            Also: “This provided evidence for specific and long-living influences of past memory even when participants were unaware of its influence” and “As people mature, they are usually capable of intentional recollection of memory, or explicit memory.”

            These don’t seem to be the “memories formed unconsciously” that I have a difficult time believing in. This seems more like memories that influence processing without being first explicitly recalled.

            Like

          2. Implicit learning isn't unconscious learning. There is no evidence for learning occurring while someone is unconscious. That people can learn while not totally conscious of what they are learning (or at least not being able to put it into words in an explicit manner) is not the same as learning while unconscious. In implicit learning, there is feedback from the environment that consciousness mediates.

            Liked by 1 person

          3. Well, as usual, this gets into how someone defines “consciousness”. Which means another hierarchy! (You are no doubt overjoyed.)
            https://selfawarepatterns.com/2021/03/20/a-perceptual-hierarchy-of-consciousness/
            The TL;DR is that we have four categories of mental content.
            1. Content that is the result of introspection.
            2. Content that is the current target of introspection.
            3. Content that is within the scope of potential targets of introspection but isn’t currently being introspected.
            4. Content outside the reach of introspection.

            Explicit memory must make contact with 1 to be explicit. It's worth noting that explicit report is the gold standard for detecting conscious content in science. Other sources of detection must be for content that has already been demonstrated to be reliably accessible by 1, at least in humans.

            Implicit memory can be in 2 that 1 is getting wrong, or in 3 or 4. If you consider 3 conscious, then you might well consider a significant portion of implicit memory to happen consciously, consciousness that “overflows” report. But at least some of it appears to happen in 4 (such as sensory impressions lasting less than 50 ms that nevertheless have an effect on mental states).
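            If it helps, those sorting rules can be stated in toy code form (Python; the enum names are just my own shorthand for the four categories above, nothing more):

                from enum import Enum

                class Content(Enum):
                    INTROSPECTION_RESULT = 1   # result of introspection
                    INTROSPECTION_TARGET = 2   # current target of introspection
                    INTROSPECTABLE = 3         # in scope, but not currently introspected
                    UNREACHABLE = 4            # outside the reach of introspection

                def can_be_explicit(c):
                    # Explicit memory must make contact with category 1.
                    return c == Content.INTROSPECTION_RESULT

                def can_be_implicit(c):
                    # Implicit memory: a 2 that 1 gets wrong, or a 3 or 4.
                    return c in (Content.INTROSPECTION_TARGET,
                                 Content.INTROSPECTABLE,
                                 Content.UNREACHABLE)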

            Like

          4. This isn’t that hard. Show any evidence of learning beyond maybe simple stimulus-response sort of stuff while a person is deeply asleep or under an anesthetic.

            Nobody learns a language or to ride a bike while they are unconscious. They might not be able to self-report that they learned something, or describe some rule for what they learned, but whatever they learned, they learned it while awake.

            Liked by 1 person

          5. What I think you are missing is that learning by its nature requires a matching of internal representations with the external world with constant correction and adjustment. That matching doesn’t take place without involvement of conscious processes even if it also might involve unconscious ones.

            Like

          6. I’m not wasting time explaining the distinction between wakefulness and awareness.

            On learning, I’ll remind you that it’s widely acknowledged that sensitization, habituation, and classical conditioning don’t require conscious cognition. Even localized operant conditioning doesn’t require it. Ginsburg and Jablonka cite unlimited associative learning, roughly equivalent to Feinberg and Mallatt’s global operant learning, as the type requiring minimal conscious awareness. They cite extensive sources, and I believe you have their book. It’s worth noting that they’re working with a pretty liberal version of consciousness.

            Like

          7. I think Mark Solms argues that part of the role of consciousness is learning so that what is learned can then be made automatic and unconscious. The ultimate goal is to be unconscious but since there are always nuances and new challenges in the world the goal can’t be reached in a final way.

            Like

          8. Stephen,

            The definition of consciousness is pretty straightforward; it contains two fundamentally basic axioms:
            1. "Consciousness is a localized field of sentient experiences." A. It's localized because it occurs within the confines of a physical brain. B. It's a field because the experience is multifaceted and consists of many things. C. All of those multifaceted experiences have feeling associated with them, Chalmers' "what it feels like".

            2. "The experience itself is a conceptual representation of a fundamental reality." A. Consciousness is not a direct experience, it's a conceptual one. The rock that one observes smashing one's little finger is not in one's head, and neither is the finger. The rock and the finger, as well as the pain one sentiently experiences, are a 100%, unadulterated "conceptual experience". B. Just because we have not figured out what this fundamental reality conceptually represented in our mind actually is does not negate the fact that all of those conceptual representations reduce to a fundamental reality.

            Consciousness; Noun:
            Consciousness is a localized field of sentient experiences that are conceptual representations of a fundamental reality.

            From here we can all argue about the contents of the experience and whether or not we actually feel anything, what's real and what isn't, etc. One can also play the game and posit that the ultimate goal of consciousness is to become unconscious, a new species of philosophical zombies. The last time I posted this definition, Mike wanted to obfuscate it right out of the gate by asking: "it all depends upon what you mean by experience". This is all just a game, and it's not limited to the participants of this blog; it's the same game academics play. It seems like the paradigm of collaboration among individuals is a ship that sailed a long time ago.

            Party on……

            Liked by 1 person

        2. Stephen,

          Welcome back to the ongoing sideshow of the larger circus. A technical definition of consciousness is relatively straightforward, albeit none of the paid faculty of the adult day care institutions of academia seem capable of accomplishing this simple, mundane task. Everyone is hung up on the contents of our experience, which is fine, but when the contents of the experience are conflated with the definition, this sleight of hand only creates more confusion.

          As a "pragmatic" materialist myself, as I see it there are three hard problems associated with consciousness:
          1. The hard problem of phenomenology. This problem may be within the reach of physics to resolve.
          2. The hard problem of sentience. This is a metaphysical problem not a scientific one.
          3. The hard problem of matter. This is also a metaphysical problem, one that has to be succinctly articulated before the hard problem of sentience can even be addressed.

          Party on…..

          Liked by 2 people

          1. Lee, what is your relatively straightforward technical definition of consciousness? As I just commented, Seth remarks that "To define [consciousness] more technically will always be a bit of a challenge." I repeat: what distinguishes a technical definition from one like "a simulation in feelings"? Are the two of you talking about an anesthesiologist's metrics?

            I still believe Chalmers’ Hard Problem of Consciousness only exists for mind-matter dualists.

            Liked by 1 person

          2. Interesting definition Lee, but I have questions. I’ll repeat my definition here to save you scrolling:

            Consciousness is a biological, embodied, unified streaming simulation in feelings of external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain.

            And yours:

            Consciousness is a localized field of sentient experiences that are conceptual representations of a fundamental reality.

            1. If it’s localized to the brain, why not say so in the definition? I specify “produced by activities of the brain” as well as biological and embodied.

            2. Your introduction of the term ‘field’ seems unnecessary. What you seem to be saying is that the contents of consciousness seamlessly combine separate sensory tracks and emotions. I specify that the contents of consciousness are all feelings, making consciousness itself a feeling.

            3. I trust by ‘sentient’ you mean ‘feeling’ which was the definition of the word before all the sci-fi authors wanted it to mean ‘intelligent’ and/or ‘conscious.’ The word is derived from Latin sentientem (a feeling); the ability to experience sensations. Experiences don’t have feelings associated with them—experiences are feelings.

            4. The word 'conceptual' seems out of place here and appears to have a lot of baggage that doesn't fit your usage. Google the definition to see what I mean. The first one that pops up is "relating to or based on mental concepts," which in your usage seems circular. My definition specifies that consciousness is a simulation (representation, if you will) because the contents of consciousness are nothing at all like the physical and/or neurobiological reality they represent. I've repeated this many times: there's no color in the world, nor brightness or darkness; the world is completely silent; your digestive system biochemistry is not nauseous. On and on—qualia represent (simulate) physical reality and only exist in the mind. They're not concepts.

            Now that I’ve critiqued your definition, how about your analysis of mine? What can be improved? What’s unclear? Anything non-factual?

            I maintain that we can locally develop a credible, fact-based definition of consciousness and eliminate all of that “eye of the beholder” equivocation and nonsense. How about joining the effort?

            Liked by 2 people

          3. These definitions of consciousness are interesting. But let’s not forget that we need to be self-conscious in order to talk about consciousness. We need to be self-conscious to talk about our feelings and experiences. We need to consider ourselves as entities capable of mental and physical actions (to think and to type). This is implicit but we cannot do without it. The capability for actions is needed for actions, and self-consciousness has not really been taken into account so far. Aren’t we putting the cart before the horse when forgetting to address the reality (and the nature) of self-consciousness?

            Liked by 1 person

          4. Stephen,

            Your definition reads:

            Consciousness is a biological, embodied, unified streaming simulation in feelings of external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain.

            Your use of the words "biological, embodied, unified streaming" is fine, as is the word "feelings", which says essentially the same thing as "…a localized field of sentient experiences." I chose the phrase "localized field" to contrast it with how idealism expresses consciousness as a "universal, or global field". "Localized field" takes the definition out of woo-woo land.

            "Simulation" is problematic because it implies, implicitly or otherwise, that the experience is not real, which puts it in the same category as a controlled hallucination or an illusion of some kind.

            "…external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain." This phrase focuses too much on what's occurring in the brain, or on how feelings emerge from those activities.

            Whereas "The experience itself is a conceptual representation of a fundamental reality" gets right to the core of the experience itself. I chose the term "conceptual representation" to contrast it with a "non-conceptual representation", where a non-conceptual representation would be the sentient experience of a plant or a single-celled amoeba, for example. (Pretty sure you have a problem with the notion of pansentientism.)

            A definition has to apply "universally" in order to be defensible. There are those who will contest that conceptual representations are objects in the world that we perceive through our senses, and will exclude objects of our imaginations as representations. My definition does not make that distinction and explicitly states that all of the contents of our experience are representations of a fundamental reality. That might come across as a sweeping statement; nevertheless, it's defensible because one cannot get something from nothing. (Trying to figure out what that thing actually is becomes another topic of discussion.)

            Liked by 1 person

          5. Christophe,

            “…self-consciousness has not really been taken into account so far.”

            Good point, and I appreciate you bringing it up. I get what you are saying here. Personally, I wouldn't choose the word self-consciousness; I'd refer to it as self-awareness. This simple distinction avoids the possibility of obfuscation or employing a tautology.

            The awareness of self is absolutely essential, no question about that point, but fundamentally, the awareness of self is just another idea (a concept) that the mind constructs from the data it receives from the fundamental reality in which it exists.

            As an analogy of what I'm referring to here: in physics it's recognized that at a fundamental level every classical system is literally a quantum effect. Likewise, the idea of a self (the concept of) along with every other idea (also concepts of) is an effect of a fundamental reality impinging upon us as a system.

            So, I'm just not sure how incorporating and/or emphasizing the importance of a self, along with its counterpart of self-awareness, in a definition contributes to the definition. From my perspective, the idea of a self along with its counterpart of self-awareness would be part of the contents of consciousness. It's easy to conflate consciousness with the contents of the experience; that's why I think a simple yet concise and succinct definition trumps an overload of too much information. This will avoid all the pitfalls associated with conflation, obfuscation, or tautologies.

            Liked by 1 person

          6. Lee,
            You are right. The concept of self is complex enough by itself. For our discussion it would be better to use "consciousness of our own entity" (and probably introduce the reflectivity of consciousness; in French we use "reflective consciousness"). And I still strongly feel that a consciousness of our own entity is mandatory when addressing our feelings and our experiences. That consciousness has to exist to make possible any thinking about oneself. It is a bit surprising that this subject has not been highlighted before.

            Like

          7. Christophe,

            “And I still strongly feel that a consciousness of our own entity is mandatory when addressing our feelings and our experiences. That consciousness has to exist to make possible any thinking about oneself.”

            Absolutely, I couldn’t agree more. And that a concise, succinct definition eludes everyone does not surprise me. I’ve been all over the map myself with a definition for the last three years or so. After spending a fair amount of personal capital conversing with idealists, materialists and academics, I’ve come to see why a definition is so difficult. I don’t want to impute motive here but I think the main reason academics refuse to commit to a definition is because once committed, one has to be able to defend their position; and in order to defend a definition it has to “hold universally”. If an idea or concept does not hold universally, someone will find the exclusions and tear a definition apart, usually in a public domain.

            “It is a bit surprising that this subject has not been highlighted before.”

            Rarely will you find intense discussions about reflective consciousness on a materialist blog site. In contrast, reflective consciousness is probably the most discussed topic on idealist blogs. Many of the insights I've gained for developing my current definition are a result of discussions with idealists, particularly those who practice the Zen Buddhist tradition. Though I am not an idealist myself, I owe those folks a great debt, because I don't think I could have developed a "defensible" definition without understanding their point of view. I consider myself a "pragmatic materialist" because we live in a material world and every system is based upon those physics. A physical universe is what we have, so material and the dynamics of that material are what we have to work with.

            Like

          8. Lee,
            Phenomenology does not take into account the need for a consciousness of our own entity, mandatory when addressing our feelings and our experiences. What phenomenology looks at is a minimal form of self-consciousness as a constant structural feature of conscious experience. And 'this immediate and first-personal givenness of experiential phenomena must be accounted for in terms of a pre-reflective self-consciousness' which is present whenever we are undergoing an experience (Gallagher & Zahavi, 2010).
            So phenomenology replaces self-consciousness by the postulate of pre-reflective self-consciousness. I do not think it is reasonable and prefer to consider an evolutionary approach to self-consciousness that naturalizes reflectivity (https://philpapers.org/rec/MENRAA-4).

            Like

          9. Christophe,

            After reviewing the link you posted, I agree with your assessment. Furthermore, an evolutionary approach to self-consciousness naturalizes reflectivity and phenomenology has to be viewed from this perspective.

            Nice work……

            Like

          10. Christophe, self-consciousness isn't a separate type of consciousness. I believe the 'self' is a story, perhaps a meta-story, and becomes the contents of consciousness when considered.

            Like

          11. Christophe

            … and self-consciousness is somewhat anthropomorphic. No one can know whether all conscious organisms feel themselves as a ‘self’.

            Like

          12. Stephen,
            Agreed about the anthropomorphic aspect of our human self-consciousness, and about the self as a story becoming the content of consciousness when considered.
            But an evolutionary approach leads us to position self-consciousness as a performance, as the capability to think about one's own entity represented as existing in the environment, like conspecifics are represented (https://philpapers.org/archive/MENPFA-4.pdf).
            As introduced with Lee, the usage of the word "self" is to be avoided as much as possible (Hume, the possible animal self, etc.). I prefer using "auto-representation" or "representation of one's own entity".

            Like

          13. Christophe,

            “..the capability to think about one’s own entity represented as existing in the environment, like conspecifics are represented.”

            This is a great contribution, Christophe, and I hope the contributors to this blog can appreciate the clarity and the profound implications this distinction has for the discipline of philosophy. Because essentially, "a representation of one's own entity", in contrast to a "self", negates the circular and psychical trap of solipsism.

            Like

          14. Lee, more on definitions of consciousness (please excuse any repetitions):

            Regarding your Axiom #1B: “It’s a field because the experience is multifaceted and consists of many, many, many things.”

            That's a novel definition of the word 'field' and is strictly your own as far as the dictionaries I consulted are concerned. I don't believe you can escape woo-woo land with one-off personal definitions like that. My definition captures the "many, many things" with the qualifier 'unified' applied to "simulation in feelings." The "many, many things" are not consciousness but the contents of consciousness—the feelings.

            The specification “localized field” therefore means your definition includes non-factual statements which are not allowed in the definition of a phenomenon. I suspect you want to make something more out of ‘field’ on your way to pansentientism. Your “conceptual representations” is pretty shaky too. Concepts require minds with a foundation of learning. How is an infant’s feeling of hunger conceptual?

            Yes, I have a problem with "…sentient experience of a plant or a single celled amoeba…". Where is your evidence that a plant or amoeba feels?

            I’m reminded that in April you defined consciousness as a ‘force’, which it clearly isn’t. When I quoted the physics definition of force and asked about your units of measurement for consciousness (newtons? dynes?), you dropped out of the thread.

            Regarding your comments about my definition:

            Simulation doesn't imply unreality. It means that the experience and the physical origins of the experience in the organism and the world are totally different. Particular wavelengths of light are seen as colors, but colors aren't wavelengths, nor are the sounds you hear sound compression waves. A computerized mathematical simulation of the weather is quite real too, although it's clearly not the weather.

            “External and internal sensory events (physical sensations) and neurochemical states (emotions)” specifies the origin of feelings (the content of consciousness) which obviously occur in the brain. When defining consciousness, why ignore the brain that creates it?

            Have you or PhilEric considered the fact that every metaphysical System of the World constructed by philosophers and/or metaphysicians over the last few thousand years is Wrong? Perhaps your metaphysical ‘pansentientism’ is the very first correct System of the World but, based on your explanations, the odds seem pretty slim.

            Like

          15. Stephen,

            Let's see if I got this straight, and please correct me if I'm wrong. The fact of the matter is that every metaphysical System of the World constructed by philosophers and/or metaphysicians over the last few thousand years is Wrong? Under those circumstances, the last thing one should do is be innovative and come up with a metaphysics that is unique, one that is original, and one that is defensible because it holds universally.

            “How is an infant’s feeling of hunger conceptual?”

            “Something needs attention, I know not what.” That feeling of hunger is a conceptual representation of a low-value experience (feeling). Because for the infant, just as it is for us, the feeling of hunger is an idea, my friend. The only thing that separates our experience of hunger from an infant’s experience of hunger is that we’ve made the correlation and identified the infant’s “I know not what.”

            Here’s another tip: Do not adjudicate my model of “pansentientism” based upon information that I have not provided.

        1. He did say something like that. It often seems to me that Hume got a lot of stuff right.

          For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception. When my perceptions are removed for any time, as by sound sleep; so long am I insensible of myself, and may truly be said not to exist. And were all my perceptions removed by death, and could I neither think, nor feel, nor see, nor love, nor hate after the dissolution of my body, I should be entirely annihilated, nor do I conceive what is farther requisite to make me a perfect non-entity. If any one, upon serious and unprejudiced reflection thinks he has a different notion of himself, I must confess I can reason no longer with him. All I can allow him is, that he may be in the right as well as I, and that we are essentially different in this particular. He may, perhaps, perceive something simple and continued, which he calls himself; though I am certain there is no such principle in me.

          But setting aside some metaphysicians of this kind, I may venture to affirm of the rest of mankind, that they are nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement.

          A TREATISE OF HUMAN NATURE
          https://www.gutenberg.org/cache/epub/4705/pg4705-images.html

      2. I think this is a sign of sanity: refusing to engage in premature definition. Tautologies and truisms can broadly outline a place to look and to do science. *After* that science has progressed, then one can go back, if truly necessary, and give tighter and more controversial definitions (or partial definitions).

        1. Maybe. If the initial definition actually does productively outline a place to look and do science. I’m not convinced simply saying “subjective experience”, “phenomenology”, or the wretchedly common “something it is like”, does that. Honestly, I think those phrases, along with the common unwillingness to even try clarifying them, become barriers to effective investigation.

          1. All three of those phrases point, at the very least, to the difference between dreaming/wakefulness on the one hand, which is like something, and dreamless sleep on the other. So that gives you one place to look.

          2. I struggle to see it. It seems like the “like something” phrase, if we take the common meanings of “like” and “something”, is literally meaningless. It only serves as a tag, a hook, for people to hang an assumption bag on. But if we never open the bag and examine the actual assumptions, it ends up being a block to exploration (at least scientific exploration). Not examining those assumptions gives the illusion of simplicity and agreement, but only because we’re hiding the complexity and disagreements.

          3. Yes it hides the disagreements – that’s the point of avoiding premature definition. Perhaps you would say that Seth should have said “I refuse to give a definition until after we learn more.” To me, it’s tomayto, tomahto.

          4. Mike, you wrote: “It seems like the ‘like something’ phrase, if we take the common meaning of ‘like’ and ‘something’ is literally meaningless.”

            P. M. Hacker rips Nagel’s phrase to shreds in his paper, “The sad and sorry history of consciousness: being among other things a challenge to the ‘consciousness studies community’”.
            See the list of his papers at:

            https://www.pmshacker.co.uk/selected-papers

          5. You’ve shared this paper with me before. I don’t remember too many of the details, but I definitely remember thinking it made many excellent points. I especially like how he begins, by pointing out just how recent the concept of consciousness actually is, and that even the early modern version is different from the one discussed by late 20th century philosophers.

  14. Stephen,
    I’m pleased that you’ve shown up to possibly balance out what I started earlier with Mike. My perception has been that in a sense you and he are relative opposites: you associate consciousness with some kind of life-substance-based quality of the brain (just as a diamond may effectively be defined by substance-based properties), while he associates consciousness with whatever seems to function consciously regardless of what it’s made of, somewhat as a lock needn’t be made of any particular substance. I said I’d try to stop criticizing each position in a blanket capacity, but rather only to the degree that it ignores whatever spooky implications it might have. Mike seemed to play along by refining his functionalism to processism. Theoretically this hacks off extraneous human conceptions of what’s functional and puts things more in terms of causality itself. Furthermore, he acknowledged that various substances do employ their own unique causal processes, and thus seems to accept the identity theorist’s point in that regard. The one issue we left open is how to handle the implications of various anti-functionalist thought experiments, such as the claim that something should feel what we do when our thumbs get whacked if paper with the right markings on it is converted into other paper with the right markings on it. How might causal dynamics exist there? I’m speaking of universal rather than mechanism-dependent algorithms, a tricky ploy by which many popular modern theorists seem to bypass the physics of phenomenality.

    With that recap, what about you? Given how primitive science happens to be in this regard today (and even noting the contributions that charismatic figures like Dr Edelman have had), are you open to the idea that the physics behind phenomenality might be somewhat like the physics of a lock rather than an entirely substance-based identity? It seems to me that brains should not create phenomenal experience by just being what they are, but by doing what they do. And even if that “doing” does happen to depend upon certain brain substances, should we not determine what they are to see if they might be employed in one of our machines to produce something that phenomenally experiences existence well? As I see it, causality mandates that human-made machines must have such dynamics as long as they implement the same physics that the brain uses. To me the converse would be spooky. Agreed?

    1. PhilEric, that’s way too much Philosophical Speak for me, so I’ll take a pass.

      However, you seem to support the proposition that if we implement the same physics in a non-biological substrate as we see in a biological phenomenon then we will achieve the same results as the biological instance.

      Can you prove that is a true proposition? Can you point to an instance?

      1. Good question Stephen. No, I can’t empirically prove that we humans already do or will ever build inorganic things that might effectively be referred to as “living”. In a practical sense that seems ridiculous to me. Imagine us constructing an inorganic seed that could be fed to grow into something like an inorganic Redwood tree, perhaps along with an entire inorganic ecosystem for support. Clearly human engineering is utterly pathetic when compared against evolution itself. But just because our engineering seems so inferior, my naturalism does not permit me to say that it’s impossible for something like the human to build something like life. Naturalism mandates that the same sort of physics must indeed result in the same sort of stuff. Agreed? Or if not, I’d be interested in how such a denial might be consistent with naturalism.

        On engineered sentience, from the article that I initially presented, my understanding is that there’s a “functionalist” position by which sentience exists in terms of something more like a substrate-independent lock, versus an “identity theorist” position where it’s more like a substance-based property (as diamonds for example are usefully defined). Either position may be used to subvert causality if done poorly, as I see it, and strong adherents on either side should be general obstacles to progress. I’ve considered Mike a strong adherent of functionalism, though perhaps now less than I formerly thought. But I’m also wondering about you, given that you’ve baked “biology” right into your definition of consciousness. If science were to empirically determine that the brain creates phenomenal experience in a way that could be reproduced in a humanly fabricated machine, could you accept that such a machine would be sentient?

        I’m pessimistic that any hard core believer on either side would also be objective enough to change their opinion on the basis of solid arguments alone. This is why I think empirical evidence should be what settles this matter some day. Let’s begin with what at least should be clear, or that the brain animates phenomenal experience producing mechanisms of some kind in order to create consciousness. Thus if scientists were to alter the function of a person’s phenomenal experience producing mechanisms, that person should then phenomenally feel this tampering and might even be able to report it. This is what I think should eventually happen to end a vast collection of silly ideas which reside in these fields today, or a beginning from which actual progress should be made.

        1. Eric,

          Naturalism doesn’t mandate anything—it’s a philosophical viewpoint. It certainly doesn’t mandate “… that the same sort of physics must indeed result in the same sort of stuff” or even that we’ll be capable of instantiating “the same sort of physics” that we see in biological phenomena. Your inability to cite a single example would seem to support that.

          As I’ve said before, all known instances of consciousness are biological and that’s a definitional fact. Should a non-biological construct prove to be sentient, the definition of consciousness would be amended to reflect that new fact.

          I sometimes get the impression that you acquired your beliefs in cause and effect, EM consciousness, etc., following which you began to search for evidence to justify those beliefs. Hypotheses are generally derived from evidence. Philosophy is not renowned for its responsiveness to evidence. Nearly all philosophers are certain that external flowing time exists, a false proposition that denies the validity of relativity physics.

          1. Stephen,
            Now I remember how this dispute between us goes. The majority of it seems to concern a divergence in how we consider it productive to define something that can be conscious. To me it seems productive to leave the non-biological potential open, given that causality mandates that one of our machines would be sentient if the right physics were incorporated. So just as the fact that the only kind of life we know of is Earth-based doesn’t lead us to define life as Earth-based, I don’t think we should define consciousness to be only life-based. But as your response suggests, you don’t deny that one of our machines would be sentient if the right physics were incorporated. That keeps you square with naturalism. I was only objecting to unacknowledged spookiness.

            On people who disagree with us seeming to display overly subjective viewpoints, get used to that. We’re all self interested products of our circumstances. Given our investments it feels good to be right and bad to be wrong. Nevertheless it’s clear that some do a better job than others of at least behaving objectively. The following post of Mike’s discusses a book by Julia Galef on “scout” versus “soldier” mindsets. https://selfawarepatterns.com/2021/05/01/the-scout-mindset/ I consider myself reasonably “scout”, though I also realize that this is expected. Those who consider themselves at odds with my positions should instead see me as “soldier”.

          2. Eric,

            I don’t believe the function of a definition is to be productive (whatever that means in a definitional context) and definitions aren’t meant to reify hypothetical and science fictional concepts. I repeat: ‘biological’ is a fact of the matter of consciousness. You cannot provide a single counterinstance. If/when the facts change, the definition will change.

            My proposal is a definition Eric. Not fiction, not speculative.

          3. Stephen,
            For a guy who seems to have a low opinion of philosophy, you sure do get philosophical! The nature of definition is a hotly disputed issue in the field. Most popular seems to be Wittgenstein’s “ordinary language” conception. Clearly my own is different.

            In any case, if you don’t believe that the function of definition is to be productive (and aren’t sure what that means in a definitional context), then this statement suggests that you do believe that the function of definition is to be unproductive (not that you should understand what this means either). Modus Tollens, I think. My advice would be to not comment upon what you don’t understand. And aren’t definitions sometimes meant to reify hypothetical and science fiction concepts, with wide approval?

            I suppose you’ve gone this way because it’s clear that my approach conflicts with your conception of definition in the factual sense of what’s known. One issue with your conception, however, is that the hypothetical organisms which the vast majority of us suspect exist on other planets (or perpetually exist in a block-universe sense) should not be referred to as “alive” unless or until we have evidence of that existence. Are you also advocating such a restriction on the “life” term? Or if not, then what’s the difference?

      2. Eric,

        I originally posted this in response to Stephen but I would appreciate your assessment as well as anyone else who chooses to critique it. So I am reposting it with minimal structural differences:

        The definition of consciousness is pretty straightforward; it contains two fundamentally basic axioms:

        Axiom number one:
        “Consciousness is a localized field of sentient experiences.”
        A. It’s localized because it occurs within the confines of a physical brain.
        B. It’s a field because the experience is multifaceted and consists of many, many, many things.
        C. All of those multifaceted experiences have feeling associated with them: Chalmers’ “what it feels like”.

        Axiom number two:
        “The experience itself is a conceptual representation of a fundamental reality.”
        A. Consciousness is not a direct experience, it’s a conceptual one, and this axiomatic principle cannot be overstated. The rock that one observes smashing one’s little finger is not in one’s head, and neither is the finger. The rock and the finger, as well as the pain one sentiently experiences, are a 100% unadulterated “conceptual experience”. The entire content of the experience is a concept, with no exception.
        B. Just because we have not figured out what this fundamental reality that is conceptually represented in our mind actually is, does not negate the fact that all of those conceptual representations reduce to a fundamental reality.

        Consciousness; Noun:
        “Consciousness is a localized field of sentient experiences that are conceptual representations of a fundamental reality.”

        From here we can all argue about the contents of the experience and whether or not that we actually feel anything, what’s real and what isn’t, how the experience arises, where it comes from, subjective experience, the awareness of self, what is a “self”, self reflection, blah, blah, blah, blah, blah…..

        1. Right Lee, I don’t see a reasonable way to dispute any of that. What you’ve said seems like what standard evidence suggests. I guess where we’ve differed in other conversations is your affinity for pansentience. But I think you’ve also made a distinction between sentience by means of anything causal, and then sentience by means of brain-based phenomenal experience, or the Cartesian me. I don’t know about the first, though the second interests me quite a lot (which isn’t strange, since this would actually be me). It’s something that can be distorted or ended for a while with drugs, or even cease permanently with death. Luke Roelofs admitted this to me over at The Splintered Mind when Eric Schwitzgebel was reviewing his book. Luke didn’t seem sure what causes this variety of subjectivity either.

          Anyway consider my own suspicion here. Are you able to effectively object to my position that computers operate by accepting input information and then processing it into new information, sometimes in an algorithmic form that animates the function of mechanisms such as a computer video screen? I think brain function may also be assessed this way, which is to say that nerve signals are transmitted to the brain and are then processed to potentially yield algorithms that animate various output mechanisms.
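
          To be clear about what I mean, here is a minimal sketch of that input-processing-output picture (the names are hypothetical, illustrating the analogy rather than anyone’s actual model):

          ```python
          def process(signal: float) -> str:
              """Process incoming information into new information (a deliberately trivial rule)."""
              return "bright" if signal > 0.5 else "dim"

          def animate_screen(information: str) -> None:
              """An output mechanism animated by the processed information."""
              print(f"screen shows: {information}")

          nerve_signal = 0.8                      # stand-in for a transmitted input signal
          animate_screen(process(nerve_signal))   # prints: screen shows: bright
          ```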

          If the brain uses algorithms to animate mechanisms specialized for creating phenomenal experience, then what might those mechanisms be? Surely they’re not hidden; rather, some not-yet-understood element of brain function that’s plain for us to see is doing this. It seems to me that beyond the electromagnetic radiation associated with certain varieties of synchronous neuron firing, nothing else fits the bill. Furthermore, we might test this theory by firing the right synchronous charges in the head in patterns that actual neurons are known to use, and so potentially alter someone’s EM-field-based phenomenal experience for subjective report. Continued empirical verification of such a hard-problem solution should at least settle this specific matter, and so be built upon for the advancement of science.

          1. Eric,

            You should check out the website below: “Discussion and review of Shadows of the Mind by Roger Penrose”, January 23, 2019, by PA Knott. It’s an easy read, and Knott himself is a hardcore computationalist who critiques Penrose’s logic, a logic which demonstrates that mind is not computational and that those mechanisms of the mind are actually grounded in a new kind of physics.

            https://quantarei.wordpress.com/2019/01/23/discussion-and-review-of-shadows-of-the-mind-by-roger-penrose/

          2. Well done Lee! Your ploy was to say some things that you know I agree with and then use that to try to interest me in some of your stuff. Of course it’s interesting that you’d try to sell an idea by means of an assessment from someone who doesn’t buy that idea. I think an even better critique would be the one that McFadden gave in an interview sometime before COVID. https://directory.libsyn.com/episode/index/show/senseandscience/id/15007949

            From minute 13 he discusses legitimate uses for quantum mechanics given its properties of tunneling, superposition, and entanglement. Apparently there is reason to suspect that biology might use it in terms of enzyme efficiency and photosynthesis. He goes through a few more as well, most of which have achieved at least some level of empirical support. Then at about minute 53 he contrasts this with the Penrose idea which about two decades earlier he had hoped to devote a chapter to for his quantum biology book. He explains that quantum mechanics is displayed in the function of things like photons, electrons, and even protons and neutrons, though not in enormous structures such as brains, or at least not unless they’re close to a temperature of absolute zero. But I do thank Penrose for his efforts since (as McFadden goes on to say), it was this consideration that incited him to explore how our singular conscious experience each moment might be compiled by the brain as one amazingly complex field of neuron produced electromagnetic radiation.
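
            For a rough sense of McFadden’s size-and-temperature point, one standard back-of-the-envelope quantity (my gloss, not a formula from the interview) is the thermal de Broglie wavelength:

            ```latex
            \lambda_{\mathrm{th}} = \frac{h}{\sqrt{2 \pi m k_B T}}
            ```

            For an electron at room temperature this comes out to a few nanometres, and it shrinks with the square root of mass and temperature, so for warm structures on the scale of a brain any quantum interference is washed out.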

          3. Eric,

            I suppose I shouldn’t be surprised that you wouldn’t even investigate this new frontier in science, since it might undermine your own beliefs. You cite what Penrose posited over two decades ago. Since collaborating with Hameroff in the late nineties and working with the neuroscience communities, advancements have been made that, to put it bluntly, are beyond belief; at least from my perspective. Below is a link from day three of the Science of Consciousness Conference dated August 13, 2021.

            This new theory is called “Orchestrated Objective Reduction Theory of Consciousness”. All of the run of the mill objections that everyone raises about entanglement, decoherence, and quantum systems not being able to sustain themselves over greater distances and time are succinctly addressed in this video. It’s over three hours long and the scientific data is intense, so if you or anyone else is “willing” to watch this presentation they had better put their “big boy” pants on.

          4. Big boy pants Lee? Hmm… What’s a witty rejoinder for that…? How about, “What if I laugh so hard that I crap ‘em?” 😃

            Actually, if I watched that three-hour-plus presentation, I don’t think I would laugh. I think it simply wouldn’t make sense to me. If things aren’t reduced down to models I already grasp at least somewhat, or if they’re contrary to my existing beliefs, there’s only so much time I’ll invest to see if the penny will drop, or maybe if my conflicting beliefs could use adjustment. I did scan the Wikipedia Orch OR entry though. Even if you think McFadden’s criticism was unfair, apparently this was the version of Penrose’s theory that he’d been reading about.

            Anyway, you seem to be developing a team that you suspect could succeed in the end. I’ve developed such a team as well. And though it’s often criticized, perhaps you should be thankful that your side is widely known and has a reasonable number of followers. That’s not the case for my side. Furthermore, this is essentially a spare hobby for McFadden, something less than his quantum biology passion. Regardless, I consider his simple proposal quite sensible and love how empirically testable it happens to be. Rare is the consciousness proposal that can claim both. In any case, success should ultimately be won in the laboratory. In that case either McFadden’s proposal should succeed or… actually, I don’t know of a second proposal with plausible testing potential right now.

  15. I increasingly feel that the problem starts as soon as we try to represent things in words. As far as we can tell, physics is continuous all the way down the stack, so as soon as we use a word to try to describe it, we are artificially chopping up the universe into over-simplified Lego bricks, and storing up problems for later. Some quite simple words like “I” “am” “conscious” each carry a lifetime’s worth of philosophical baggage with them. (Fun trying though).

    1. I’m with you on that. I’m always struck by how many philosophical arguments are really people arguing past each other with different definitions. I personally think these difficulties can be overcome, at least to some extent, but it first requires acknowledging that not everyone’s using the words the same way.

      1. Re “… it first requires acknowledging that not everyone’s using the words the same way.”

        Acknowledging that problem won’t fix it.

        We’ll never overcome the difficulties while we’re lost in a sea of equivocation and obfuscation. A common understanding of the meaning of terminology is a prerequisite to effective communication in any field. Multiple conflicting definitions of dubious quality from the eyes of the various beholders create only confusion.

        Anyone who communicates seriously about consciousness without providing a credible fact-based definition of the phenomenon is airing unintelligible nonsense and should be ignored until the deficit is remedied. Insist on a quality definition from every “beholder” or we’ll never get rid of the mountains of philosophical consciousness garbage we’re mired in.

        “What can be said at all can be said clearly, and whereof one cannot speak, thereof one must be silent.” — Wittgenstein.

    2. I’m optimistic too. Philosophy may be reasonably old and uncertain, but science seems reasonably young and harbors various understandings that do seem valid. If progress will be needed in metaphysics, epistemology, and axiology in order for the softest areas of science to also progress, I suspect that these understandings will be achieved in mere years and decades rather than in centuries and millennia.

  16. Hi to all, another ‘shoot first, aim later’ contribution from this particular newcomer. I prefer Seth’s approach to Solms’, if only because he doesn’t aim to ‘solve’ the hard problem, only to give us a way of thinking about it and approaching it. His comparison with the way the ‘mystery of life’ evolved from being an intractable philosophical problem to being an area of scientific knowledge is helpful. Both have their strengths: Seth reaches out to cover free will (basically Dennett’s position), Solms doesn’t. Solms covers vocal thought, Seth doesn’t.
    My immediate thought on finishing Seth’s book was that, if the f.e.p. works at every level from the cell to the organism, and at each level ensures survival through entropy minimisation, one could extend the concept to the social level, to how groups ensure their survival by keeping group entropy to a minimum, harmonising aims and objectives through vocal exchanges. The entropy of ‘us and them’ groups, so to speak. To go back to Solms: social media algorithms as Markov blankets?
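
    For reference, the variational free energy that the f.e.p. says systems minimise can be written in the standard form below; extending it to groups, as I’m speculating above, goes beyond the formalism itself:

    ```latex
    F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
      = D_{\mathrm{KL}}\!\left[q(s) \,\middle\|\, p(s \mid o)\right] - \ln p(o)
    ```

    Minimising F pulls the internal model q(s) toward the true posterior p(s|o) while bounding the surprise -ln p(o), which is the sense in which each level of the system resists disorder.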
    Anyway, I’ll go back and do my homework by re-reading both books, carefully. Bye for now.

    1. Hi Chris,
      I generally agree with you on preferring Seth’s approach over Solms’. That’s not to say that Solms doesn’t have some important insights. I don’t know enough about the FEP to know if it can be extended in the way you describe, but it sounds interesting.

      Re-reading can be useful, although my advice would be to broaden your view and get other perspectives in the field. Authors to consider would be Stanislas Dehaene, Joseph LeDoux, Antonio Damasio, Christof Koch, and Michael Graziano. These authors have a range of views, often contradicting each other, but they provide a pretty good breadth of the neuroscience take on consciousness. (Sounds like you’re already familiar with Dennett so I won’t go into philosophers.) If you want more on the predictive mind, I’ve heard good things about Jakob Hohwy’s book, which I own but haven’t read yet.

      Or if you’re interested in the evolution of consciousness, Todd Feinberg and Jon Mallatt’s “Ancient Origins of Consciousness” might be worth checking out. Although if you really want to get into something, try Simona Ginsburg and Eva Jablonka’s “The Evolution of the Sensitive Soul” (honestly, everything I’ve read since has sort of felt like an also-ran). Warning: these last two can get a little technical. I’ve posted on both of them if you’re interested.

      1. …and there was me, thinking that with predictive processing, the f.e.p., self-organisation, and the controlled hallucination, I’d come to the end of a 45 year fascination with what used to be called the “mind-body problem”. Ah well. Anyway, it’s been fun, especially the years spent convinced of a non-dualist spirit based universe. So there is more to come and it will still be fun.

        1. There is a lot to be said for predictive theories. I definitely don’t want to undersell them. And some of those other sources will explore them as well, but they’ll also cover some details Seth or Solms don’t get into. And if you read G&J, you’ll get a good overview of the idea space.

  17. I just finished Seth’s “Being You” a couple of days ago. I enjoyed it very much and I also enjoyed your review of his book. It feels like Seth has made significant inroads into a subject that is near and dear to me. I liked his explanation of controlled and controlling illusions or hallucinations, but I found the terms inadequate. I would have described them more as models based on utility that we construct and tweak in order to resist the second law of thermodynamics as long as possible.
    Regarding his reservations about the functionalist approach, like you, I don’t see it as either-or between biological and silicon substrates of consciousness. True, our consciousness and that of other living beings are likely based on our biological substrates, but that doesn’t sound to me like a requirement for General AI to be conscious. I also believe that once we fully decompose the functional aspects of our consciousness, we should be able to implement those functions in software and silicon or any other non-biological substrate. I wouldn’t care whether the AI really “feels” redness, chair-ness, fear, anger, or lust, as long as it can perform the functions that those concepts and emotions perform in us. To be fair to Seth, he never ruled out functionalism. He did say our consciousness probably goes down all the way to the cellular level. IMHO we could program the same all the way down to the transistor level.

    1. How convenient, the perfect foil for my 2 cents worth. “To be fair to Seth, he never ruled out functionalism. He did say our consciousness probably goes down all the way to the cellular level. IMHO we could program the same all the way down to the transistor level.” I totally agree with Seth, and with your reply too.

      BUT – and this is important – we (computer and robotic technicians) almost certainly won’t program all the way down to that level. There’s no money to be made that way. The human brain is going to be an inferior product by the time tech is sufficiently advanced to program all the way down to that level. Evolution finds LOCAL optima. Intelligent design can do far better.

        I agree with you that we probably won’t bother programming yet another silicon-based species for general human-like AI robots or whatever. There’s no incentive to do it and many compelling incentives not to. I just asserted that I see no compelling reason why it couldn’t be done, using informed principles of functionalism. The incentives for programming special-purpose AI and robots are obvious, but we must never give up our oversight of those systems. Only humans have legal responsibility, not our AI creations. A robot can’t be tried for a murder that it commits, but its creator can be tried or sued in a court of law.

        1. Ah, but what ARE the principles of functionalism? As I understand it, the functions in functionalism are gross abilities like language use, intelligence, goal-seeking, etc. A functionalist is someone who makes analogies like pain:avoidance::knife:cutting. A nonfunctionalist like me, however, would offer something more like pain:avoidance::gold:yellowness+heaviness+nontarnishing. The cause, pain, is sufficient for the effect, avoidance, but not necessary for it. Unlike a “knife”, most concepts are not defined by a single function or compact suite of functions.

          Therefore, if you feel the need to achieve isomorphism all the way down to the cellular level AND you admit that this is not needed to achieve high-level functionality, you are not a functionalist.

          1. If there is no level too low, I think a better word for your position is “naturalist” or “physicalist”. Substrate-independence of mental properties is out the window, for one thing. Because any difference of materials will make *some* difference in the way a being interacts with the world. If I do X-ray crystallography on you, I can tell whether you’re made of silicon or hydrocarbons.

            Who cares what I’m made of? If my internal structures don’t function, then I’m unconscious and dead. Maybe there’s ultimately no difference between dead and alive states, organic and inorganic. It’s obvious that organic compounds are created from inorganic substances. The enzymes in our RNA use free energy to perform critical functions in our cells, but they are not living things. I don’t believe in the magic of our biological substrates. I believe in physics and the scientific method. Functions are the way to understand the structures in and around us.

          3. Sure, but I still think you’re trivializing “functionalism” if you allow arbitrarily fine-grained “functions” to count as making a difference to mental states. Anil Seth could claim to be a “functionalist” in the relevant sense, since life can be defined in terms of metabolism (as you mention) plus reproduction plus a few other activities. A “functionalist” could claim that carbon-based entities are conscious while silicon-based ones are not, since whether quarks + electrons make up mostly-silicon or mostly-carbon depends only on how they interact: a fine-grained function.

    2. Seth’s discussion of control, along with Ginger Campbell’s recent interview, has me reading a book on cognitive control. Refreshingly, it doesn’t look like it will dwell on the c-word at all.

      I probably should have made it more clear that Seth doesn’t rule out functionalism. He’s just suspicious of it.

      I do agree that if an AI can discriminate redness from other colors, or identify things like chairs, and make use of that information to learn and pursue goals, it doesn’t really matter whether it “experiences” them, at least unless someone can define “experience” with enough clarity for it to be a meaningful question.

      1. “… if an AI can discriminate redness from other colors, or identify things like chairs, and make use of that information to learn and pursue goals, it doesn’t really matter whether it ‘experiences’ them …”

        Functionally correct. There is, however, a big difference between visually feeling the color red and computing that a particular reflected wavelength is named ‘red’.

        1. I agree there is a difference, but only because those two things aren’t functionally the same. If an AI can do everything with its inference of redness that we can, then insisting on different language seems like euphemism.

          1. Mike … huh?

            No, there really IS a difference between wavelengths of light in the world and a visual feeling of color in a brain. I don’t believe the AI is inferring redness … it’s a direct lookup with the wavelength as input and the color as output. And an AI cannot do everything with that knowledge of redness that we can—it cannot respond to the emotional significance of the colors in a painting, for instance.
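
            That lookup picture can be made literal with a few lines of code; the band boundaries below are approximate and purely illustrative:

            ```python
            def color_name(wavelength_nm: float) -> str:
                """Compute an English label for a wavelength: a lookup, not a feeling."""
                if wavelength_nm < 380 or wavelength_nm >= 750:
                    return "not visible"
                bands = [(450, "violet"), (495, "blue"), (570, "green"),
                         (590, "yellow"), (620, "orange"), (750, "red")]
                for upper, name in bands:
                    if wavelength_nm < upper:
                        return name

            print(color_name(650))   # prints "red": a label is produced, nothing is felt
            ```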

            “Doesn’t really matter?” Perhaps I don’t really understand the point you’re making …

        2. Stephen,

          Let me help you out here. One cannot defeat the dialectic argument of a functionalist playing by the rules of functionalism. One has to take an idea being discussed out of the domain of functionalism. Let me give you an example: Computing is a function whereas feeling is not a function, it’s a substrate; and as a substrate feeling has to hold universally or it is not a necessary truth; hence pansentientism.

          Now, as far as defining experience with enough clarity for it to be a meaningful question.

          Experience; Noun:
          feeling; the substrate upon which a functional universe of possibility emerges in all of its diversity, novelty, clarity and complexity.

          1. And as far as defining substrate, I believe “The base layer of a structure” seems the closest but, then, exactly how are feelings a substrate? Yeah, I know: dictionaries and word definitions … how inconvenient.

  18. This is super interesting. I wonder if Anil Seth has given thought to the apparent relationship between consciousness and quantum particles? Like the way they operate in superposition until a conscious observer is present.

    1. Glad you found it interesting! Seth doesn’t get into interpretations of quantum mechanics, but he does identify himself as a physicalist and mostly talks in terms of classical physics. The closest he comes to discussing the relationship between quantum mechanics and consciousness is this endnote.

      The yearning for a eureka solution may partly account for the persistent appeal of theories of consciousness based on quantum mechanics, most of which trace back to the mathematician Roger Penrose’s The Emperor’s New Mind, published in 1989. While it can’t be ruled out that some future quantum-based theory may have something useful to say about consciousness, the attempts so far seem to me to evince a false syllogism: Quantum mechanics is mysterious, consciousness is mysterious, therefore they must be related.

      Seth, Anil. Being You (p. 292). Penguin Publishing Group. Kindle Edition.

    2. As far as I understand it, I don’t think a conscious observer is needed to terminate superposition, just a measurement – i.e. a certain sort of interaction with something physical such as a measuring instrument. Therefore I agree that the claimed relationship between consciousness and quantum mechanics is a red herring (except to the extent that everything turns into quantum mechanics at the lowest level of explanation).

      1. Well said Peter. I believe that scientists have verified that pretty well. And of course it would be utterly ridiculous and anthropocentric if QM effects never occurred on Earth until there were beings here attempting to measure things. But then I think Anil’s point happens to be a bit broader, in that it counters those who decide that consciousness itself must be a quantum dynamic given that both quantum mechanics and consciousness are strange. Furthermore, their strangeness appears to give that position illegitimate cover. The following link provides a 15-minute video of McFadden attacking that position by illustrating the sorts of things that actually are appropriate for quantum mechanics. https://iai.tv/video/johnjoe-mcfadden-life-in-the-quantum-universe?_auid=2020

      2. The idea of consciousness causing the wavefunction collapse goes back to speculation from John von Neumann in 1932, and later speculation by Eugene Wigner, so it’s often called the Von Neumann-Wigner interpretation. Most of the philosophical ideas about this are descendants of these speculations.

        But it’s worth noting that the scientific speculation largely occurred prior to H. Dieter Zeh’s work on quantum decoherence, which explains the disappearance of quantum wave effects through completely physical processes, even if not the actual collapse. Very few contemporary physicists see consciousness playing a role in the collapse, even among the minority that still think there is some kind of ontological collapse.
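
        A textbook way to picture this: for a superposition of states |0> and |1> with amplitudes alpha and beta, interaction with the environment drives the off-diagonal (interference) terms of the system’s density matrix toward zero,

        ```latex
        \rho =
        \begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \alpha^*\beta & |\beta|^2 \end{pmatrix}
        \;\longrightarrow\;
        \begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\beta|^2 \end{pmatrix}
        ```

        leaving a classical-looking mixture of outcomes without ever selecting one of them, which is why decoherence explains the loss of wave effects even if not the actual collapse.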

  19. Interesting talk!
    Predictive processing is an interesting view of the brain. Although the conversations in PP are complex and require a good understanding of calculus and thermodynamics/statistical mechanics, I’m doing my best to get into them as far as I can.
    Have you read ‘The emperor’s new Markov blankets’?
    Would you like to join a small academic Discord server for educational purposes? Its main purpose is to discuss topics related to cognition and behavior (including consciousness) from different perspectives and fields. Whether you want to discuss these topics from a neuroscientific, philosophical or psychological perspective, that’s alright. The same applies to the members’ ‘allegiances’, whether you are into PP, radical behaviorism, enactivism, computationalism, and so forth.

    1. Thanks! I haven’t read “The Emperor’s New Markov Blankets” yet. It looks like there’s an updated version available as a preprint, and it will eventually be published with responses from other researchers: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/emperors-new-markov-blankets/715C589A73DDF861DCF8997271DE0B8C#

      That Discord sounds interesting. My interest is a bit light right now, but I’d be open to taking a look.
