The problem with the theater of the mind metaphor

In the last post, in response to my criticism of Chalmers for relying on the standard but vague “something it is like” definition of phenomenal consciousness, someone pointed out that Chalmers has previously talked metaphorically about a movie playing in our heads, notably at the beginning of his TED talk on consciousness. I think this is an insightful point. It goes to what most people intuitively think consciousness is: a theater of the mind.

Indeed, I think what Nagel probably meant when he coined the phrase “something it is like” was “something like a theater of the mind.” It’s probably fair to say that when most people ask whether a particular animal or other system is conscious, it’s this idea of an inner theater they are intuitively thinking about. So when Nagel said we could never know what it is like to be a bat, he meant we couldn’t know what a bat’s theater of the mind is like.

So the theater metaphor does a good job of capturing the most common intuitions about consciousness. However, it also encapsulates the problems with those intuitions.

First, it implies that somewhere in the brain, there is a place where a presentation is made, and a place where an audience views that presentation. This leads a lot of people to look for those locations. The current search for phenomenal consciousness, as something separate and apart from the overall information access processing, is a key example of looking for the presentation.

But it also often leads people to hypothesize about relatively compact locations where the audience may lie. Common guesses are the brainstem, the thalamus, the hippocampus, the prefrontal cortex, the claustrum, or the posterior cortical hot zone.

The problem with this notion is that it downplays the role of the audience. However, when it comes to consciousness, the audience is arguably the most important part. Relegating it to a small section of the brain actually magnifies the difficulty of explaining what the audience actually is. It might work if the audience really were some passive entity simply receiving the presentation of the theater, but it isn’t. The audience is, ultimately, the doer of the mind, the portion that makes decisions and takes action.

This, I think, is why the theater metaphor is a target of illusionists. Daniel Dennett, in his 1991 book, Consciousness Explained, actually spends a lot of time attacking what he calls the “Cartesian theater.” And I’m pretty sure this theater is what Keith Frankish has in mind when he attacks phenomenal consciousness.

Dennett attacks the idea of the homunculus, the little person in the brain observing events. The trouble is that this can lead to an infinite regress. Does the homunculus have its own homunculus? And does that one have yet another of its own? Dennett, in later writing, does admit that as long as each nested homunculus is less sophisticated than the one above it, the regress can be broken, but that arguably is not our intuition about what’s happening.

However, while I think the illusionists are basically right, I disagree with their framing of the issue. I don’t know how helpful it is to simply point at an intuitive metaphor and say it’s wrong. I think the solution is to present a better one.

The theater metaphor often used to present Global Workspace Theory (GWT) actually tries to improve on the standard one by stipulating that both the audience and backstage support are unconscious mechanistic agents. It is their collective response to what’s being presented that makes that presentation the contents of consciousness.

But as I noted in my post on GWT, I think even this improved theater metaphor leaves the audience in too passive a role. It’s why I see a raucous meeting as a better metaphor. The meeting has a chairperson, but their control is tenuous. The chair recognizes a series of speakers, who take the floor and make a speech, sending their content into the overall room consciousness.

But the person making the speech is far from the only person talking in the room. There is a general background hum of people having side conversations with each other. While the speeches have large causal effects on the mood and sentiment in the room, the side conversations are having their more local effects too. And these effects periodically result in a speaker, or coalition of speakers, shouting down the current speaker, despite any efforts of the chairperson, and taking the floor and saying their own piece, further altering the tenor of the room.

In this metaphor, the consciousness of the room is what is being conveyed in the speeches, that is, it is what is having causal effects throughout the room. Most members of the audience have the potential to become part of the presentation at any time. There’s no central presentation, no passive audience, no one location in the room where it all comes together.
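For those who like to see a metaphor made concrete, here’s a minimal toy sketch of a meeting along these lines. To be clear, nothing in it is a claim about the brain: the member names, activation numbers, and update rules are all invented for illustration. It just shows a population of processors where whoever is currently most active takes the floor, the broadcast nudges everyone else, and the background hum of side conversations keeps shifting who will grab the floor next.

```python
import random

class Processor:
    """One member of the meeting: audience member and potential speaker."""
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)    # topics this processor cares about
        self.activation = random.random()  # how loudly it is currently "talking"

    def hear_broadcast(self, topic):
        # Broadcasts from the floor have wide causal effects: members that
        # care about the topic get excited, the rest settle down a little.
        if topic in self.interests:
            self.activation += 0.5
        else:
            self.activation *= 0.8

    def side_conversation(self):
        # The local, "unconscious" background hum keeps activations drifting.
        self.activation += random.uniform(-0.1, 0.2)

def run_meeting(members, steps=10):
    for step in range(steps):
        # Whoever is most active takes the floor; there is no fixed stage.
        speaker = max(members, key=lambda m: m.activation)
        topic = random.choice(sorted(speaker.interests))
        print(f"step {step}: {speaker.name} has the floor, topic: {topic}")
        for m in members:
            if m is not speaker:
                m.hear_broadcast(topic)  # the broadcast: contents of "consciousness"
            m.side_conversation()        # the side conversations continue regardless
        speaker.activation *= 0.5        # speakers eventually tire and yield

if __name__ == "__main__":
    run_meeting([
        Processor("vision", ["motion", "color"]),
        Processor("memory", ["color", "yesterday"]),
        Processor("planning", ["motion", "goal"]),
        Processor("affect", ["goal", "threat"]),
    ])
```

Even in something this simple, which member speaks next depends as much on the side conversations as on whoever currently holds the floor.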

So, rather than a theater of the mind, we should think of a community of the mind. This seems true even if GWT turns out not to be the right theory of consciousness, or is, as I suspect, only part of the overall picture.

Unless of course I’m missing something?

71 thoughts on “The problem with the theater of the mind metaphor”

  1. While reading this “First, it implies that somewhere in the brain, there is a place where a presentation is made, and a place where an audience views that presentation. This leads a lot of people to look for those locations.” it brought to mind the research about athletes doing rehearsals, especially mental rehearsals. So, an athlete might mime shooting a free throw in a basketball game … or they could just imagine shooting a free throw, and their success rate is increased. Studies show that even when just imagining the action, the same muscles get activated as when the free throw is actually shot.

    This implies that the situation is a two-way street. Our imaginations (the projector in a theater of the mind?) can display through the muscles of the body, creating sensations of activities that are not happening. All together this could result in something like a healthy “consciousness as theatrical play.”

    These mental rehearsals play an important part in my sport, archery. I find, however, that the imaginings, covering just a few seconds of time, are difficult even when practiced, so we tend to borrow memories of the most recent execution that is available. This connects memories to imaginings, and they might be considered “stock footage” edited into my imaginings.

    I am one of those who find accounts of lucid dreaming to be far-fetched, as I rarely have them. My dreams are chaotic and repetitive to distraction. So, I wonder about people who envision a home remodel in their minds with no help from architectural software and its ilk. I tend to think it is more fragmented than whole.

    Of course, one has to take into account that we tend to think much too much about how wonderful we are.

    1. The interesting thing about imagination and episodic memory is that they are handled by the same processes in the brain. Episodic memory is actually a recreation of a past event rather than a recording of it. It’s a simulation. That’s why our memories are so unreliable. But it makes sense if you think about why episodic memory would have evolved, to aid in predicting the future.

      So it’s definitely a two-way street. The action-oriented planning parts of the brain are always working the image-oriented parts in their imaginary deliberations. Good point. Another case where the notion of a passive audience is misleading.

      I can’t say much about lucid dreaming. I can only recall one time figuring out I was in a dream and driving myself to wake up, except then I realized I still wasn’t awake but had just dreamed I woke up, which finally got me to the stage of actually waking up. Every other dream I’ve ever had was the same chaotic mess you describe. But I rarely remember my dreams.

      Definitely the biggest human bias we have to be on guard against is human conceit. The vast majority of the time when we think the answer is that we’re special, the history of science shows we’re most likely fooling ourselves.

  2. The fragmentary storage of memories, cleverly using the parts of the brain that process those sensory inputs, essentially provides departments for our little theater: sound effects, props, scrims, costumes, etc. So a little of this and a little of that and “Lo and Behold!”

    So there need not be a viewing room, unless that is the executive function. As it is, I wonder where it is that memories get reconstructed. Is that in short-term memory, or maybe long-term memory? But then that is just where we started, so where do these memories get reassembled?

    1. From what I’ve read, it’s more a matter of where memories get orchestrated. A memory of a car is bound up with our concept of a car, plus or minus a few instance-specific details (which often get lost over time). So a memory of the car that drove by last night as we were walking on the road will involve the neural circuits in the temporal lobe that preserve that concept. Recalling the shape and color will involve higher-level visual circuitry, the same circuitry that was exercised in the event itself. It’s a galaxy of associations spanning the thalamo-cortical system, with the visual cortex having the visual aspects, the auditory cortex the auditory aspects, etc.

      The orchestration likely involves various parts of the prefrontal cortex, the hippocampus, and probably some other regions. So you could think of an episodic memory simulation as a symphony, with certain regions as the conductor, although the conductor’s hold on the individual members of the symphony isn’t always certain.

      And I’m not sure, but if the conductor is absent for some reason, the symphony can still happen, just more chaotically, like in dreams.

  3. Fascinating theories. It seems to me humans are fascinated by their own fascination.

    I suspect consciousness is simply emergent behavior given a sufficiently complex system. It’s not a “thing” or phenomenon as much as it’s a by-product of complexity and self-referential feedback loops. Humans appear to treat consciousness as if its manifestation is somehow outside the realm of chemicals and electricity piled high and switched on. This reduction, undoubtedly, comes as an affront to our sensibilities. How can our magnificent minds and their fantastical abilities be nothing more than emergent behavior of complexity?

    1. Exactly. I think a lot of the fascination with consciousness amounts to human bias and conceit, a sense that there has to be something special about the way we process information. That’s not to say that brains aren’t amazing systems, just that we often read more into them than the data shows, because we’re looking at ourselves in the most intimate fashion. Which makes it perhaps the most difficult thing there is to be objective about.

        1. >”I suspect consciousness is simply emergent behavior given a sufficiently complex system.”
          Unless we specify and prove what exactly counts as a “sufficiently complex system,” we are on shaky ground. First of all, there is no such term as “sufficiently complex system” in the theory of complex systems. Also, all mammals are complex systems. Do you attribute consciousness to all mammals, past and present? If not, where is that boundary?

          1. You’re right. Complexity is not the only metric we need to include as a determinant of consciousness. That is, unless by complex we include all the factors that allow biological life to evolve to the point of sentience. Data, memory, and learning are the keys here. Data for us comes as experience (all the sensory input we humans enjoy). Memory is persisted experience, and learning is the new neural pathways our brains are capable of creating and maintaining.
            Some say that a cerebral cortex is required for cognitive awareness. I’d say that the cortex is merely a highly optimized physical construct which allows for elevated, self-referential electrical processing. When we create its equivalent, including all the aforementioned complexity, and that creation wakes up, looks around, analyzes its environment, and considers its existence, its awareness of itself and its place in the cosmos, we would be hard-pressed not to call it conscious. (For it would call itself conscious.)

          2. As far as where the boundary lies, that is, when does a bonobo, elephant, orca, raven or octopus become self-aware and capable of individual separation of their mind, their body and the environment — that I suspect we’ll learn here in the coming decades.
            I’m personally convinced that what we call consciousness is an assembly concern. Whether the parts are biological or synthetic, if a critical mass is cogently assembled, self arises.

          3. “Whether the parts are biological or synthetic, if a critical mass is cogently assembled, self arises”. I’m not sure about the synthetic part. Current Artificial Intelligence (AI) does not have and does not need consciousness.

            The future AGI might not need consciousness at all, even when it is able to understand and imitate human consciousness. Probably, we should limit our discussions about consciousness only to “biological life on Earth” and exclude synthetic parts.

            Also, AI and AGI are complex systems. Probably no consciousness will arise in AGI even if AGI gets out of human control and controls its own complexity growth. By creating AI we, humans, already proved that intelligence can exist without consciousness.

          4. It’s all in how we define “consciousness”. If a system having models of itself and its environment, and using those models to further its goals, counts as consciousness, then that system is conscious.

            But if we constrain the scope of the goals to biological ones, or require that the system have affects in the way a biological system does, then unless the machine is effectively engineered life, it isn’t conscious.

  4. Your raucous meeting metaphor makes a lot of sense to me, personally. I often think of my decision making process as if it were a political debate, with various parties trying to argue their case, and then everyone takes a vote at the end. And just as in real life, my inner politicians aren’t necessarily honest, demagoguery sometimes wins over logic and reason, and the whole democratic process is not always 100% fair. For better or worse, though, this metaphor feels the most accurate to me.

    1. That’s a good way to describe it. In the post, I originally wrote “congress of the mind” rather than “community of the mind”, but thought people might unduly focus on that aspect of it. But you’re right, it is very much like a political debate.

  5. I also like your meeting metaphor, but I think you could make it more apt by extending it to representational government systems, like the U.S. Congress or British Parliament. So, a few points:

    As you say, most, but not all, of the audience are potential speakers. Each member has its own agenda and concerns, and frequently only pays attention to the speaker when their concerns are being addressed.

    There can be more than one meeting, and while each controls its own agenda, actions of the one can intrude on the agenda of the other (like when articles of impeachment are sent from the House to the Senate).

    There is at least one audience member who is keeping a record of what is said from the main stage and who said it.

    There can be subgroups with their own conversations, as in subcommittees, etc. You could call it a community of mind, or as Marvin Minsky called it, a Society of Mind, the point being it’s not necessarily correct to say the sub-minds are not conscious while the whole mind is conscious.

    I agree that the responses to the presentation (representation) determine the contents of consciousness. This applies at the sub-mind scale as well as the collective mind scale.

    *
    [oh, and audience members = unitrackers = cortical columns]

    1. Thanks James. I actually considered using “congress of the mind” rather than “community of the mind” in the post. Based on multiple responses, it looks like it would have been better to go with “congress.”

      I agree that we can view it as more than one meeting, although within the thalamo-cortical system, I’m not sure how clean the separation between them might be. It might amount more to huddles within the meeting room rather than completely separate meetings.

      “There is at least one audience member who is keeping a record of what is said from the main stage and who said it.”

      Could you elaborate on this? I’m curious exactly what you mean by it.

      What would you say is an example of a sub-mind?

      I’m not sure about the equating of unitrackers with audience members, although based on what you’ve told me about them, I definitely think unitrackers are involved.

      1. OK, I’m speculating a little beyond my warrant. So attach “What if, maybe, “ in front of all the following.

        I’m using my current working model to suggest the stage/podium/microphone is the thalamus (or some central sub-cortical locus). When a speaker “gets the floor”, their input is combined with zero to (not too) many other speakers to create a representation (via a semantic pointer) in the thalamus. The pattern thus created determines which audience members get triggered, which will certainly include feedback to the inputting members, plus others.

        I’m currently speculating that a cortical column = unit of audience = unitracker simply because it makes sense and fits all the data on connectivity that I’ve seen.

        Assuming this model, it is known that there is interaction between the cortical columns independent of any sub-cortical connections. These interactions seem to map pretty well to predictive processing architecture (and artificial neural network architecture) with feedforward and feedback processes. These would constitute the “unconscious” activities, (although I think “unconscious” only applies relative to a different system similar to the way conscious events in a simpler animal are “unconscious” relative to me, because I don’t have access to them).
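        As a rough illustration of the kind of feedforward/feedback loop being gestured at here, below is a toy predictive-coding style update in the spirit of Rao and Ballard’s classic model, not a claim about what columns actually compute. The matrix W, the step size, and the input are all made up for the sketch; the point is just that feedback carries a top-down prediction, feedforward carries the prediction error, and the higher-level estimate gets nudged to reduce that error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 4 lower-level "sensory" units, 2 higher-level causes.
W = rng.normal(size=(4, 2))   # generative weights: causes -> predicted input
x = rng.normal(size=4)        # a lower-level input pattern
r = np.zeros(2)               # higher-level estimate of the causes
lr = 0.05                     # step size for the inference updates

for _ in range(100):
    prediction = W @ r        # feedback: top-down prediction of the input
    error = x - prediction    # feedforward: prediction error sent upward
    r += lr * (W.T @ error)   # adjust the estimate to reduce the error

print("remaining prediction error:", np.linalg.norm(x - W @ r))
```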

        So for the audience member keeping a record, I’m referring to episodic memory, and I’m pretty sure that largely involves the hippocampus.

        BTW, I just watched this keynote given by Buzsaki and it’s definitely worth listening to if you have the time. I think his stuff is compatible with my ideas. Here’s a link (I haven’t read the review here, but it contains a link to the video): https://www.psychologicalscience.org/observer/2020-kavli-keynote-cognition

        As for what might be a sub-mind, I may be reconsidering my definition of a mind. Does something count as a mind if it can have exactly one and only one thought? That’s kind of what unitrackers are, except that a “thought” or “experience” would require two unitrackers, because the experience needs one unitracker to generate a representation vehicle and another (or some other system) to interpret the representation vehicle. I could see requiring a “mind” to have some intermediary system that can generate multiple possible representations. In this latter case, sub-minds might be associated with various functional sub-nuclei in the thalamus, perhaps giving multiple semantic pointers?

        *
        [ok, a little speculation heavy]

        1. Nothing wrong with speculation. As usual though, I think you’re putting too much on the thalamus. The biggest data point, I think, is that too many of the interconnections between cortical regions don’t go through it. Some go through other subcortical structures (amygdala, hippocampus, basal ganglia, etc.), but it sounds like many regions are also directly connected to each other.

          The unit of audience is an interesting idea, but it seems a little too clean and uniform for the brain. From what I’ve read, it’s likely much messier. But again, my knowledge of cortical columns remains scant, and I might think differently if / when I learn more.

          Thanks for the Buzsaki link! I’ll check it out. I see it has an intro from Lisa Feldman Barrett. I’d be interested to know what she thinks of Buzsaki’s ideas.

          Whenever I start wondering about the definition of a concept, I always fall back to quality dictionaries, which work to capture the way words are actually used. Merriam-Webster on “mind”:

          a: the element or complex (see COMPLEX entry 1 sense 1) of elements in an individual that feels, perceives, thinks, wills, and especially reasons
          b: the conscious mental events and capabilities in an organism
          c: the organized conscious and unconscious adaptive mental activity of an organism

          https://www.merriam-webster.com/dictionary/mind

          These definitions all seem scoped at the level of an individual or an organism. That seems to rule out something like a cortical column having its own mind. “Sub-mind” doesn’t have an entry, so we’re probably freer for now to designate various things for it without confusion.

          1. [sorry for delay, got distracted]

            On cortical columns, here (https://www.biorxiv.org/content/biorxiv/early/2020/09/10/2020.09.09.290601/F1.large.jpg) is a pretty good review along with a new theoretical explanation of how they compute. See, especially, this:
            Here, we derive a detailed and functional model of cortical microcircuits based on message-passing inference in Recursive Cortical Networks (RCN), a generative model for vision [10]
            […]
            Cortical micro-column as a binary random variable

            A fundamental organizational unit of neocortex is that of cortical columns formed by synaptically connected vertical clusters of neurons [22], and the computational role of this columnar connectivity has remained a mystery. RCN assigns a functional and computational role to cortical columns: the whole column is viewed as a binary random variable, and the neurons in different laminae perform computations that infer the posterior state of this random variable from available evidence. The random variable represented by a cortical column can correspond to a ‘feature’ or a ‘concept’—for example, an oriented line segment in V1 or the letter ‘B’, in IT. The different laminae in a particular column correspond to the inference computations that determine the participation of this feature in different contexts: (1) laterally in the context of other features at the same level, (2) hierarchically in the context of parent features, (3) hierarchically as context for child features, and (4) pooling/unpooling for invariant representations (Fig 4C). During inference, their activities represent the contributions of these different contextual evidence in support of the feature being ON. The belief that the cortical column is ON itself is represented by specific neurons in specific laminae. In the factor-graph specification of RCN, the different aspects of a feature like lateral membership or pool membership are represented by binary variables themselves, but these binary variables are copies internal to the micro-column, and these copies represent the different contextual interactions of the feature-variable represented by the micro-column.

            So the authors are saying that a cortical column, as a unit, is either ON or not. The “whole column is viewed as a binary random variable, and the neurons in different laminae perform computations that infer the posterior state of this random variable from available evidence”, and the “random variable represented by a cortical column can correspond to a ‘feature’ or a ‘concept’—for example, an oriented line segment in V1 or the letter ‘B’”. This is exactly what I would expect from a unitracker, a unit that tracks one concept. Also, each column has one output to “higher order thalamus”, whatever that is, as well as more than one input from thalamus, which makes it potentially both speaker and audience.
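            To make the quoted idea a bit more concrete, here is a toy sketch of inferring a binary “column” variable from several streams of contextual evidence, using a naive log-odds combination. The feature, the prior, and the likelihood ratios are all invented for illustration, and actual RCN inference is message passing on a factor graph rather than this independent-evidence shortcut.

```python
import math

def posterior_on(prior_on, likelihood_ratios):
    """Combine independent evidence streams in log-odds form.

    prior_on: prior probability that the feature (column) is ON.
    likelihood_ratios: P(evidence | ON) / P(evidence | OFF) for each
    contextual stream (lateral, parent, child, pooling, ...).
    """
    log_odds = math.log(prior_on / (1 - prior_on))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# A made-up "oriented line segment" column: weak prior, but lateral and
# top-down context both favor the feature being present.
print(posterior_on(0.1, [3.0, 2.0, 0.8]))   # ~0.35: the evidence pushes it up
```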

            I expect there will be variations on this theme in the various parts of the cortex, possibly with some columns (I expect early in sensory pathways, but possibly others) which connect only with other columns, but who knows. But I will wager something akin to this architecture goes throughout the cortex. Also, it makes sense with Buzsaki‘s ideas.

            *

  6. I think the Theater of the Mind is an uncharitable interpretation of “there is something it is like”. A better interpretation is simply: subjectivity.

    Ever notice how when you stick both hands in the same sink-full of water, it can feel hot to one hand and cold to the other? And not because the sink water has non-uniform temperature. That’s subjectivity. Of course, when it feels the same to both hands there’s still a subjective sensation – it just becomes less obvious in that case.

    I think the raucous meeting metaphor is great. I’m not sure how helpful it is in explaining subjectivity though. Not that it has to. We don’t need one single explanation for everything.

    1. One of the problems with taking something as vague as “something it is like” and trying to find a more specific meaning is that it’s always possible for someone to say that’s not it. The problem I see with “subjectivity” is that it’s equally vague. Your example does provide an insight, but it’s when we try to make the concept more precise that the disagreements arise.

      Thanks Paul. I totally agree that one metaphor isn’t going to answer it all. One of the frequent problems with discussions about consciousness is an unwillingness to discuss how complex it is, how many hidden layers of functionality are at work.

      1. But it’s OK to deploy concepts which allow disagreements to arise. More than OK; most arguments should not be settled by definition. Definitions should allow room for empirical investigation and/or room for ineliminable vagueness.

        1. I think there’s a difference between a definition which begs the question, and one that is precise yet remains theory neutral. The problem is vague or ambiguous language frequently hides problematic notions, not to mention hidden disagreements.

          Only by trying for a definition with more clarity, do we expose those issues. This is one area where I think philosophy can provide a valuable service, but which it too often whiffs on. It’s why, despite disagreeing with his conclusions on consciousness, I find Chalmers’ writing interesting. He does a good job at clarifying the position landscape.

          None of which means we shouldn’t be willing to modify our definitions as new information comes in.

          1. The problem is that there is a huge gulf between question-begging definitions on one shore and precise-yet-theory-neutral ones on the other. And almost all of the verbal/conceptual action happens in that gulf.

  7. >First, it implies that somewhere in the brain, there is a place where a presentation is made, and a place where an audience views that presentation. This leads a lot of people to look for those locations.

    That may be the naïve view of some, but I think most neuroscientists looking for NCCs are trying to understand the mechanism by which the brain creates experience. That mechanism could be primarily in some particular structure, but it could also be spread all over the brain in particular sorts of neurons, connections, patterns of neurons firing, or something else. The lack of any compelling idea for an NCC seems to me to be the major defect in most theories. If the argument is that the brain produces subjective experience, it must have some physical mechanism. Hand waving with metaphors really doesn’t explain it.

    1. One of Dennett’s points in his book, which I agree with, is that everyone intellectually disowns such a naive view. But have a conversation with most people about consciousness, and the theater becomes an implicit context. What else is the search for a phenomenal consciousness independent of access consciousness but a quest for the movie screen, the performance stage, an attempt to catch the presentation in action?

      I think in the quest to understand experience, we need to be open to the idea that our naive intuitions about it are wrong. Once we’re willing to go there, the results from neuroscience look much more promising.

      Making accusations of “hand waving” without identifying what is missing is simply hand waving away a disliked proposition.

      1. I thought I was explicit about what is missing – the physical mechanism.

        The first part of your response is fairly incoherent. Who is Dennett’s “everyone”? Neuroscientists, philosophers, random people on the street? And then are the “most people” the same as Dennett’s “everyone”? We humans rely a great deal on vision, and we westerners watch a lot of TV, so the movie screen metaphor is something we easily fall into using.

        You’ve mentioned our naïve intuitions being wrong many times, but once you’ve declared an intuition naïve, saying it is wrong is almost redundant. Which of our many intuitions are wrong?

        1. I can’t find the passage, but I think Dennett was mainly referring to philosophers of mind. “Most people” is my phrase, and it refers to philosophers and people I interact with about consciousness.

          On intuitions, well, the point of this post is that the intuition that somewhere in the brain there is a screen, stage, or other kind of presentation area distinct from an audience, is wrong. You yourself have said the brain shouldn’t be regarded as a passive system.

          1. I think that particular “intuition” is very culturally determined. I doubt this is the view of people in non-Western societies and possibly not even the view of people in Western societies prior to the invention of movies. It may prevail in Dennett’s circle of friends and colleagues but I’m not even sure it goes beyond being regarded at best as a rough metaphor among neuroscientists who are preoccupied with the matter.

            In Freud’s day engines were the new technology so his psyche model is based on energy flows and balances. In our day our technology is information technology so we see the psyche in terms of information. No escaping our milieu.

          2. The intuition may be culturally specific. One of the reasons I went with “theater of the mind” rather than “movie in the mind” is because theaters with plays are far older, going back to ancient times. I suspect it’s why Dennett called it the “Cartesian theater” rather than “Cartesian movie”.

            But theaters do seem like a western concept. It’s debatable whether classical Greeks conceived of consciousness in the modern sense, but it’s striking that any writing we might take to be about consciousness arises after the development of theatrical plays. Certainly plays had been around for millennia before the modern writing about it.

            The common argument that the way we understand brains today is no better than the way people in past centuries with older technologies understood them is misguided. In truth, some of the metaphors people used in the past were accurate, as far as they went. But understanding the nervous system as a computational system predates computer technology, and actually influenced its design.

          3. Finally found that passage from Dennett:

            Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of “presentation” in experience because what happens there is what you are conscious of. Perhaps no one today explicitly endorses Cartesian materialism. Many theorists would insist that they have explicitly rejected such an obviously bad idea. But as we shall see, the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized.

            Dennett, Daniel C.. Consciousness Explained (p. 107). Little, Brown and Company. Kindle Edition.

          4. I don’t know that it haunts me all that much. If it is theatre, it is probably more like a Theatre of Cruelty.

            “Artaud wanted to abolish the stage and auditorium, and to do away with sets and props and masks. He envisioned the performance space as an empty room with the audience seated in the center and the actors performing all around them. The stage effects included overwhelming sounds and bright lights in order to stun the audience’s sensibilities and completely immerse them in the theatrical experience. Artaud believed that he could erode an audience’s resistance by using these methods, “addressed first of all to the senses rather than to the mind,” because, “the public thinks first with all of its senses.”

            https://en.wikipedia.org/wiki/Theatre_of_Cruelty

  8. The Cartesian Me “is” the movie that is playing and ironically this movie is switched on and off by the brain during periods of sleep. So I guess the questions that need to be resolved are as follows:
    1. Is the Cartesian Me a separate and distinct system that emerges from the biological brain?
    2. Is this biological brain itself a conscious system separate and distinct from the Cartesian Me?

    This synthesis does not suggest dualism, it merely points out the distinction of separate physical systems, a postulate that suggests that it’s physical systems all the way down.

    1. Thanks Lee. Taking the Cartesian Me to be a pre-theoretical conception of self, I see the answer to both 1 and 2 as no. For me, the Cartesian Me is part of what the brain does in interaction with its environment, including the body.

      1. You missed my point, Mike: The Cartesian Me is NOT the “pre-theoretical conception of self”; it literally “is” the self. According to this synthesis, the Cartesian Me has to be a separate and distinct system. Furthermore, this Cartesian Me is switched on & off by another system. You can call it the ghost in the machine if you want, but the ghost is not a ghost, it is literally another physical system. Now, Idealists would never, under any circumstances, accept this synthesis, but I find it ironic that a type A materialist like yourself would have a problem with such a rendition.

        We know that systems are emergent all the way up the ladder of complexity, so what justification would there be to reject the notion that the system we know as mind is just another emergent system, one that just happens to emerge from the brain? None of this contradicts a materialistic framework??!!

        Without this bold synthetic a priori assessment, neurologists are caught in an intellectual vortex, a vortex that is created by their original postulate, and that postulate being that the system of mind and the system of brain are the same system. That postulate irrevocably reduces to mind being a self-caused cause, a synthesis that just becomes another lame tautology, another argument to absurdity.

        Let’s be clear; the burden of proof does not lie with my postulate, the burden of proof lies with the inverse; one has to rationally or empirically justify why this could “NOT” be the case.

        1. Lee,
          I wasn’t sure I knew what you meant by “Cartesian Me”, which is why I was cautious. Unfortunately, I’m still not sure what you mean.

          The reason I said “no” was that you stipulated “a separate and distinct” system. That sounds like more than just emergence. It’s the physical generation of something. If you’d maybe said an emergent system in the brain or its operations, I might have been on board, albeit with the clarification that, for me, it would only be weak emergence. (Eric might be on board with something generative, given his fascination with EM theories.)

          If we’re talking strong emergence, then that’s not Type A materialism. It seems more in line with Type B, which sees consciousness as completely physical yet irreducible to physics. The Type A camp sees it all ultimately reducible.

          Usually the burden of proof falls on the person making the positive assertion.

          1. Regarding what type of physicalist I might be under the Chalmers system, my friend Liam (who calls himself a dualist in the theme of Chalmers), classifies me as “type C”. This is to say that I don’t consider the p-zombie logically impossible, or metaphysically impossible, but just plain impossible given my perception of causal dynamics. I haven’t looked too much into it. From the video in the following post he questions me about this at minute 21: https://physicalethics.wordpress.com/2020/04/10/what-i-stand-for-part-one/

          2. Not sure if I knew about type-C materialism, but from Chalmers’ descriptions, a type-C materialist seems to be a type-B materialist who thinks we might reach type-A materialism in the future.

            According to type-A materialism, there is no epistemic gap between physical and phenomenal truths; or at least, any apparent epistemic gap is easily closed. According to this view, it is not conceivable (at least on reflection) that there be duplicates of conscious beings that have absent or inverted conscious states. On this view, there are no phenomenal truths of which Mary is ignorant in principle from inside her black-and-white room (when she leaves the room, she gains at most an ability). And on this view, on reflection there is no “hard problem” of explaining consciousness that remains once one has solved the easy problems of explaining the various cognitive, behavioral, and environmental functions.

            …According to type-B materialism, there is an epistemic gap between the physical and phenomenal domains, but there is no ontological gap. According to this view, zombies and the like are conceivable, but they are not metaphysically possible. On this view, Mary is ignorant of some phenomenal truths from inside her room, but nevertheless these truths concern an underlying physical reality (when she leaves the room, she learns old facts in a new way). And on this view, while there is a hard problem distinct from the easy problems, it does not correspond to a distinct ontological domain.

            …According to type-C materialism, there is a deep epistemic gap between the physical and phenomenal domains, but it is closable in principle. On this view, zombies and the like are conceivable for us now, but they will not be conceivable in the limit. On this view, it currently seems that Mary lacks information about the phenomenal, but in the limit there would be no information that she lacks. And on this view, while we cannot see now how to solve the hard problem in physical terms, the problem is solvable in principle.

            * http://consc.net/papers/nature.pdf

            Something to be wary of concerning Chalmers here is that he’s making up distinctions that he considers wrong. So there may be implicit implausible elements to each of them, whether designed or not. One of the problems that I have with these pop philosophers like him and Dennett is that they write up these difficult-to-grasp meme exercises, and the public really takes them to heart. People seem to become proud of themselves for deciding that they’re smart enough to understand what these distinguished intellectuals are supposedly saying, and so start worshiping them like they’re great sports stars or whatever. It’s a sort of “my team is better than your team” kind of thing. Conversely I enjoy people who instead speak to be understood rather than devise all sorts of encrypted academic word games, or the opposite of what the market seems to pay for.

            Anyway to play along with the game (since yes, I am indeed a consciousness dork), let’s see. If type A means the p-zombie is “inconceivable”, then that’s clearly nonsense. Would he like a chance to rephrase? It’s not like we can’t “conceive” of something which functions like us without phenomenal experience! If type B means that it’s merely not metaphysically possible for them to exist, I guess that sounds about right to me. It does align with my own metaphysics anyway. On type C, if someone can initially conceive of something which acts like it has phenomenal experience without having it, how “in the limit” is this going to change? I guess he’ll need to fix that “conceive” term in “A” before “C” can be something other than obviously “No”.

            Here’s what I actually believe: The brain in general functions as a non-conscious computer from which to help operate a given organism. But just as our state of the art robots tend to fail under more “open” circumstances, evolution also wasn’t able to build its non-conscious organisms to function well under such circumstances. So beyond this main computer it must have developed a tiny auxiliary “purpose based” computer by which existence may be experienced, or something which is motivated to function by means of positive to negative valences. This would be a computer which doesn’t act in itself, though the vast non-conscious brain host uses its processing as a means from which to better base its own programming function. Notice that you personally don’t operate your muscles, but rather decide to operate them and depend upon your non-conscious brain to grant you those desires (which it usually does).

            If evolution could have developed p-zombie organisms to be effective in more open environments, then it clearly would have. Thus I think we should presume that it didn’t because it couldn’t. Therefore we don’t have p-zombies. If Chalmers has a letter for such a monist then that’s my choice.

            Where might the “p-zombie” line exist regarding organisms on our planet? It’s hard to say right now, though I personally suspect that even many insects have at least some element of phenomenal experience. I appreciate recent stories about how bumble bees are now documented to use “vision” to recognize something that they’ve only tactilely “felt”. To me this suggests qualia-based correlations rather than programming complexity. Not that it should be impossible for something to be programmed so that its body sensors correlate with its cameras, but to me this seems like a stretch.

            I know what you mean about Chalmers’ descriptions. But it’s interesting how much of the stuff he considers implausible, or a strike against a certain proposition, I actually see as simply reality.

            I mostly filter out what he says about zombies. Short of dualism, I consider the concept incoherent. It’s like saying we can have a heart without having a pump, or muscles without a mechanism for producing motive action.

            I do think he means something different with “conceivable” than the common colloquial meaning of that word. He seems to be using it in a manner equivalent to “logically coherent”. Maybe there’s some philosophical precedent for using it this way I’m not familiar with, but using a common word in an uncommon way always strikes me as misleading.

          5. Mike,
            You can guess that Chalmers meant something with the “conceivable” term that isn’t obviously false, though it also doesn’t hurt him when people do take it literally. That’s clearly the way Liam took it. Then apparently Chalmers won him over enough to reject physicalism. Not only does a false trichotomy exist here, but one with all sorts of potential interpretations.

            I don’t want to be too harsh with Chalmers however because I consider his work to fit well under the domain of traditional philosophy. This is to say that it isn’t about humanity figuring anything out in the end, but rather about developing material to potentially appreciate. It’s more art than science.

            I propose the development of a community of “meta scientists”. They’d present various generally accepted principles of philosophy so that scientists could do their work more effectively than it’s done today. My relevant metaphysical suggestion here is that if causality fails, as in the case of phenomenal experience not ultimately reducing back to physics, then nothing would exist to discover. And if it’s the case that there’s nothing to discover here, then it would seem pointless to try. If a given scientist did want to go that way however, they’d need to do so under a second variety of science which is open to causality voids. Ontological interpretations of Heisenberg’s uncertainty principle would be considered outside of mainstream science as well. Given that standard science lacks this founding principle however, standard scientists today are free to dabble in the supernatural.

          6. Eric,
            I mentioned to someone else recently that one of the reasons I find Chalmers’ writing interesting, despite disagreeing with many of his stances, is he does a pretty good job describing the position landscape. When he describes “type-A materialism”, it’s mostly a fair description of my actual view. And people have said that the type-B materialism is a fair description of theirs. In my mind, this is what philosophy at its best excels at, clarifying concepts and definitions.

            It’s been a while since we discussed your aspirations for science. They seem to add philosophical tests for scientific theories. You’re not proposing the old Aristotelian tests, but your tests would still involve throwing out scientific theories, like Heisenberg’s uncertainty principle, that work in predicting observations.

            While I agree that scientific theories should strive for a causal account, I’m also aware that sometimes the current data just isn’t there yet, and difficulties often have to be bracketed to find a path forward. It’s what Newton had to do with gravity, Darwin with inherited traits, and Bohr and Heisenberg with quantum physics. It seems like saddling science with philosophical tests would just unduly constrain it. I think science should be pragmatic and focus on what works.

          7. Mike,
            To me Chalmers hasn’t done a good job with these classifications. Don’t forget that earlier you said, “using a common word in an uncommon way [in this case for “conceivable”] always strikes me as misleading.” Instead I appreciate clear speech that doesn’t leave the listener with all sorts of vague notions of what might be meant. Furthermore some may fall for his false trichotomy and so decide that dualism must be the best answer. Surely we can do better than Chalmers, though he does seem to add to the mystique of philosophy.

            You don’t seem to quite grasp what I mean with “meta science” (a label I recently fabricated to help differentiate this from traditional philosophy). It’s not about “throwing out scientific theories”, but rather the creation of a respected professional community with the sole mission of helping science function better than it does today through various generally accepted principles of metaphysics, epistemology, and axiology. This would provide scientists with boundaries in terms of what they study (or metaphysics), how they study it (or epistemology), and also address theorized “value to existence” (or axiology). Conversely today there is little agreement regarding such questions, and so scientists can only wing it until a respected community can nail down effective principles in these regards. Just as we have scientists who develop various agreements regarding the domains that they explore (and whether or not those agreements are always effective), here scientists would develop various agreements regarding the boundaries of science itself (and also here, regardless of how well they do that job).

            Note that such a community might agree upon things which I oppose. Still I do have four principles of meta science that I suspect would be effective. I mentioned my single principle of metaphysics, but may not have been clear enough. It states that it’s not conceptually possible to figure out how things function when causality is absent, and thus limits standard science to ontological naturalism. Any scientist unable to tolerate this stipulation would be free to continue working under a more open classification of science. This would be a “natural plus” distinction.

            More specifically however, Heisenberg’s uncertainty principle would not be thrown out under such a plan. It would remain specifically because the theory seems effective. That’s all any science can potentially have. I guess you didn’t grasp what I meant by “ontological interpretations of Heisenberg’s uncertainty principle”. I meant “fundamental randomness”. While every other event in the natural world is presumed to have causal antecedents which mandate its existence, a perfectly random event would not have such a premise from which to occur. Thus my single principle of metaphysics would be violated. Conversely an epistemic interpretation of HUP puts the theory square with the rest of science. I’m not sure why some supposed naturalists consider it more than an approximation.

          8. Eric,
            On Chalmers and conceivability, I also noted that I wasn’t sure about precedent in philosophical literature. A quick Google of “philosophy conceivable” gets the SEP on zombies as the first hit, but the second hit is this exchange: https://philosophy.stackexchange.com/questions/10767/what-do-philosophers-mean-by-conceivable

            I think I do grasp what you mean by “meta-science”. First, I think we already have that. It’s the philosophy of science. Much of what you discuss would fall within it.

            The issue is I think we already have a “respected professional community”, two in fact: philosophers of science, and the scientists themselves, along with the journals they publish through and all the standards used to review submissions. As I noted in my post on Strevens book (Strevens being a philosopher of science), the rule about empirical reasoning is centuries old, not because it’s a sacred axiom, but because it works in establishing knowledge with some degree of reliability. It’s lasted because it works.

            On the Heisenberg uncertainty principle, people more familiar with it than me have attempted to explain it to you. All I’ll say is you seem to be conflating it with the wave function collapse, which isn’t quite the same thing. (I’m not a fan of the collapse postulate myself, although rejecting it comes with implications.) No one talks about sound waves collapsing, yet they still have Fourier conjugates, and so reputedly a sort of uncertainty principle, because of their wave structure.

            It’s worth noting that no scientific theory is guaranteed to be ontological. There are theories we commonly regard as realist ones and others as antireal, instrumental, or epistemic. But it’s always possible that in the future we’ll discover that a realist one was only epistemic after all. In the end, all we have are prediction instruments. (I still like theorists to shoot for realism, but we often can’t know whether they’ve truly hit it.)

          9. Mike,
            From that SEP article I see that apparently the conceivability argument is: 1. Zombies are conceivable. 2. Whatever is conceivable is possible. 3. Therefore zombies are possible.

            That’s bullshit of course. Only what’s real is possible in the end, and regardless of any clever argument. A good way to fix this would be to finish each statement with “to us”.

            “User 5172” in the conversation that you shared presented a relatively “ordinary language” and thus normal definition for “conceive”. I cannot conceive of a round square because, as I understand these shapes, one contradicts the other. And if there were a question that I wasn’t sure about, which does ultimately have a false answer, that answer might still be conceivable to me given my ignorance itself.

            So given user 5172’s conception, can you “conceive” of something which in all ways appears to have phenomenal experience, but does not have it? And if you cannot conceive of such a thing, then why?

            Not only can I conceive of such a thing, but many things that I consider ridiculous though not impossible. It’s instead my metaphysics which dissuades me from notions like p-zombies, gods, and so on, not their conceivability. In any case this does still seem like standard philosophy to me rather than the “meta science” which I propose. It’s stuff to potentially appreciate rather than get anywhere with.

            I agree with you that philosophers of science and scientists themselves explore the realms of metaphysics, epistemology, and axiology. What you didn’t get into however was the second required component that I demand, or the generally accepted principles which such a community would need to develop. That’s actually the whole point. For example Sabine Hossenfelder would absolutely love it if there were “an iron rule of empiricism”, contra Strevens. Instead some distinguished modern physicists argue for “beauty” to help determine what’s effective to believe. But more importantly I suspect that our soft sciences should find it difficult to harden without effective principles of metaphysics, epistemology, and axiology to use.

            On the uncertainty principle, what am I missing? It essentially states that there is “a fundamental limit to the accuracy with which the values for certain pairs of physical quantities of a particle, such as position, x, and momentum, p, can be predicted from initial conditions” (which I copied from somewhere). The backstory is that matter seems like neither “particle” nor “wave”, but both at once. Thus greater measurement accuracy for us in one regard seems to provide greater uncertainty for us in the other.

            Regardless this wave / matter duality principle doesn’t explain why our measurements go this way, which would instead be an interpretation of the HUP. My observation is that if someone interprets it to reflect a fundamental randomness regarding reality (or where a given event is not ultimately caused to occur exactly the way it ends up), then this void in causality may effectively be termed “magic”. I have yet to meet up with anyone with a coherent ability to deny this. Over the years of my blogging some physicists and non-physicists have taken exception with that reduction, perhaps not initially grasping its tightness. Do they ever realize this and retreat to the safety of “all theory of our world is epistemic”? Nah. People tend to get into these discussions in order to beat their opponents rather than concede anything.

            As I understand it when people say “wave function collapse”, they mean that upon actual measurement, other potential quantum states of being that we might have expected, no longer exist. (Well except for people like Sean Carroll, who put all these unfulfilled states of being into shiny new universes otherwise the same as our own!). Anyway I don’t see how I could have been conflating wave function collapse with the HUP itself.

          10. Eric,
            It sounds like you might have missed 5172’s distinction between conceivable and imaginable. Under their definition, I can’t conceive of a p-zombie except under psycho-physical parallel dualism (mind and matter do not interact but are still somehow in sync), because the concept is otherwise logically incoherent. (For details: https://selfawarepatterns.com/2016/10/03/the-problems-with-philosophical-zombies/ ) Again, under their distinction, I can imagine such a thing, but along the lines you discussed, I can also imagine Santa Claus, a being who in one night can visit every household in the world.

            On the iron rule, Strevens’ point is that it exists in official scientific communications, but not in private or popular musings. So a scientist can use beauty to judge a theory if they want to, but they can’t publish that in an actual scientific paper. There, they have to refer to empirical data or the logical and mathematical extrapolations from it. Which is to say, there are generally accepted principles, although they’re more complex and varied than philosophers typically acknowledge, and are constantly being revised, themselves a product of science.

            On the HUP, it’s related to the wave nature of quantum objects, and actually applies to any wave system, not just quantum ones. But it’s hard to get into without diagrams. Luckily, someone did some.
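
            If it helps, here’s a minimal numerical sketch of that point (just an illustration I’m adding, nothing quantum-specific is assumed beyond the wave description): it builds Gaussian wave packets of different widths and checks that the product of the position spread and the spatial-frequency spread never drops below the Fourier bound of 1/2, which becomes the familiar σx·σp ≥ ħ/2 once frequency is scaled by ħ.

```python
import numpy as np

# Minimal sketch: the uncertainty relation as a generic Fourier/wave property.
# For a Gaussian packet, the product of the position spread (sigma_x) and the
# spatial-frequency spread (sigma_k) sits at the lower bound of 1/2; scaling
# k by hbar turns this into the familiar sigma_x * sigma_p >= hbar/2.

x = np.linspace(-50.0, 50.0, 2**14)
dx = x[1] - x[0]

def spreads(width):
    """Return (sigma_x, sigma_k) for a Gaussian wave packet of the given width."""
    psi = np.exp(-x**2 / (4.0 * width**2))           # position-space amplitude
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

    prob_x = np.abs(psi)**2
    sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)    # packet is centered at x = 0

    # The Fourier transform gives the spatial-frequency (momentum) distribution.
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    dk = k[1] - k[0]
    prob_k = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    prob_k /= np.sum(prob_k) * dk                    # normalize in k-space
    sigma_k = np.sqrt(np.sum(k**2 * prob_k) * dk)

    return sigma_x, sigma_k

for width in (0.5, 1.0, 2.0):
    sx, sk = spreads(width)
    print(f"width={width}: sigma_x * sigma_k = {sx * sk:.3f}  (lower bound 0.5)")
```

            Squeezing the packet in position broadens it in frequency and vice versa, so the product stays pinned at the bound; for non-Gaussian shapes it only gets larger.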

            You conflated the HUP and collapse in the discussion of randomness, which only comes about if there is a wave function collapse. No collapse, no randomness. Note that the deterministic interpretations are the ones with no collapse: https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics#Comparisons But these interpretations still have to deal with the uncertainty principle.

          11. Mike,
            All I see regarding 5172’s reference to “imagination” is that for some things he/she can conceive of, such as a 1000-sided polygon versus a 999-sided polygon, he/she can’t quite imagine the difference. So here he/she seems to mean something like “picture”. And even if not imaginable, it would remain conceivable. Furthermore, he/she even seems to open the door to false things being conceivable as long as a given person doesn’t happen to be so enlightened, as in “conceivable to a person who happens to be ignorant in that regard”. The only true constraint here seems to be a contradiction in terms, such as a round square (that is, if a person also grasps what we mean by these terms).

            Anyway I’m pleased with your assertion that you can conceive of the philosophical zombie under psycho-physical parallel dualism. Me too! Thus in Chalmers’ scheme we wouldn’t be type-A materialists, or even type-C materialists I think. We’d be type-B, or as Chalmers said, “zombies and the like are conceivable, but they are not metaphysically possible.” Good old Saint Nick probably wouldn’t have gotten too far in our culture with merely the conceivability of a round square!

            On Strevens, I do consider his rule to generally be correct, though not worthy of the “iron” title. Unless Hossenfelder has been telling us lies about how modern physics is formally pursued today, his rule doesn’t seem to deserve it. Perhaps if modern physics were excluded? And even if we do have a modern community of epistemologists who are utterly smitten by the virtues of empiricism, then my point becomes that we need principles which are far more extensive than this one feature, and the same in all facets of meta science.

            On “no collapse, no randomness”, I guess that’s fine with me. But what you’re talking about here are QM interpretations. I don’t have one. My reduction concerns QM interpretations in general. If someone decides that QM reflects a fundamental randomness in reality, then I argue that this should effectively be considered a belief in supernatural dynamics. It seems to me that physicists tend to only get ontological when they speak of QM. I think they should exclusively make epistemic proclamations, here as well.

          12. Eric,
            “The only true constraint here seems to be a contradiction in terms, such as a round square (that is if a person also grasps what we mean by these terms)”

            Given that Chalmers’ assertion is that conceivability is what makes something logically possible, I think that’s the constraint that makes sense.

            I don’t think my admitting zombies are conceivable under psycho-physical parallelism is enough to put me in the type-B or type-C camps. First, I regard psycho-physical parallelism as akin to deism, a heavily compromised concept formulated to save some semblance of the original. I can’t prove deism isn’t true, but there’s no positive reason to suppose it is true. Same for psycho-physical parallelism. And short of some variant of theism, parallelism seems profoundly improbable. Second, I still regard the hard problem as a category mistake, and see nothing preventing consciousness from being fully reducible to physics.

            From Chalmers:

            The characteristic feature of the type-A materialist is the view that on reflection there is nothing in the vicinity of consciousness that needs explaining over and above explaining the various functions: to explain these things is to explain everything in the vicinity that needs to be explained.

            http://consc.net/papers/nature.html#:~:text=The%20characteristic%20feature%20of%20the,that%20needs%20to%20be%20explained.

            That’s pretty much my view.

            “Unless Hossenfelder has been telling us lies about how modern physics is formally pursued today”

            I think people like Hossenfelder and Baggott are misleading about what actually happens in most of physics. They imply that physicists are claiming speculative theories as settled science. However, exceedingly few people actually take that stance. It’s worth noting this statement from Brian Greene (a frequent target of this crowd) at the beginning of his book on multiverse theories:

            The subject of parallel universes is highly speculative. No experiment or observation has established that any version of the idea is realized in nature. So my point in writing this book is not to convince you that we’re part of a multiverse. I’m not convinced—and, speaking generally, no one should be convinced—of anything not supported by hard data.

            Greene, Brian. The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos. Knopf Doubleday Publishing Group. Kindle Edition.

            What this crowd seems to object to is theoretical exploration. But as I’ve noted before, far too many successful theories start out as exactly this kind of exploration. No one knew how to test Copernicus’ theory when he proposed it in 1543. It couldn’t be tested until 76 years later.

            That’s not to say that some theories aren’t far more speculative than others, and so are on shaky ground. But too often they attack any speculation as being as bad as the worst examples.

            On QM, any ontological statement can be interpreted as an epistemic one. The ones that can only be interpreted epistemically may be correct as far as they go, but they do little to inspire experimental research. Science is more than just reliable knowledge, it’s the pursuit of reliable knowledge. Sitting comfortably in the knowledge we currently have doesn’t lead to progress.

          13. Sounds good Mike.

            By the way, I think I mentioned Sergio over here somewhere, and he did get a bit of a response from Chalmers to his strong emergence question. He’s still confused, however, which is exactly what I predicted in the previous post he’d be after a Chalmers answer. I told Sergio that he’s been too hesitant to characterize Chalmers in a way that we consider unflattering. https://sergiograziosi.wordpress.com/2020/12/05/correction-on-what-i-did-not-understand-about-chalmers-concept-of-strong-emergence/#comment-4558

  9. Mike,
    Just an FYI, I don’t have a problem with any of Eric’s remarks. We are good friends in a way that others wouldn’t necessarily understand, so it’s cool.

  10. Mike,
    Just to clarify, it is weak emergence, and furthermore that “separate and distinct system” occurs within the physiology and confines of the physical brain. In addition, my model would also fit the profile of type-A materialism, where everything is ultimately reducible to what I refer to as an Irreducible Imperative. (A different subject.)

    Here are some additional anecdotes to consider:

    As a system, the brain has two intrinsic functions. First, the brain manages all of the involuntary systems of its biology as well as all of the voluntary systems managed by the mind. Second, the brain generates another system called mind, a “separate and distinct system” that emerges from the neural processing of the first system (the brain). Mind is a system of unprecedented power, a power not found anywhere in the natural world. Nevertheless, this unprecedented power comes with a catch, and that catch is called subjective experience.

    Since the system of mind has a subjective experience, in contrast to the brain’s objective experience, the system of mind cannot be relied upon nor trusted to manage the objective systems of its own biology, a biology that is absolutely essential for its own survival. Furthermore, as a single system, the brain cannot have both a subjective experience and an objective experience. That would be a contradiction of unprecedented proportions. In order to avoid this built-in paradox of a tautology, two systems are required, one that has an objective experience and another that has a subjective experience.

    We are now confronted with the ultimate paradox built into the architecture of the mind/matter dichotomy: a twisted, subjective experiencer with unprecedented power in an otherwise objective reality.

    1. Thanks Lee. That does sound like it might be more plausible, although the Irreducible Imperative sounds like something I might have concerns about.

      How would you describe the difference between subjective and objective experience?

        1. Positing an Irreducible Imperative is post-graduate metaphysics, so it’s best to leave that hypothesis on the shelf in order to avoid obfuscating what we are discussing here.

        An objective experience of any given system would reflect the true nature of reality, whereas a subjective experience is subordinate to interpretation, and those interpretations are expressed as intellectual models. Furthermore, the interpretation of any given experience is determined by a system with unprecedented power, i.e., the system of mind. Clearly, a subjective system could not be trusted nor relied upon to run the vital, objective systems that support our own biology, hence the separation of powers.

          1. It would be very interesting to see what other individuals’ take on this would be. The fundamental question being: Is it possible for a single system to have both an objective experience as well as a subjective experience at the same time, and if so why?

          2. The short answer is yes, and for the most part subjective experience gets it right or we wouldn’t have survived as a species.

            Regarding metaphysics and the inquiry into the true nature of being, namely the mind/matter dichotomy, the short answer is again yes, it is possible.

          3. “Is it possible for a single system to have both an objective experience as well as a subjective experience at the same time, and if so why?”

            What always concerns me with the subjective/objective debate is that the subjective very quickly becomes objectified. For example, some might say that there are n objective brains in the world each with a subjective viewpoint. But then this is saying we have n subjective viewpoints and immediately we have objectified the subjective.

            The subjective as I understand it is a viewpoint which only I have at the moment. There is only one conch so to speak. I might hand that conch over to you as you speak/write and you might hand it back. But to talk of each brain having a subjective viewpoint I think is missing the point.

            So I wouldn’t say that a single system can have an objective and subjective experience at the same time. There is only one subjective experience in the universe and at the moment that is with me.

          4. Hi Dr. Michael,
            I’m not sure if I’m following your point here. When you say “conch”, what do you mean by that word? Or that there is only one subjective experience? Are you saying there’s only one in any one person’s universe? Or in the universe?

          5. Hi Mike, a conch is a large sea shell. The writer William Golding used it in his novel “Lord of the Flies”. There was only one conch on the island, and whoever had the conch had the right to speak.

            Perhaps not the best analogy, but I was trying to suggest that one’s subjective experience is not something we can objectify. If I say that all humans potentially have “a subjective experience” like mine, this term tries to objectify something I experience. But then it completely destroys the meaning of “subjective”. My subjective experience, the one I’m having now and the one I will always have, is qualitatively different from anything objective in the world.

            “Are you saying there’s only one in any one person’s universe? Or in the universe?”
            – Good question. My guess is that the two are the same. My personal universe is the universe. There is no way I can subjectively experience it any other way. Best, Mike.

          6. Thanks for the clarification. If I’m understanding correctly, your point is that subjectivity is always relative to the subject. I can see what you’re saying.

            Although I wonder about a case where someone says they love Byzantine iconoclast art, but someone else says they find it repulsive. Wouldn’t these be cases where we would refer to their preferences as subjective, even if we were a third-party observer with no particular stance on the issue?

            “Although I wonder about a case where someone says they love Byzantine iconoclast art, but someone else says they find it repulsive. Wouldn’t these be cases where we would refer to their preferences as subjective, even if we were a third-party observer with no particular stance on the issue?”

            – I would classify this notion of ‘subjective’ as different to the one I was talking about. This notion is concerned with judgments that might be classed as ‘biased’, i.e. subjective.

            My notion was concerning the viewpoint we have. My first-person perspective is my ‘subjective’ experience. I can only ever see the world from the first-person.
