The urge to downplay the brain

For much of human history, most people thought the seat of the soul was in the heart. Some ancient thinkers managed to figure out the role of the brain, but widespread acceptance of that role came only in the early modern period, with the scientific revolution. And it seems to be something a lot of people have never fully made peace with.

In recent years, there’s been a movement toward the idea of embodied cognition, the notion that thinking is a full-body phenomenon. I think this movement makes some important points. Considering the brain outside of its embodied context can lead to a lot of problematic notions. The vast majority of what goes on in the brain is concerned with the body’s sensory inputs, motor outputs, and overall homeostatic state.

That said, many in the movement seem to get carried away with this principle. They see it as a trump card against artificial intelligence and ideas about mind uploading. I think the reality is that embodied cognition has to be taken into account when thinking about these things, but seeing it as closing the door on them is just not thinking through the options carefully. For example, AI and uploaded minds can be in artificial or virtual bodies.

I thought about these issues when reading Riccardo Manzotti’s IAI article: There is no problem of consciousness (warning: possible paywall), in which he lays out his idea of what consciousness is, which he calls the mind-object identity hypothesis.

The basic idea is that we make a major mistake by separating our minds from the world, of seeing consciousness only existing in our heads, and dividing the subjective from the objective. Manzotti says we need to back up and rethink this notion. His idea is that consciousness is the relationship between our bodies and the objects that exist relative to it.

He acknowledges that different people and creatures will have different experiences of those objects, but that’s because they have different bodies. He also addresses dreams, imagination, and hallucination by noting that we can only dream, imagine, or hallucinate some combination of things we’ve actually perceived at some point. So we can’t imagine a color we’ve never seen. And while we can imagine a pink elephant, we do so by assembling properties from other things we have perceived, such as normal elephants and pink things. All of which means that our consciousness is not just extended in space, but also in time.

So rather than just talking about embodied cognition, this seems like an expansion into in-world cognition. It has some resonance with the extended mind thesis, but seems more radical and comprehensive.

Manzotti rejects the idea that his theory fits within panpsychism, which he regards as “dualism on steroids”. He emphasizes that his theory is the ultimate physicalist view, only having room for objects that exist in relation to each other. But there are different variants of panpsychism, not all of which consider themselves distinct from physicalism. And objects existing in relation to each other seems like a universal condition.

I think the only way his theory can avoid panpsychism is to regard a brain, or a system with similar functionality, as being crucial for consciousness. So a bunch of rocks in relation to each other aren’t conscious. But a brain in a body in relation to all those rocks would be. The thing is, once we do that, we’re no longer just talking about objects existing in relation to each other.

And this gets at my overall concern with both this theory and overly expansive conceptions of embodied cognition. The brain remains the crucial component. Remove it from the body or the object relations, and there’s no consciousness, or cognition at all. (Or at least nothing like our version of these things.) But with the brain back in the mix, it’s not clear how distinct Manzotti’s theory is from other materialistic theories of mind.

Still, it’s a fascinating idea, and my short description here doesn’t do it justice. I recommend reading his article if you find it interesting. He also put out a paper on this idea in 2019, which I haven’t read yet, but looks like it goes into a lot more detail.

What do you think? Am I overemphasizing the role of the brain here? Or missing something important in the embodied cognition idea or the mind-object identity theory? What do you think of Manzotti’s theory overall?

108 thoughts on “The urge to downplay the brain”

  1. While I don’t want to make an ass of myself given I haven’t read the other papers, I would think this summary makes sense. Without the body where these neural connections originate, there would be no cognition. Needless to say, it is impossible to imagine cognition without a brain.


  2. I’m happy with the idea that we put ourselves in relation to other things, as a two-way process (I also think a relational view may have wider applicability in metaphysics). However, a clear directionality of subject to object is imposed by us assigning a valence to what is currently incoming, and to what the consequences might be for us of outgoing actions that we might take, and doing this every cognitive cycle in order to survive and thrive. That’s an actual calculation that not only has to be done (in the brain, supported by the body), but also has to be brought to a head as a coherent set of actions, attention and discriminations a few times a second. I don’t see anything in his description that gets to grips with this specific detail.


    1. Good point on valences. The affective part of consciousness often gets overlooked in these theories.

      I like Manzotti’s theory for its out-of-the-box thinking. I honestly don’t think there’s any real fact of the matter on whether it’s a description of consciousness. But it is useful for getting us thinking along different pathways.

      That said, the brain is definitely where a lot of the crucial aspects take place. And it seems like those aspects need to be situated at the causal nexus, so it seems meaningless to talk about valences happening in the objects in the environment.


  3. I like your presentation of Manzotti’s theory of consciousness, and it makes me want to read his article and paper, which I have not yet done. Responding to your presentation of his ideas, I like his approach to the supposed hard problem of consciousness because it gracefully leaps over most of the obstacles learned and well-meaning experts have created to quash any attempts at AGI (Artificial General Intelligence). It ignores our various subjective experiences of “what it’s like to experience consciousness”, and the question of whether our sense of reality is an illusion or a user interface, and asserts that, whatever is outside us or inside us, a crucial function of our brains is to maintain a viable relationship with the world in which we exist, in order to survive and thrive. To my way of thinking, this approach is clearly functional. Consciousness is whatever it does. I believe there are countless kinds and degrees of consciousness, and it’s high time we increased the resolution of our approach to the subject.

    I’ve recently read Nir and Tononi’s paper, “Dreaming and the brain: from phenomenology to neurophysiology”; Hobson, Hong, and Friston’s paper, “Virtual reality and consciousness inference in dreaming”; and Peña-Guzmán’s book, “When Animals Dream: The Hidden World of Animal Consciousness”, all of which I’ll attempt to summarize in a future blog post. Current research indicates that most mammals, some birds and fish, octopuses and cuttlefish, and numerous other animals dream during REM and non-REM sleep. Peña-Guzmán, a philosopher reviewing the research, makes the case that sleep is a form of phenomenal consciousness detached from all sensory inputs. Doctors also indicate that fetuses achieve consciousness in utero around the seventh month.

    Some theorize that something in our DNA predisposes us to pre-build a basic model of the world and puts us at the center of that modeled world. After we are born, we are constantly verifying and correcting our model against external and internal sensory and motor inputs. Why can’t a set of assumptions about the world be pre-coded into an AGI system and then corrected based on its relation to the real world, such as it is?


  4. “The basic idea is that we make a major mistake by separating our minds from the world, of seeing consciousness only existing in our heads, and dividing the subjective from the objective.”

    I can agree with that part.

    ” His idea is that consciousness is the relationship between our bodies and the objects that exist relative to it.”

    But here I have to disagree with Manzotti.

    From my perspective, the objects that he refers to are themselves created by our perceptual systems. It is our perception and our minds that divide the world into objects. But Manzotti seems to be assuming that those objects are independent of us.

    Manzotti: “The notion of subjectivity is an invention”

    I’m more inclined to the view that objectivity is an invention.


    1. … and ‘I’ as a single thing am also an object created by the mind for ease of computation. ‘I’ am actually messily distributed over space and moving around, a loose coalition of moving parts that survives while it does well enough at approximating what is going on.


    2. Yeah, Manzotti presents his theory as the ultimate physicalist one, distancing it from both panpsychism and idealism. Here’s his response to idealism.

      Idealism, on the other hand, requires thinkers who have ideas, while MOI, has no place either for ideas or thinkers, only for relative objects that bring each other into existence by means of mutual causal relations. We are objects, yet not the objects – our bodies – that scientists have always pointed to, anxious that not to identify self with body meant accepting the notion of an immaterial soul. Fortunately, MOI advances another solution: we are the objects that take place relative to our bodies.

      Of course, you can flip this and say that all of these objects and bodies are creations of the mind rather than the other way around. I’m closer to Manzotti’s view, but not entirely there.


    1. Good point, but as you note, they are radically different. Whether any particular system is or isn’t conscious is, I think, about how much like us that system is. So something radically different isn’t going to trigger most people’s intuition of a fellow being.


  5. I wrote this on my blog:

    “The evidence for this model is hiding in plain sight. We need not point to brain scans, physics, or information theory. The primary evidence is your own conscious experience itself. Your conscious experience is the model. It may be difficult to appreciate how literally this is meant. Consciousness is a model of the world with yourself in it. That the model projects a reality externally from the body (and the brain) adds to the illusion that what we see, hear, and experience is exactly what is there instead of a model of what is there.

    The biggest hurdle to understanding this concept is understanding that you, your body, your room, the tree outside your window, the clouds in the blue sky at a distance, all you remember about the past or imagine about the future, are parts of the model. You are not seeing a tree outside your window. You are seeing in your model a representation of a tree outside your window (which is also part of the model). The thumb that hurts if you accidently hit it with a hammer is not your real thumb. It is your model of the thumb. What is more, it is not just the tree or the thumb that is in the model, it is the “you” seeing the tree or feeling the thumb. Our self is embedded in the model, not separate from it.”

    https://broadspeculations.com/2022/07/10/what-wind-tunnels-can-tell-us-about-consciousness/

    I think what I am saying is a different way of saying what Manzotti is saying.


    1. My understanding of Manzotti is that he would probably deny the model part and just say that we are all those things in relation to our bodies. So his view of what’s going on in the brain would be something different from the models. Of course, that might come down to how we define “model”. It might be that the views are the same once we reconcile definitions.

      My reaction is the same as it was back then. I mostly agree, but think it’s a mistake to say we “see” the model. (It’s why I’m not a fan of the word “representation”, since it implies something being re-presented to some internal observer.) I think we see the outside world, and the model is how we see it. That those models can be retroactivated by imagination or dreaming doesn’t seem to change that. In that sense, I’m closer to Manzotti here (I think).

      If we say we see the representation, then we’re faced with having to explain how that internal “seeing” happens. Is there a model of the model involved? (Which actually is what higher order theory is about.) But then is there a model of that model of the model, etc? In other words, we face the danger of an infinite regress, which we have to break out of at some point.


      1. “I think we see the outside world, and the model is how we see it.”

        In a sense I could agree with the caveat that the outside/inside distinction is part of the model. The outside world as experienced is part of the model.

        To quote:

        “The hypothesis is simple: there is a world of physical objects that take place relative to your body – the laptop, the mug, and all the rest. There is no inside and no outside. There is no here and no there. There is just your existence, you, as one would expect in a physical world. ”

        “Is there a model of the model involved?”

        Please. The model(s) derives from the integrated processes in the brain and constantly changes but there is no reason to imagine a homunculus or models of models of models. The internal observer like the outside world is a part of the model, not the thing that sees the model. To quote myself: ” It is not just the tree or the thumb that is in the model, it is the “you” seeing the tree or feeling the thumb. Our self is embedded in the model, not separate from it.”


  6. Okay, having now read the article, here goes:

    I don’t think Manzotti’s Mind-Object Identity theory does anything useful. It simply re-defines identity, i.e., “you”, which is pretty much the same thing that most (all?) other theories of consciousness do. In this case he redefines “you” as being the stuff in your body in relation to all the stuff outside your body. And that’s fine, and I can see how it “works”, but I don’t see what that does for me. What it fails to do is recognize the nature of those relations (mutual information) and how they come to be via brain processes. It also fails to recognize that for a given physical system and its physical description there is a non-physical, information processing description, and this provides the objective/subjective difference.

    *
    [to say it again, “you” are software]


    1. Yeah, I’m not sure myself how much the view actually buys us. I like the way he’s thinking, since it coaxes us into new ways of looking at the problem. But as I noted in the post, the brain (or equivalent system) remains crucial, and how it does what it does is still a critical piece of the discussion.

      It does highlight how much any theory of consciousness is first and foremost a philosophical proposition, which must be accepted prior to trying to do any science with it.

      As I’ve noted before, I’m not wild about your language referring to “non-physical” information processing. I think information processing is 100% physical. It increases entropy and produces waste heat. And the hardware / software divide really refers to an innovation of technological computation that I don’t think biology ever found. But we agree that the information processing is central to the story.


      1. Oooo, so we have a point of contention! 🙂

        Agreed, information processing is 100% physical, but the informational description is separable from the physical description because it is multi-realizable. The informational description tells you nothing about the physical description, except that it limits it to a certain set of possible physical descriptions.

        As a tenet of information theory, the informational description could theoretically be given entirely as a combination of the information operators COPY, NOT, AND, OR. (Or just NOT and AND, if you want to get technical.) That includes what neurons are doing, although the description of what a single neuron is doing in this form would be ridiculously long.
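
        To make that concrete, here’s a toy sketch (my own illustration, with made-up physical encodings, not anything from Manzotti or a textbook). The same informational description, built from nothing but NOT and AND, can run over completely different physical realizations:

        # A toy informational description built only from NOT and AND.
        def NOT(p): return not p
        def AND(p, q): return p and q

        # OR and XOR expressed purely as combinations of NOT and AND.
        def OR(p, q):  return NOT(AND(NOT(p), NOT(q)))
        def XOR(p, q): return AND(OR(p, q), NOT(AND(p, q)))

        # Two hypothetical physical realizations of the same two informational states.
        encodings = {
            "voltage": {True: "+5V", False: "0V"},
            "synapse": {True: "burst", False: "silence"},
        }

        for name, enc in encodings.items():
            # Same informational description (XOR), different physical description.
            print(name, [enc[XOR(a, b)] for a in (False, True) for b in (False, True)])

        The XOR truth table is identical in both cases; only the physical vocabulary changes.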

        And subjectivity refers to the informational description. Whether something “sees red” is explained entirely in the informational description, and will be the same for any system that has an identical informational description.

        It is unfortunate that the hardware/software divide was first recognized in the programmable computer context, just as it is unfortunate that the fundamental nature of information (correlation) was first analyzed in the communications context, which ended up relegating it to a side topic in information theory. But here we are.

        *
        [I do love a good rant]


        1. Hey, those points of contention happen!

          I agree that it’s possible for the informational description to be made multi-realizable. But biology doesn’t use that multi-realizability naturally in the case of the mind. We may be able to do it technologically someday, but that would be us adding new capabilities. (A case could be made that it does happen with genes in DNA and RNA.)

          But here’s the thing: every realization of information is physical, and if all the physical realizations are destroyed, then the semantic information doesn’t exist anymore. We could go platonic here and insist that it still exists platonically, but that doesn’t help us with anything practical. Neither does the fact that it would still exist as physical information in some transformed manner. In any pragmatic sense, the semantic information system would be gone. For example, we can’t read many ancient literary works because they’re lost, in that every copy of them that might have existed was destroyed over the centuries.

          So I think it’s possible that a mind will someday be run as software, but it’s not how they exist in evolved brains. While brains are computational systems, they’re not general purpose ones with the right architecture to erase the mind that’s there and load in a new one. It’s just not a capability that was ever selected for.

          [Hope that doesn’t come off as a rant. Let me know if I’m missing anything here.]


          1. Nothing you say here is wrong, but I think you’re not getting what I’m trying to say. I think we may need to jettison the word “software”, because you’re not seeing it how I see it. Hardware/software is a special case of physical/informational. I tried “informational description” but that didn’t seem to catch. Did the COPY, NOT, etc. part make sense?

            To try again, I’m not saying that what a neuron is doing is like software on a programmable computer. I’m saying that what a neuron is doing has an information description (COPY, NOT, etc., at bottom). And the informational description is independent of the physical description, except that there must be some physical system which implements the informational description. Just like a book needs some physical implementation, whether printed on paper, or stored as digits in a computer, or memorized by a traveling bard. But the content of the book, the story, is independent of the medium. The story is the informational description. Subjectivity, aboutness, is in the story, in the informational description.

            Does that help?

            *


          2. I think I understand what you’re saying. It’s a view I’ve tried on a few times over the years. I didn’t pull the platonism reference out of thin air. It came from a lot of previous reasoning. Much of which culminated in the view that if we’re going to talk about abstractions existing independent of their physical implementations, we’re talking about something like platonism.

            Citing the logic gates is an interesting move. Yes, they are information processing components, but they are also causal mechanisms. This gets into how we define “physical” and “information”. I define “physical” as operating according to understood or understandable physical laws and causally interacting with other physical systems. COPY, NOT, etc, seem like straight causal machines to me. And as we’ve discussed before, I see “information processing” as causation and “information” as a snapshot of causal processes.

            So if you still say that this information is non-physical, I’d ask what definitions of “information” and “physics” you’re using.


          3. So if something is a straight causal machine, does that mean you can tell whether it executes an AND vs an OR? Because you can’t, without further info. As an example, here’s a system whose two inputs are either “a” or “b” (choose your own physical values) and whose outputs are either “x” or “y”:
            a + a => x
            a + b => y
            b + a => y
            b + b => y
            Is this an AND or an OR? The answer is, it depends whether “a” is “true” and “b” is “false”, or vice versa.
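
            If it helps, here’s a throwaway sketch of the point (my own toy code; note I’m also assigning truth values to the output labels “x” and “y” along with the inputs):

            # One fixed "physical" mapping over input values {"a", "b"} and output values {"x", "y"}.
            physical = {
                ("a", "a"): "x",
                ("a", "b"): "y",
                ("b", "a"): "y",
                ("b", "b"): "y",
            }

            def as_logic(truth):  # truth maps each physical value to True or False
                return {(truth[p], truth[q]): truth[out] for (p, q), out in physical.items()}

            # Reading 1: a=True, b=False (so x=True, y=False) -- the table is AND.
            print(as_logic({"a": True, "b": False, "x": True, "y": False}))
            # Reading 2: a=False, b=True (so x=False, y=True) -- the same table is OR.
            print(as_logic({"a": False, "b": True, "x": False, "y": True}))

            Same physical machine, two different logical descriptions, depending entirely on the labeling.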

            My definition of information is “correlation”, which is the same as “mutual information”. Physical interaction creates or “causes” correlation. Because interactions follow rules (“physical laws”), the output of an interaction is correlated (mutual information) with the inputs (and the mechanism, if you’re separating that from the “input”, as I’m inclined to do).

            This correlation is a result of a physical process, but is not a physically measurable property. I can’t just give you X and ask what it correlates to.

            So what does a COPY copy? Correlation. Say “a” is correlated with “y”. If the operation is COPY(a) -> b, then b has (approximately) the same correlations as a, which is to say, b is correlated with y. A NOT operation inverts the correlation, etc.
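
            Here’s a crude simulation of that, just as an illustration (my own sketch, with a made-up noisy sensor):

            import random
            random.seed(0)

            y = [random.choice([True, False]) for _ in range(1000)]  # some state of the world
            a = [v if random.random() < 0.9 else not v for v in y]   # a is (noisily) correlated with y
            b = list(a)                                              # COPY(a) -> b
            c = [not v for v in a]                                   # NOT(a) -> c

            def agreement(u, w):
                return sum(ui == wi for ui, wi in zip(u, w)) / len(u)

            print(agreement(a, y))  # ~0.9: a carries mutual information about y
            print(agreement(b, y))  # same value: the copy inherits a's correlation with y
            print(agreement(c, y))  # ~0.1: NOT inverts the correlation

            And note that you can’t see any of this by inspecting a single value of b on its own; the correlation only shows up when you compare against y.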

            Questions?

            *


          4. As you note, correlations are caused by physical processes. But if we can’t measure correlations, then how do we recognize their presence or absence?

            I agree that a correlation in isolation is only going to be detectable if we already know something about the causal history of the correlates, or can observe patterns in the instances of those correlations that clue us in to their shared causal history. For example, noticing the relationship between the number of tree rings in a tree trunk and the age of the tree when it was chopped down.

            Correlations are only correlations because of that causality. When we take an observed value and use it to conclude something about another value someplace else, that first value is having a causal effect, leading to correlations in our brain with the other value.

            All of which is to say, it seems like causality remains at the center of this. And a correlation, considered in isolation, is a snapshot of causal processes. So I think we’re on a similar page here.

            Except for whether “non-physical” is accurate or misleading here.


          5. I was with you until
            “When we take an observed value and use it to conclude something about another value someplace else, that first value is having a causal effect, leading to correlations in our brain with the other value. “

            First, I want to keep causality simple. Causality is about interactions between physical systems. When you say “the first value is having a causal effect” I’m not sure what you mean. Would changing the correlations in the first value change anything that physically happens? I hope not.

            I guess you could say that given changes in the correlation in the first value, the same physical process “causes” the effect to have a different correlation. But then you’re just giving an informational description as opposed to a physical description. Again, for any physical interaction there is a physical description and an informational description. And when talking about causation, it would be good to be clear which description you’re using.

            *


          6. For me, as soon as we talk cause and effect, we are already in thought space (informational, not physical), because we have arbitrarily partitioned up the unfolding universe into separate systems (including observers) and categorised possible causes and effects into a mentally manageable number of categories at defined times in order to measure a correlation. We (including physicists) never actually know the current physical situation or the detailed material properties at every point to infinite precision, so we are unable to precisely project forward from a cause to an effect with 100% accuracy, except in a non-existent perfect experiment. We are always only striving to make a useful enough mental approximation to reality (whatever that is!).


          7. Some of it might be the way I worded that sequence. What if it’s done this way?

            When we take observed value-A and use it to conclude something about value-B someplace else, value-A is having a causal effect, leading to correlations in our brain with value-B.

            “Would changing the correlations in the first value change anything that physically happens? I hope not.”

            Changing value-A would, since that’s the one we’re interacting with. (Assuming the change happens before or during our interaction.) Changing value-B wouldn’t, at least until we interacted with it. Sound better?

            I don’t think there’s any sharp boundary between a physical description and an informational one. What we call the informational one is just a description with many of the lower level details abstracted out. But even a description we label “physical” exists at some level of abstraction. Of course, a different set of lower level details could implement the same higher level abstractions, which is multiple realizability. But it’s always physical, just with different levels of description.

            Clear as mud?


          8. “ I don’t think there’s any sharp boundary between a physical description and an informational one.”
            This statement suggests a category error. The two descriptions are describing two distinct categories. And you have to distinguish them if you want any hope of explaining consciousness, experience, subjectivity.

            Let me try this: you can describe any physical interaction as
            Inputs(a,b,…) -> [mechanism] -> Outputs(x,y,…)
            this will have an informational description of
            Correlations(a*,b*,…) -> Correlations(x*,y*,…)
            Changing just the correlations of a (so, changing a*) will not change the outputs(x,y,…), but may change the correlations x*,y*,…
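
            A minimal sketch of that schema (my own toy code; the “star” argument is just standing in for a signal’s correlation):

            # Physical description: a fixed mechanism over physical values.
            def mechanism(a, b):
                return a and b  # say, an AND over voltages

            # Informational description: what each input happens to be correlated with.
            def run(a_value, a_star, b_value, b_star):
                x_value = mechanism(a_value, b_value)  # the physics: unaffected by a_star
                x_star = f"({a_star} AND {b_star})"    # the information: tracks a_star and b_star
                return x_value, x_star

            print(run(True, "predator nearby", True, "daylight"))
            print(run(True, "false alarm", True, "daylight"))  # same output value, different x*

            Changing a* changes nothing about what the mechanism physically does; it only changes what the output is correlated with.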

            On further consideration, I don’t think we’re differing on the metaphysics. I think we’re just differing on phrasing, and I will suggest that if your project is to explain consciousness, experience, subjectivity, etc., you will be better served to make the above distinction clear, because those things (consc., etc.) are best explained in terms of the informational description.

            *
            [and informational description = software, dammit]
            [I can leave it there unless you have questions]


          9. “Changing just the correlations of a (so, changing a*) will not change the outputs(x,y,…), but may change the correlations x*,y*,…”

            Maybe, maybe not. Changing a* means changing the causal chain that caused a to be a. And it changes the causal effects of the output x,y, which is what it means to change x*,y*. The issue is you’re drawing a line around the system, talking about its outside correlations changing, but then only looking at the system in isolation. Changing a* and x*,y* changes things outside the system, both in its precursors and aftereffects.

            I agree that we’re on the same page ontologically. The reason I resist the idea of a sharp distinction is it causes confusion. Christof Koch, using a Shannon model of information, thinks information processing isn’t what’s important for consciousness, but the causal structure of the system is. I say that’s a distinction without a difference, because an information processing system is a system with a high degree of causal differentiation relative to its energy levels. In my mind, that’s what it is to be an information processing system, at least once we situate it in its environment.

            I view software as an easily changeable causal structure, with hardware being one less easy to change. The brain seems somewhat in between, since the strengthening and weakening of synapses essentially changes its causal structure, but not as easily as loading new software on my laptop.

            I’m enjoying the discussion, but totally okay if you’re ready to wrap.


          10. I was ready to wrap until “Maybe, maybe not”.

            Yes, I draw a line around the system. Well, actually, I put brackets around it and call it the mechanism. But this is standard and necessary practice to say anything about anything.

            “ Changing a* and x*,y* changes things outside the system, both in its precursors and aftereffects.”

            Changing a* changes something in the prior causality, but has no effect on how the system responds to a. x* and y* may be different, but x and y are not, and nothing that comes after is physically different. That’s how you get hallucinations or illusions or simply mistakes. You respond to a as if its correlation is a*, but the actual correlation is different.

            Yes?

            *


          11. Ah, I drew you back in. 🙂

            I think what you’re saying applies to every logic gate, circuit, opcode, subroutine, or subsystem, all of which we could consider in isolation, but whose placement can change its causal role. In every case, there’s both a physical and information description. (It’s worth noting that, somewhat related to Peter’s point, a description is itself information.)

            Or consider homonyms, like the word “bear”. The word itself is information, but it’s information that can have different causal structures around it. If I say, “You bear the weight of that burden well,” it likely has very different prior causes and aftereffects from, “Look out for the charging bear!”

            Illusions can be from misidentified correlations (such as the color of the infamous dress), although I think a hallucination typically implies the mechanism(s) are going wrong as well.


          12. Re: logic gates, … subsystems, systems: yes.

            Re: homonyms, it’s worse than that. Don’t know if you want to start this tangent, but every input a doesn’t just have a single correlation. It has vast numbers of correlations, although most of those do not have a correlation value that’s very high. My go-to example for this is the hand-written sign in a Japanese mountain village that says “Come in for the best breakfast for miles around. Today’s special is fried fish”. That sign has a high correlation with the establishment being a restaurant, but it also has a high correlation with someone understanding colloquial English living nearby. Which correlation is relevant is determined by the responding mechanism.

            Re: hallucination implying mechanism going wrong, I think you need to tread lightly. There are two mechanisms involved here: 1. The mechanism which creates the input a, and 2. the mechanism which responds to the input a, with the assumption that a has the correlation a*. Mechanism 1. goes wrong if it generates a, but without the correlation a*. Mechanism 2. only goes wrong if it responds to something other than a. Otherwise it is correctly responding to a, even if a doesn’t have the assumed correlation a*.

            *


          13. Are you familiar with the idea of unlimited pancomputationalism? It argues that because we can always map a physical system arbitrarily to any meaning, every physical system implements every algorithm.
            https://plato.stanford.edu/entries/computation-physicalsystems/#UnlPan

            We had an epic debate about this years ago on the blog.

            Are rocks conscious?

            Although your point that most don’t have a high correlation value sounds similar to my view: that the mappings for most of those algorithms are so involved that they themselves amount to an implementation of the algorithm, while blaming it on the other system. Overall, I don’t think we can ignore a system’s environment, essentially its structure and relations in the world. (Which actually resonates with Manzotti’s view, now that I think about it.)
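
            To illustrate that mapping point, here’s a toy version (my own sketch, not from the SEP entry): a trivial “rock” process gets mapped, after the fact, onto a real computation, and all the work ends up in the mapping.

            def rock_state(t):
                return t  # the "rock" just ticks through states 0, 1, 2, ...

            def fib(n):  # the computation we want to "find" in the rock
                a, b = 0, 1
                for _ in range(n):
                    a, b = b, a + b
                return a

            # Post-hoc interpretation: map the rock's state at time t to the t-th Fibonacci number.
            interpretation = {rock_state(t): fib(t) for t in range(10)}

            # Under this mapping the rock "computes" Fibonacci, but only because the mapping
            # itself already contains the whole computation.
            print([interpretation[rock_state(t)] for t in range(10)])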

            On hallucinations, tread lightly? Am I in danger of offending hallucinators? 🙂 Anyway, my point is that a lot of hallucinations come about because the mechanism where a leads to x,y can itself go awry. Think of a neural lesion caused by a stroke. At that point we might still have the right a*, but get m,n rather than x,y, leading to m*,n* instead of x*,y*.


    2. Hi all! Thanks for considering my hypothesis (which is an empirical hypothesis and not a philosophical sleight of hand!) and thanks to SelfAwarePatterns for the excellent post.
      However, I’m always struck by the misunderstandings into which many readers (no doubt in good faith) stumble. As regards this thread, consider that James got the very opposite of what I state in my work. That is, James writes that
      “[Manzotti] redefines “you” as being the stuff IN[SIDE] your body in relation to all the stuff OUTSIDE your body”
      This is the opposite of my view, which states that (let me borrow the above wording):
      “[Manzotti] redefines “you” as being the stuff OUTSIDE your body in relation to all the stuff INSIDE your body”
      So great is the strength of commonsensical assumptions that many readers snap back to old ideas.


  7. If a bunch of rocks in relation to each other aren’t conscious, I can’t see any reason why a brain should be. And what does “being in relation to” mean? I didn’t go through the whole paper, but saying that an apple has the properties of the experience of the apple sounds to me like a tautology (or an impossibility?) that doesn’t clarify much. Adding that an experience of an apple is the apple that exists relative to the body also sounds like something either obvious or semantic trickery. Or, as Manzotti states at the end of the paper, that “experience is identical with the external physical object that exists relative to my body” seems to me either an empty statement or, to the contrary, a good starting point for an idealist conception of reality. The latter would make much more sense to me. But, as I understand him, that’s the opposite of what he had in mind.


    1. I think his thesis is maybe a little more robust than that, but I’m not sure how much. I do agree that I’m not sure what it’s really buying us. We could view it as the causal effects of consciousness rather than consciousness itself, which might come down to where we decide to draw the borders around “consciousness itself” vs the things that affect it. As I noted to others, it’s very much a philosophical proposition rather than an ontological claim. Except, as you note, Manzotti sees his view as very strongly physicalist.


    2. I have referred to my view, which I think is similar to Manzotti’s, as limited solipsism, which would be a form of idealism.

      In short, the only reality we know is in our mind, but not all reality is in our mind. A question would be how, and in what way, what is in our mind is accurate to what is not inside it. Science helps to answer that, but it is still subject to limitations, because there is no knowing without mind.


      1. Isn’t that the question Donald Hoffman addresses with his interface theory of perception? Roughly speaking, we have only an approximate or even completely false perception of reality because that is what evolution favors. But I don’t think this captures the idea of why we perceive only a figment of the world. It is not about approximations. We can have an arbitrarily precise representation of the world, but that would not bring us an inch closer to the things as they are in themselves. It is like living in a world of shadows. However precise our knowledge of the shadow might be, it will always remain just a shadow. Once we clearly grasp this, lots of questions dissolve.


        1. I mention Hoffman in another comment. Yes, the idea is similar, but I think some of Hoffman’s formulations have problems. The first part of the post linked below is a more extended critique. The concept of consciousness as interface predates Hoffman, so it isn’t even original with him.

          https://broadspeculations.com/2020/01/15/evolution-learning-and-uncertainty/

          In short, I think we must have a greater degree of congruence with reality than Hoffman seems to think, because Hoffman’s model doesn’t account well for learning. Also, even Hoffman admits that there may be areas like logic and mathematics where we may be perceiving reality correctly. Edelman also criticizes Hoffman’s view that perceptions can never be truthful. Of course, Kant’s thing in itself will always be elusive because it doesn’t really exist, but it is an open question how much consciousness – our model of the world – matches the real world.

          To quote from my piece linked above:

          “The brain, of course, is all about seeing relationships, identifying differences and similarities. It does this with learning and memory. What our senses present to us may not be veridical but the relationships in what is presented must be veridical or we could not interact consistently with the world. When people are fitted with prism glasses that turn everything upside down, they learn in a few weeks how to interact with the world using completely upside-down input. This is possible because the relationships between the objects are relatively the same. Regularity and consistency in the world still exists and the brain can learn about the regularity and how to operate with it. Hoffman almost acknowledges this when he writes: “Whereas in perception the selection pressures are almost uniformly away from veridicality, perhaps in math and logic the pressures are not so univocal, and partial accuracy is allowed.” I would expand Hoffman’s statement to include the ability to distinguish relationships in general.”


          1. I see you wrote an extended piece on Hoffman. I will try to have a look later. Meanwhile, some comments on the above. I don’t have a disagreement with that, but my critique is different: it is about the relationship between his theory and an idealist view of reality. His theory is not about idealism, as so many seem to misinterpret him. I don’t question the degree of congruence in Hoffman’s theory, because I think the degree of “congruence” or “accuracy” or “approximation” (name it) of consciousness with reality isn’t what characterizes an idealist perspective. Kant’s thing in itself isn’t unknowable because of a lack of congruence; it can’t be known with any degree of accuracy at all. In this sense, it has nothing to do with evolutionary fitness functions (even though evolution might have added a coarse-grained perception on top of that). Even if Hoffman were completely wrong and it turned out that we have a perfect one-to-one representation of reality in our heads, that would not change the fact that we still live in a sort of virtual reality that has nothing to do with the things in themselves.

            I don’t think Kant would agree that the noumenon doesn’t exist. It exists but it is unknowable. That’s why I like the Platonic cave allegory: the 3D objects projecting the 2D shadows exist, but if we try to know them by representing them with yet more shadows, we will never know those objects. Not even in principle. And this impossibility has nothing to do with congruences, approximations or accuracies that match the world by identifying relationships, differences and similarities. And I don’t think consciousness is a representation of the world (at least not what is called phenomenal consciousness). Consciousness is that which becomes aware of those representations but is not itself a representation. This doesn’t imply that nothing is veridical. There is a relationship and a degree of veracity and consistency between the shadow a cylinder projects (a circle, or rectangle, or square) and the cylinder itself. But the lack of congruence isn’t due to an insufficient permanency of regularity. Also, the prism analogy isn’t an example that highlights what idealism wants to point out. Even with no selection pressures and perfect accuracy we would still live in our phenomenal world, alien to any noumenal truths.

            So, this is not the real issue that the idealist points out, which Hoffman, and many others, seem to misinterpret.


          2. “It [the noumenon] exists but it is unknowable.”

            Where does it exist? In your mind as another concept. Like infinity it can be conceptualized but never reached.


          3. It is not in our minds like a concept of infinity. No more and no less than the object projecting a shadow, it is only in our minds because we are only capable of thinking in terms of shadows.


          4. Marco,

            If you’ve read any of my comments you know that I don’t find analogies or allegories useful. The “reason” we do not have access to noumena is because our minds are thinking in binaries. Binary reasoning is a limitation that we impose upon ourselves…….

            Free your mind and your ass will follow.


  8. I take a position similar to what Jordan Peterson describes as another published article that is 99.99% recycled trash with nothing new to offer.

    There are fundamental problems with the mind/body dichotomy that neither physicalists nor idealists are able to grasp. Both schools have their own take on the topic, but both schools of thought offer absurd assumptions upon which their theories are based. For the functionalist, asserting that “consciousness is what it does” is as hollow and empty as the idealist’s claim that a car, rock or atom is “what consciousness looks like through the dissociative boundary.” Functionalists are unable to account for consciousness in a physicalist framework and, likewise, idealists are unable to account for a physical universe in an “everything is mental” context. The only thing I have to offer is that recycling the garbage of ideas that have been absurd from day one is not productive.

    Touching on Jim’s comment: there is no such thing as “information” or “rules” in the absence of mind. And that grounding premise is a deal breaker for functionalism; so say good bye to the notion of algorithms in the absence of mind. Remove the system of mind from a physical universe and all that is left is a fundamental reality.

    The pathology of being human is our inability to view that fundamental reality from a perspective where the self is not the reference point. Solipsism is a problem because that reference point is reinforced by the psychosis of anthropocentrism. Introspection has limitations, but for the most part those limitations are self-imposed; and those self-induced restraints are fundamentally driven by deeply entrenched prejudicial biases or cognitive dissonance.

    “What a fine mess this pathology has gotten us into Ollie…….”


    1. I’m not sure if Manzotti would consider his theory functionalist. The name implies it’s an identity theory.

      You say the functionalist claim, that consciousness is as consciousness does, is hollow. Aside from the problem of qualia, which illusionism provides an answer to, what problems with the mind and body do you see it not addressing? Or if the issue is qualia, then what about the illusionism answer, aside from incredulity, do you find hollow and empty?

      Is the dissociative boundary thing from Bernardo Kastrup’s idea that we’re all multiple personalities of the same mind?

      Anthropocentrism is something we have to be on guard against. But so is anthropomorphism.

      I don’t think it’s accurate to blame ourselves for the limitations of introspection. Certainly it can be an issue in some cases. But in general the limitations seem due to what introspection is optimized for, what it was naturally selected for, which doesn’t seem to have included providing accurate information on the architecture of the mind. Is there a line of reasoning to indicate it should?


      1. “…what problems with the mind and body do you see it not addressing?”

        The elephant in the room is that functionalism “is” an analogy. I admire the intellectual honesty of idealists that I’ve engaged with online because they at least recognize the limitations of analogies whereas physicalists appear to double-down on the authenticity, credibility and explanatory power of their analogies. It is these gaping holes created by those analogies that give illusionists their non-sensical platforms.

        “But in general the limitations seem due to what introspection is optimized for, what is was naturally selected for, which doesn’t seem to have included providing accurate information on the architecture of the mind. Is there a line of reasoning to indicate it should?”

        Yes…. that line of reasoning is called synthetic a priori judgements followed by rigorous synthetic a priori analysis. This is sound reasoning in its purest state, far surpassing the empiricism of the physical sciences; however, nobody has been taught how to use this method. And because nobody knows how to use this method, the very idea that synthetic a priori judgements followed by rigorous analysis can contribute anything of value has been marginalized.

        And the marginalization of synthetic a priori judgements followed by rigorous analysis is without a doubt a deeply entrenched prejudicial bias. So yeah, those limitations are self-imposed…… Free your mind and your ass will follow.


        1. We talked about the analogy thing in another thread. My response here is the same. Everything outside of our immediate sensory environment seems to be understood with a symbolic framework which basically amounts to analogies and metaphors. So I don’t see tagging functionalism with that as a problem. Or as I asked before, what would be a theory beyond our immediate sensory perceptions that doesn’t involve analogy or metaphor? The only real question is how useful those analogies are.

          My statements about introspection are based on what I’ve read from psychology experiments that demonstrated how limited our access is to our own minds. So I’d say there’s both a posteriori and a priori reasoning here. And the acid test will be in how well this reasoning holds up under future a posteriori knowledge.


          1. “The only real question is how useful those analogies are.”

            Analogies are very useful if one is only interested in playing the zero-sum game of a priori analysis, but that’s the limit of their usefulness. Idealists recognize that fact, but it’s pretty clear that physicalists do not.

            “So I’d say there’s both a posteriori and a priori reasoning here. And the acid test will be in how well this reasoning holds up under future a posteriori knowledge.”

            You left out synthetic a priori reasoning in this assessment. Where do synthetic a priori judgements followed by rigorous synthetic a priori analysis fit?


          2. Honestly, I still don’t have a good handle on what synthetic a priori reasoning is.

            What I can say is that if we’re not testing and adjusting our judgments based on reasoning and observation, those judgments won’t be worth much. So putting them first in some linear sequence seems wrong to me. I think it’s more like a continuous cycle.


          3. “Honestly, I still don’t have a good handle on what synthetic a priori reasoning is.”

            Don’t feel like you’re the Lone Ranger here, I doubt that any of our celebrated academics have a handle on it either.


    2. “Solipsism is a problem”

      Yes. It is. Language and the ideas it can communicate tries to bridge the gap between your mind and my mind but it doesn’t go all the way. Can human history and the relentless push outward be thought of as an attempt to escape its limitations -its pathology as you say?

      Perhaps eventually a technology where we actually share thoughts? A Borg like reality where the “our mind” replaces the “my mind”.


      1. “Can human history and the relentless push outward be thought of as an attempt to escape its limitations -…”

        Sure, one could look at it that way; and the physical sciences have been very successful in that attempt. However, the physical sciences are not the savior of mankind, their accomplishments are a double-edged sword.

        It is because of the limitations of the physical sciences that I am so adamant about synthetic a priori judgements backed up by rigorous synthetic a priori analysis. Unfortunately, nobody knows how to use this technique because it is not understood. The exclusive, standard mode of operation of the academic community, as well as of the physical sciences, is a priori analysis, but this technique is a circular zero-sum game where there are no winners.

        I am a firm supporter of the physical sciences, but due to the discipline’s limitations, I recommend and support an innovative discipline that could be called the synthetic sciences. But before that could ever happen there would have to be a rigorous learning curve for anyone who is interested in the discipline.

        Call me the “Mad Synthetic Scientist”; sounds like a great Sci-fi novel, eh????


  9. Could Manzotti’s “mind-object identity theory” be relabeled as “mind-object overlap theory” without losing part of Manzotti’s point? The latter phrase might just refer to a combination of “extended mind” and “enactivism”, which I would be happy with. But if “mind-object identity theory” is really something more radical than that, then I’m sharing Mike’s worries that the brain is being downplayed.

    (I haven’t read the 2019 paper, in case that’s not obvious.)


    1. Not sure on your question. I haven’t read the 2019 paper yet either. Still trying to decide if it’s worth the time and energy. My sense is that Manzotti at least considers his thesis more radical. And for him to be saying something new, I think it has to be. But you never know which way these things will go once we start seriously unpacking them.


    2. The “problem” with the brain is that it is both a part of the model and in our current understanding the cause of the model. It is another part of our existence in Manzotti’s terminology. So, I don’t think the brain is being downplayed at all.


  10. After a sentient being experiences the world, are all their reflections upon it derivative? Mashups of what they’ve seen, heard, felt and understood? For much of that, I’d say yes. But this “nothing new under the sun” restraint seems to leave out mutations in thought. If DNA can be mutated into some heretofore nonexistent sense or capability, can’t memory, or at least thoughts as they are reprocessed by the brain, mutate as well?
    Your mention of Arthur C. Clarke’s 2001, and the other novel sci-fi inventions of Clarke, or of Jules Verne, or even of Da Vinci and Newton, raises the question: are all their novel ideas merely derivative?


    1. I do think it’s all derivative, but many derivatives are far from obvious. And derivatives from a wide range of sources appear like something completely new to someone who hasn’t been exposed to all those sources. Clarke and Verne used their knowledge of science to derive stories the rest of us couldn’t see.

      As I noted on your blog, I doubt Clarke was the first person with the idea of an alien intelligence affecting our evolution. When you think about it, it’s a derivative of theistic evolution, which is as old as evolution itself. But Clarke’s unique background allowed him to put it in a framework no one else had before.


      1. You realize that there must have been something original from which the derivatives are derived, right?
        I suppose the original, singular thing could have spawned similar but not identical things, which then form the substrate from which the process could continuously evolve, new from the old. In that case, everything is derived from the first, however lengthy the lineage.
        I wonder about the cellular mutation that allowed some biological molecule to be sensitive to photons. Trillions of steps later we have eyes that see. We might be derivative, but that first mutation?
        And what of other and continuously occurring mutations? And not just of biology — but of thought?


        1. From what I understand, eyes evolved multiple times, so it’s an example of convergent evolution. (Although I’m not sure whether all the variants didn’t have some precursor molecular mechanisms.)

          Whether it only happened once or several times, I think the early mutations would have been a protein whose signaling was thrown off by some of its atoms absorbing photons. If the altered signaling resulted in adaptive action by the organism, then it got passed on. Later, organisms with multiple detectors might be able to have tailored actions depending on the light direction, and so would have had an advantage. Later yet, even more detectors might lead to classification of patterns with more tailored responses, the beginnings of sight.

          In my mind, it’s the continuation of a pattern where evolving organisms become ever more sensitive to their environment. Vision brings in an enormous amount of information for that.

          But ultimately how far the derivations go might depend on the reality of quantum physics. If we do live in a block universe (or block multiverse) then it all goes back to the initial state of the universe / multiverse, assuming that’s a meaningful statement. And that initial state itself might be derived from something else we know nothing about.


          1. So “net new” is only possible by reformulating the original through derivatives? Or is “net new” just not a thing? The first spider with 8 eyes, for whatever reason, is not “new”, but only a serendipitous derivation?


          2. I’m not clear what “net new” is. But I’d say “new” is relative. Something that’s new to me might be old hat for you. Would anything be “new” for Laplace’s demon? Assuming its knowledge spans all of reality, I’m not sure it would.


  11. I haven’t read Manzotti’s paper yet (I’ve downloaded the PDF), but coincidentally I was just introduced to his ideas through Tim Parks’ _Out of My Head: On the Trail of Consciousness_.

    Manzotti’s views on panpsychism are relayed on page 277, where Parks is talking about it with his companion Eleonora. He says, “I can understand the people who believe the world _as a whole_ is, as it were, alive, in the sense that it forms an all-encompassing, interconnected, living system, but how can you believe that a stone is conscious?”

    Eleonora responds, “Isn’t that exactly what Riccardo is saying when he says the object is the experience? So if you’re looking at a stone, the stone is your experience, so the stone is conscious. It begins to sing when you look at it.”

    “No, sorry,” replies Parks. “The panpsychics believe the stone is conscious _without_ my looking at it. Riccardo’s hypothesis is that when I look at a stone there is causality between body and object such that an experience occurs which he locates at the stone, but happening there only because of the presence of my body.”

    It may be only Eleonora’s parse that makes me think this, but the implication seems to be that, if I am not looking at it, and by extension, if no-one is looking at it (or anyway conscious of it), the stone is not conscious; but when someone is conscious of the stone, the stone is conscious.

    If Manzotti is suggesting that someone must be conscious of the stone for the stone to partake in consciousness, his dismissal of panpsychism overlooks an obvious consideration. It is not that _someone_ must be conscious of the stone, but that _something_ must be conscious of it. If everything is conscious, then that could be another stone resting against it, or a raindrop falling on it. Thus panpsychism easily survives this particular objection: that experience relies on the presence of a human, or something else presumed to be capable of consciousness. In fact, the objection itself seems to assume that some things bestow consciousness, while others only partake of it, without explaining the dividing line between them.

    While I’m here, I have a couple of other comments. If Manzotti believes panpsychism is “dualism on steroids,” as you say, then he may not have properly appreciated Whitehead’s thought. Dualism posits two separate substances: mind and matter, or something along these lines. It’s true that some expressions of panpsychism call for a “dual aspect monism” between them. But Whitehead goes a little further by _deriving matter_ from living process. It’s not an “aspect” of matter, or a “ghost” inhabiting matter, but the foundation of matter. The apparent existence of the stone as a “hard lump” is merely the manifestation of the process entangling the observer and the observed — or more properly, the other observer.

    The other thing I wanted to bring up was whether JamesOfSeattle might also be Jim of Seattle, of Song Fight fame. If so, the Internet is a small world.

    Liked by 1 person

    1. Thanks for your detailed and considered thoughts.

      I suspect Manzotti would say that the stone isn’t conscious and doesn’t by itself become conscious when someone looks at it. I think he’d say that it’s the body and stone together that are conscious. Of course, this still leaves a crucial role for the body, which requires explanation. But maybe in his estimation these would only be what David Chalmers calls “the easy problems”, that is, not easy by any normal measure, but scientifically tractable, as opposed to the hard problem, which isn’t.

      I do think that if he insists the body isn’t special, that any collection of objects has some degree of experience, it ends up being a sort of backdoor naturalistic panpsychism.

      That said, I haven’t read the full paper either. So I might be getting some of the details wrong.

      Liked by 1 person

      1. I’ve now skimmed the paper, and I think Manzotti’s conception of panpsychism might be based on a misunderstanding.

        He writes, “If the world is made of relative properties, once again, a puzzled reader may wonder if physical objects have an existence independent of being an object of experience. And would this not be a form of panpsychism? Not at all! I propose that the properties of experience are nothing but physical relative properties instantiated by an object relative to another object (which in our case is the body).”

        The supposition here is that panpsychism assumes there are independent objects as usual, but with psychic aspects. This sounds like Cartesian dualism, but with “mind” extended to all of matter. It seems to be a common confusion; Carlo Rovelli and Mark Solms have also complained about panpsychism entertaining the “ghost in the machine.” But Manzotti’s alternative proposal above is pretty much what I understand by panpsychism.

        Here he does mention the incidental character of “the body” to the instantiation of objects and the generation of “properties of experience.” If we follow this through, then objects other than “the body” ought to be capable of experience. I’d be interested to know what others make of his ideas with regard to this point.

        Noting a reference to Harman, I have the impression Manzotti’s philosophy is in the orbit of object-oriented ontology.

        Liked by 1 person

        1. Thanks. Sounds like I might need to make a pass through the paper.

          The issue, I think, is that there are many versions of panpsychism. Manzotti is rejecting a particular variant, which admittedly is the one a lot of contemporary panpsychists discuss. (Although they’re not often clear about it, sometimes talking up the pandualism aspect, other times retreating to a more naturalistic version.)

          But I agree with you that Manzotti’s view is a type of naturalistic panpsychism since every object exists in relation to other objects, and is affected by those objects, albeit only to a minute extent in many cases. This fits since contemporary panpsychism says that most objects only have minuscule amounts of what we typically call consciousness.

          Interestingly, this view might have some resonance with Integrated Information Theory, since each object, existing in relation to others and being affected by them, becomes its own nexus of integration. (Although some of IIT’s axioms, like ruling out anything other than max phi, might short-circuit this relationship.)

          Like

  12. From Riccardo Manzotti’s article:

    However, ever since Plato and more forcefully Descartes, we have conceived the matter thus: I am in here and the world is out there because I am separate from the world. This way of conceiving the matter produces a mystery. How can the world be present here as a part of me, while continuing to be out there?

    I think I have a reasonable answer for his question that should leave me as part of the world. It’s that my brain is able to create a representation of the world that itself exists as me the experiencer of existence. Similarly cameras and microphones are able to create representations of what’s real rather than to exist as that specific reality. Apparently eyes, ears, and so on, are hooked up to a massive computer that’s able to create phenomenal experiencers of existence. Thus here I should exist in the form of some kind of brain physics. So what kind of brain physics might phenomenality exist as?

    Apparently neuron firing synchrony is the only reasonable neural correlate for consciousness discovered so far. Johnjoe McFadden thus theorizes that this firing synchrony tends to create fields of electromagnetic radiation that themselves exist as phenomenal experience.

    While I’ve answered his question coherently, I wonder if he could do the same? In a practical sense what does his proposal imply? Is he saying that for anything real which has a causal relationship with anything else real, a consciousness of some kind will exist between them in the form of their causal relation to each other? Thus two rocks will have consciousness in the form of their relation with each other? Or if not quite that causally ridiculous, then what does he claim?

    In any case this seems to be yet another unfalsifiable consciousness proposal. The only way to finally kill them off I think, should be for a proposal which is falsifiable (like McFadden’s) to be empirically validated quite well. Thus our softest forms of science should finally begin hardening. Until then we should be left with endless unfalsifiability here and soft science.

    Liked by 2 people

    1. My impression is that Manzotti isn’t trying to explain physically how consciousness works. He seems more about trying to explain exactly what it is philosophically and how the “hard problem” arises.

      Without getting into neuroscience, physics, or philosophy, from the ordinary perspective of common language, consciousness is usually thought of as the internal part of our experience whereas matter is thought of as the external part of experience. The internal part we seem to have some control over. The external part we can control only to the extent that we can take actions in the world. Manzotti’s view is that this internal/external distinction is false because both the internal and external are projections from a single consciousness.

      So it isn’t an attempt to explain how it works but more about correctly understanding what it is. I use the terms “model” or “simulation” to describe this. Hoffman and others before him (Dennett even?) use the term “interface”.

      In any case, the view I think is highly compatible with McFadden’s. McFadden, in his paper about spatial integration of information through the EM field, never really addresses what exactly the information is about, what it is doing, or how it works. Presumably the integrated information would be of survival value, which would mean it must be able to control actions of a real organism in a real world. To do this in an evolutionarily economical way (biggest bang for the buck), the spatially integrated information might need to create an analogue of the organism’s environment. The analogue would have digital unconscious components, innate patterns plus learned patterns that have been automated to perform automatic tasks, with a conscious analogue overlay to coordinate and handle exceptions that aren’t automated. The analogue, or model, or interface that we are referring to as consciousness creates a useful internal/external divide to demarcate the boundary where we can take action in the world.

      Liked by 2 people

    2. I agree with James on what Manzotti is trying to do. It isn’t to address what happens in the brain. I think he sees that as a scientific problem which doesn’t touch the hard problem. In other words, those are what Chalmers calls the easy problems.

      His overall thesis is that consciousness is only a problem because we’re conceptualizing it as something in the brain that needs to be explained. His view is that it’s actually explained by the relationships between objects. I’m not sure how fruitful this view actually is, but it’s definitely a philosophical take rather than a scientific one.

      Ironically, his attitude toward the science seems like it would be very similar to illusionists and epiphenomenalists. He doesn’t expect science to be able to find anything there. Although each of these groups reaches that conclusion for different reasons.

      Liked by 3 people

      1. Notice that I brought up “relationships” in my response to Marco.

        The relationships between objects relates directly to the veridicality of perceptions and the entire project of science. It isn’t surprising to find it at the heart of consciousness and what the brain does.

        Liked by 1 person

        1. One of the things science reveals is that there’s a lot more to those relationships than we perceive, a variance between the manifest and scientific understanding of things. Manzotti would probably say the portions we experience are because of our specific body. But one thing your point is highlighting for me is just how much work he’s asking that one statement to do.

          Liked by 1 person

      2. Okay James and Mike, I’ve read it again with your thoughts in mind. One unfortunate thing about philosophy is that to get popular with it one needs to say funky things that people thus don’t understand, and yet still like the sound of. Chalmers, Dennett, and Frankish display this quite strongly I think. Conversely, reducing a philosopher’s ideas down so they’re quite simple to grasp should essentially be an assault on that person’s career. Our mental and behavioral sciences often suffer from this problem as well I think.

        Anyway aside from a few extra quirky statements, apparently “Mind Object Identity” merely means that all of reality exists through worldly causal dynamics. Thus consciousness too. Of course with an explicit theme this dry, his paper shouldn’t have gotten anywhere. Thus he needed to get creative. So this seems to be the same thing that Dennett and Frankish do with their talk of consciousness being “an illusion”. It would be too simple to understand if they just said, “I don’t believe in supernatural consciousness”. Conversely Chalmers takes a spooky route here, not that his faithful tend to grasp it as such given his complex verbiage.

        In any case my observations for Manzotti remain. The world can be here as part of me while continuing to be out there… as a representation which exists by means of the right kind of brain physics. McFadden’s theory is falsifiable specifically given that the physics which he proposes could conceivably be demonstrated to not function that way. I’m not sure of a second falsifiable consciousness proposal on the market today. Beyond McFadden’s, does anyone know of a proposal which could conceivably be demonstrated false empirically?

        What these observations cry for I think is a new community of “meta scientists” (rather than traditional philosophers) whose only purpose would be to provide scientists with various sensible and accepted answers regarding metaphysics, epistemology, and axiology. If this new community were to place causality as a founding premise of science, then progress might finally begin to occur this way. Things might go the other way however, with McFadden’s theory becoming progressively more validated empirically (and so inciting the greatest paradigm shift that science has ever known), and only then would a new community of meta scientists take form to cement such progress afterwards.

        James, on McFadden’s position on spatial information, that sounds like his 2020 paper where time isn’t a factor. I think all he means there is that we’re dealing with light-speed radiation produced in the head, rather than temporal brain circuits essentially driving down neural roads to do what they do. Thus different parts of the brain could add to EM field consciousness instantly by means of the proper synchronous firing. This field itself would exist as the experiencer of existence. So here we have non-conscious dynamics which create consciousness, and how they do so is the essential question that needs answering.

        Liked by 1 person

        1. “Anyway aside from a few extra quirky statements, apparently “Mind Object Identity” merely means that all of reality exists through worldly causal dynamics.”

          I’m not sure what “worldly causal dynamics” is but I think “Mind Object Identity” is simply saying that the objects of consciousness, including the experienced external world, are mental. The world is a simulation generated by the brain might be another way of looking at it.

          Liked by 1 person

          1. James,
            Let’s say that there is some sort of agent beyond our realm that can cause earthquakes and such to happen here. Let’s say it can will anything here which isn’t logically impossible, like make someone into a better athlete. Traditionally most people have believed that such a being exists, and even today pray to it in the hope that fealty and faith will earn them various rewards. We see this displayed when people acknowledge achievement awards, since before publicly taking credit they might first give thanks to the supernatural being who’s presumed to ultimately be responsible for whatever they’re being honored for.

            It’s because such an agent would “cause” things to happen here, and yet not be constrained by the system-based causality that we’re familiar with, that I’m technically not able to describe my own naturalistic metaphysics as “causality alone”. In that case a person could say that their god causes things here and so I must be open to the existence of their god regarding whatever we’re discussing. Nope. So I must instead say that I believe in causal dynamics “of this world”. Unless they’d submit that their god is constrained by the physics and such that we seem to be, from my own metaphysical premise their god does not exist. I haven’t noticed others to also use this phrase so I’ve been meaning to explain myself on this for a while.

            I don’t need Manzotti’s “mind object identity” slogan, though I do agree that “the objects of consciousness, including the experienced external world, are mental”. In fact all that’s required is to define it this way. Beyond experience type “input” as mental, I add thinking type “processing” and decision type “output” as mental as well. I think you forgot to add one word to your last sentence though. Try this: “The [mental] world is a simulation generated by the brain…” Otherwise you sound like an idealist!

            I guess one thing that could be mentioned in general is that even if Manzotti’s Mind Object Identity doesn’t explicitly posit that the brain is crucial for human and standard organism consciousness, it sure ought to. That way Mike wouldn’t have been able to accuse him of downplaying the brain. For anyone unsure about the brain/mind connection, consider the textbook case from the mid-1800s of the personality change of Phineas Gage. Apparently after surviving a spike through his head he changed from a responsible citizen to an irresponsible lout.

            Liked by 1 person

          2. “I think you forgot to add one word to your last sentence though. Try this: “The [mental] world is a simulation generated by the brain…” Otherwise you sound like an idealist!”

            I pretty much meant what I wrote without qualification.

            I think I’ve acknowledged that, as a limited solipsist, I am an idealist.

            Matter and the physical are mental constructs in the world, demarcated, as I’ve said, by the boundaries of our own agency with “agents [or something(s)] beyond our realm.”

            Liked by 1 person

          3. Let me also repeat another comment:

            The “problem” with the brain is that it is both a part of the model and in our current understanding the cause of the model. It is another part of our existence in Manzotti’s terminology. So, I don’t think the brain is being downplayed at all.

            Liked by 1 person

          4. Wow James, for some reason I didn’t know that you were comfortable with the label “idealist”. But then I suppose there’s always the question of how a given label gets cashed out. My conception of what this term represents may or may not be what you mean. So let’s check. I’ll present what I believe to be the case as opposed to my essential conception of “idealism”. Then you might add any clarifications or alterations regarding what you believe.

            My position is that we are part of a realm of existence governed by means of causal based physics from within this system. Nothing beyond it (if there is a beyond it) will affect it in any way and therefore what happens here must ultimately be deterministic on the basis of its causal dynamics alone. While nothing here should be fundamentally mental, and also not created by means of mental agents from beyond, I believe that there should be causal dynamics from within which can create experiencers of existence. Furthermore it would seem that evolution harnessed these dynamics, not merely with the creation of life, nor with the creation of brain based function, but rather with brains that implement the proper sort of physics. I suspect that physics to be a certain kind of electromagnetic radiation associated with the right kind of synchronous neuron firing.

            If brains create experiencers that model what’s real, I see no problem with them also modeling what it is that creates such modelers themselves. (My first comment above presented that position more fully.) At some point I’d expect certain causal agents to speculate about this, even if humanity tends to suck at it.

            Regarding solipsism, I actually consider myself to be an epistemic solipsist. This is to say that all I should ever know to exist with perfect certainty, is that I have subjective experience itself. I’m not an ontological solipsist however, which is to say that I don’t believe my own mind to be all that exists. Furthermore my conception of idealism is essentially ontological solipsism except that a number of agents essentially create and share the same ultimately mental realm of existence. Here if no mental agents were to exist, then nothing would exist. Since causal dynamics aren’t presented here to create such agents (unlike in my own model), my presumption is that this position is essentially magical.

            Liked by 1 person

          5. “Epistemic solipsism” may be a better term than “limited solipsism” I’ve been using.

            The catch is that since our experience is all we know, the existence of anything else, especially anything other than the mental, is a hypothesis of the mental. The physical isn’t known except through or by the mental, which makes pure physical theories almost nonsensical. If the only thing real is physical, then nothing exists in the world that can know anything, let alone do any science on it. A purely physical world would be as ignorant as a rock appears to be. There would be no place for science or knowledge of any sort. The question for the physicalist: How do you know?

            On the other hand, if the hypothesis of a physical world has some measure of truth, then the mental must arise from physical, which would mean the mental is in some way physical or vice versa. Once that conclusion is arrived at then the divide between idealism and physicalism pretty much falls apart.

            Liked by 1 person

          6. “…if the hypothesis of a physical world has some measure of truth, then the mental must arise from physical, which would mean the mental is in some way physical or vice versa. Once that conclusion is arrived at then the divide between idealism and physicalism pretty much falls apart.”

            This is a good example of thinking in binaries, a point I brought up to Marco. In our thinking, we “begin the process of thinking” with a division and then we try to reconcile that which is now irreconcilable. This duality in the thinking process is an artifact of solipsism, and I’m not referring to the epistemic type.

            Ontological solipsism places the locus of our own consciousness at the center of the “known” universe, much like how Eric describes himself. Now Eric can call it epistemic solipsism but functionally it’s ontological. That is the very reason it’s called subjective experience, because the locus of our own consciousness is sovereign, and everything in the outside world is subordinate to that reference point; at least according to this solipsistic rationale.

            Good luck with that one guys…… around and around we go.

            Liked by 1 person

          7. How is that a good example of binaries when I dissolve the binaries in the last sentence?

            By the way, aren’t these binaries the product of this synthetic a priori reasoning at work? Only by comparing and contrasting can we make any statements about the world. Duality isn’t a product of solipsism but of thinking itself. It is how consciousness builds the scaffold of existence.

            Liked by 1 person

          8. Solipsism is an ontological “frame of reference”, the very substrate from which thinking begins and it’s a dual mental architecture. It goes like this: first there is me, that’s one thing; and then there is everything else; that’s another thing.

            It’s not like we have much of a choice other than to use binaries, just like any other animal. And like all other animals we rely upon the empirical evidence that maps to the world we find ourselves in, in order to survive. We call ourselves intelligent, but what we are is very clever. Intelligence is the ability to take it to the next level of understanding, and only the synthetic scientific method is capable of that task.

            As far as synthetic a priori reasoning goes, this method I refer to as the synthetic sciences is not recognized by academia or the physical sciences as having any value. Regardless of whether it is accepted by others or not, it has been my experience that only synthetic a priori judgments followed by rigorous synthetic a priori analysis are capable of providing a proof of any kind.

            Now, one is forced to ask: what constitutes a proof? And the answer is pretty straightforward: Universality. And that is something we do not see in the physical sciences. What we see is inconsistency, exclusiveness, contradictions and paradoxes.

            Around and around we go……

            Liked by 1 person

          9. It’s great to hear that we’re both essentially epistemic solipsists James. I guess the difference is that while the position makes perfect sense to me, for you there may be some inconsistencies to potentially resolve. So let’s consider this a bit to see if I’m wrong to think that things make sense, or maybe that they also seem sensible to you. Perhaps Lee and I could have some discussion on the matter as well since I know that he considers his position superior to mine.

            My own metaphysical primitive is that there’s a world which causally creates everything that exists, including me as an experiencer of existence. Thus here there must be some kind of worldly physics by which existence can be experienced. Furthermore I think it causes you and sentient function in general. If so however then this logically mandates that any given experiencer should only grasp things by means of this causal experiential medium. So here the single thing that a given experiencer could possibly Know regarding reality itself, is that it does experience its existence in some capacity. Logic mandates that nothing else could be Known with perfect certainty regarding what’s real itself. Therefore everything else that an experiencer might think about what’s real might better be classified under “belief” that’s more or less credible on the basis of empirical evidence. From this brand of metaphysics hypothetical scenarios like Descartes’ “evil demon” simply cannot be overcome epistemically, and this is because a given experiencer will reside within the domain which it’s trying to understand. One would essentially need to be an outside observing god to potentially grasp true reality in itself rather than remain under the inherent constraints of being an element of the system. Furthermore I reject any outside observer position metaphysically, not that this should be impossible as I see it, but rather that my own metaphysics lies in opposition with such supernatural notions.

            The good thing about my position is that because causality mandates that explanations always do exist, in the end it should be possible to grasp some of these causal dynamics by means of empirical observations. Even from a somewhat skewed representational perspective from within, this should still be possible. The rise of science does at least suggest that progress has been made in some regards.

            So when you say, “On the other hand, [i]f the hypothesis of a physical world has some measure of truth, then the mental must arise from physical, which would mean the mental is in some way physical…”, I agree wholeheartedly. Yes the mental should exist by means of worldly causal dynamics, or at least given the premise of causality. But I also don’t think that you should add “or vice versa” to this, that is unless you’re speaking epistemologically. In an ontological sense this would mean that the mental somehow also creates a causal world itself. While the former would be natural, the latter would be supernatural. But yes, to us it should seem like the world is epistemologically mental, and merely because we are mental and so our constructions of reality must exist by means of a mental medium (whether EM fields or something else).

            Lee believes that he has a better model than I do however. Rather than depend upon empirical evidence through merely subjective representations of what’s real (aka the scientific method), he begins from definition itself. Here we wouldn’t reduce our observations back to tentatively reductive theories that may seem reasonable but are impossible to ever prove as True given the imperfections of subjectivity. Instead he proposes that we bypass subjectivity altogether and use definitional truth to ascertain how things work. So in the end he’s essentially a priori while I’m essentially a posteriori.

            The problem with his position I think is that even though definitional truth may often be helpful for us to keep things straight, as in the case of mathematics, it never presents us with evidence of how things actually are as opposed to any other logical possibilities of how things might be. Essentially such logical possibility massively underdetermines how things actually work. So I think naturalism mandates the need for empirical science rather than just the “beautiful science” that Sabine Hossenfelder berates some of her colleagues for relying upon fully.

            If strictly interpreted, for example, what’s commonly known as “functionalism” cannot even possibly be false in a causal world, since a difference in function will represent a difference in existence. So even though Lee berates Mike for calling himself a functionalist, I smile and wonder why Lee doesn’t accept that title as well. It’s something which cannot possibly be false under a causal premise, an a priori truth that underdetermines how things actually are.

            This gets into the pre-scientific revolution somewhat incited by the fourteenth-century friar William of Occam. His nominalism helped overcome idealistic platonic notions of what’s real. From a platonic perspective “chairs” for example exist as such because they’ve been supremely founded with metaphysical chairness. Conversely Occam essentially laid waste to this notion by arguing that we simply name things in ways that seem appropriate to us.

            Liked by 1 person

          10. I honestly don’t care a great deal for much metaphysical philosophy, so a good bit of what I write in the idealism/physicalism space is more tongue in cheek than anything.

            Mostly I agree with what you wrote.

            My point, put more succinctly, is that if the mental and physical derive in some way from each other (pick your favored route of causation), then at their base there must be similarity.

            But I don’t consider much of that discussion useful. I am anti-metaphysics. I am in favor of a pragmatic metaphysics that doesn’t consider the discussion useful. The useful questions are the scientific ones not how finely we can differentiate nuances in abstract categories.

            Like

          11. Your post has some really great thoughts Eric.

            Since our experience is entirely mental, I would say that a posteriori knowledge is a good indicator but not a proof of anything; whereas, since a priori knowledge is entirely mental, it would be capable of providing a proof. The only question that remains open to this synthesis is establishing the criteria for, “what constitutes a proof?”

            Why we have duped ourselves into believing that a posteriori knowledge is the acid test for proof is a mystery to me. Clearly, this rationale is a limitation that we have imposed upon ourselves. It is a psychological restraint, not a limitation of intellect itself. A posteriori rationale works as long as it is not questioned; but when challenged, the rationale breaks down very rapidly.

            Unfortunately, we have never been taught to reason correctly and the expansion of information technology is contributing to the problem and making us dumber, not smarter.

            Liked by 2 people

          12. Lee,
            I wonder if your frustrations here exist because you hold science to a standard that’s impossible to achieve? As I’ve noted earlier, the only Truth that you can have about reality in itself, is that you are an experiencer of existence. All else must inherently remain tentative theory.

            Here I expect you to say that I’ve overlooked something. I expect you to say that you can know what’s true by definition. Yes you can know that, though it’s not unique to our world. This applies to all potential worlds. Thus if we’d like to understand any specifics regarding our world, then we’re forced to merely theorize on the basis of empirical observations. Here scientific understandings will always remain tentative. I see no way around this.

            Still I do think that things could be improved. I personally am frustrated that science remains without a respected community of specialists providing it with various accepted principles of metaphysics, epistemology, and axiology. Without such founding constraints science should have various problems. These problems largely explain the softness of mental and behavioral forms of science today I think.

            Like

          13. I’m not sure there is really such a thing as a priori knowledge in its purest sense.

            I think likely all knowledge originates with life and evolves through evolutionary processes. The brain and whatever knowledge it innately has came from adaptation. It was learned but learned on a much longer time scale.

            Like

          14. Not some, but all of our battles are won or lost in our heads (a priori) long before any action is taken to move a single muscle (a posteriori).

            Good luck guys……..

            Like

  13. “In any case this seems to be yet another unfalsifiable consciousness proposal.”

    The ability to falsify a proposal is a real problem for the physical sciences; however, the synthetic sciences are more than capable of proving any proposition as viable or untenable. Of course, empiricism rejects the synthetic sciences in spite of the “fact” that the physical sciences are unable to prove things false.

    The rebuttal of empiricism rejecting the synthetic sciences is the ultimate oxymoron. But we are used to living in a world of contradiction right?

    Liked by 1 person

  14. I should really hang out here more often.

    Nice article. I think you have the right take. If anything, I don’t think you go far enough. As you mentioned with the previous conversation about rocks being conscious, your view is that ultimately there have to be relations with the environment. On the other hand, I think minds can be conscious and seem to themselves to be conscious of things even if the things they seem to be conscious of do not exist and never did and they’re basically hallucinating everything they’ve ever experienced.

    Liked by 1 person

    1. You’re always welcome here! (I’ve actually been trying to participate more in Twitter discussions, but it’s still an unnatural venue for me.)

      I do think relations with the environment are crucial, but I’m open to being convinced otherwise. The key question for me is what would a mind that’s always been isolated actually think about? Even a late-term fetus has its bodily sensations, hearing, and seeing light variances to give it some content. A mind that’s completely isolated, never having a body or environment? I’m not sure what that means.

      At least unless someone has designed the mind to have well-developed innate concepts, but that seems like a type of relation with the environment.

      Like

      1. You know the swamp man thought experiment, where a complete and perfect physical replica of a person is assembled by chance? It’s supposed to show that relations to the environment are required for thought and meaning. Donald Davidson argued that the swamp man would have no thoughts and could not mean anything by its utterances. I think that it shows the opposite. If it’s a physical replica, then it must have the same experience as a regular person.

        Same thing goes here. Suppose you’re a Boltzmann brain. If what you’re saying is true, you can know that you’re not a Boltzmann brain just because you have real thoughts and your thoughts mean something. There may be other ways to defeat the Boltzmann brain hypothesis, but I don’t think this is one of them. What do you think?

        Like

        1. A point I made in our earlier discussion I should have noted above: I think a physicalist has to accept that what is or isn’t a mind is not anything fundamental. Like any complex system, whether it fits into a particular category is a matter of definition, particularly the criteria for that definition. And for such systems, it’s always possible to construct scenarios to evade a set of criteria yet still intuitively feel like they belong there. The categorizing strategy might be fine for pragmatic purposes, but in principle there will always be conceivable cases that defeat it.

          Both swampman and Boltzmann brains strike me as those types of scenarios. Essentially they get around the causal relations criteria by positing a profoundly improbable event that just so happens to produce the same result. I would note that in the case of swampman, his subsequent interactions with the world go on to establish those relationships the original had, even though he has no causal relationship with that original.

          The Boltzmann one is more difficult because over the imagined timescales, even the most improbable events should happen, and the Boltzmann brain never interacts with anything but itself in its brief existence. I’m not sure there’s any way to rule out that we might be one right now. But it doesn’t seem like a productive assumption to make.

          That said, in terms of understanding minds we have a reasonable probability of actually encountering or building, the relations and interactions with the environment still seem like a crucial aspect.

          Liked by 1 person

          1. Hi Mike,

            To me, this basically concedes the point, because of course I agree with you that for all practical purposes, and in all realistic scenarios, relationships between the subject and the environment will fix the meanings of the subject’s thoughts.

            I’m only saying that it isn’t logically necessary, and the meaning of these thoughts from the subject’s POV doesn’t come directly from these relationships. I think the meanings are somewhat sui generis, it’s just that in practice they are forced to correlate to things in the environment by the way the brain processes information.

            Like

          2. Hi DM,
            As I noted above, for me there’s no fact of the matter beyond the practicalities. But it seems like we frequently establish in our conversations that you’re more concerned about matters of abstract principle than I am.

            Like

  15. I am only responding here to a narrow point made in your post, because I do not have the time to read the Manzotti article, and because I think that even the secondary summary you offer already highlights a consequential conceptual error. In particular, it baffles me why thinkers about consciousness (especially those who are dismissive of its puzzles as Manzotti’s title seems to imply) conceptually limit themselves to relations between a subject (thinker) and an object (real-world physicalist phenomenon) when laying out their ideas to characterize what consciousness “is”. A great proportion of my daily thinking, or consciousness contents, involves no such simplistic SUB-OBJ relationship. If I am, say, trying to prove to myself the Pythagorean Theorem without a blackboard, or evaluate a passage of poetry, or compose it, or assess the moral implicature of some news event or personal conundrum, there is no such relation. Yet this is serious core consciousness activity. Or, if I scan memory for what babaganoosh smells like, I am dealing with the shadow images of various past sensory perceptions along with perhaps their feeling associations. Only by gumby stretching the notion of Object to useless extents can this idea be made to apply. Thinking, as an activity within consciousness, cannot be translated into physical operations such as software phenomena or electrical neural sequences. The S-O trope is just a construct to avoid the reality of what thinking actually is, which can be experienced directly by anyone who tries.

    I would say something similar concerning sensory perception subjective experiences, but that is not the thing being discussed in this post. Any effort to model consciousness ‘functionalistically’, i.e. as a black-box process, carries no persuasion for me because it does not resemble anything I would call consciousness.

    Liked by 1 person

    1. I think Manzotti takes himself to be addressing this criticism when he talks about the object relations not just existing in a particular moment, but also in time. So he’d say the thinking you’re doing is only possible because you have a relation to the objects in space and time. I suspect you’ll confirm that this is just what you meant by stretching the notion to useless extents.

      My own take here is I think Manzotti does have a point. If we can think in terms of base concepts we’ve never encountered before, it’s not clear to me what they’d be. (Note: composite concepts are a totally different matter. I can imagine pink elephants, but I can’t imagine an elephant that is a color I’ve never encountered.)

      On the other hand, the ability of brains to be influenced by long ago past events and to take actions in pursuit of far future ones is noteworthy. But I suspect Manzotti is happy to concede this while pointing out that this capability is something that science can investigate.

      Like

  16. Hi there! It’s been a while, hasn’t it. Enjoyed your post.

    Well, I did read the article (I guess the first article is free to read), and I can’t really wrap my mind around Manzotti’s theory. Mainly my trouble is reconciling his ‘physicalist’ stance with dissolving dualism (or claiming it’s a pseudo problem). What makes something physical vs. not physical? Doesn’t saying that something is physical imply that some things are not? It seems the very notion of the physical is a regress to the naive conception he sought to undermine. Or, as you like to say, am I missing something?

    Liked by 1 person

    1. Hi Tina,
      As always, good to hear from you!

      Manzotti’s theory is definitely not intuitive (as he admits in the article). It’s really a way of thinking about consciousness, rather than a strong ontological stance, at least other than the physicalist one. (Or at least that’s how it feels to me when I try it on.)

      Imagine, if you will, a cat climbing a tree. The traditional notion is the experience of the cat climbing the tree happens in your head. The standard objection is that there’s nothing like a cat or a tree in the brain. The dualist answer is we have to bring in something non-physical to account for it.

      Manzotti’s answer is that the experience of the cat is the actual cat (or cats) you encountered in the past allowing you to imagine the one climbing, as well as the actual tree or trees you encountered allowing you to imagine the tree, as well as whatever actual climbing you’ve encountered allowing you to work that in. Your experience of the cat climbing the tree is distributed in time and space in the relationship between all those objects and the object of your body.

      The idea is that it dissolves dualism by situating your experience not in the brain (as most materialists do) but in the world in time and space. So per Manzotti, your conscious experience is vast and not located in your brain.

      Of course, your ability to imagine the cat climbing requires that your brain, to some extent, combine reactions it once had to the cat (or cats), the tree, and to the climbing. My reading of Manzotti is he’s content to leave how that works exactly to science. He just asserts that that is not where the experience is. The experience is in all the things being reacted to.

      A regular materialist might insist that the experience is just in all those reactions, which is also counter-intuitive. Manzotti seems to feel like his answer is more intuitive. Where better to find the experience of the object, he argues, than in the object itself in relation to the body? But I’m not sure there’s a fact of the matter distinction between Manzotti and other materialists, just a difference in semantics, on what we label “the experience”.

      I’m also not sure how much conceptual work his theory really does. I do like the pathways it takes me down. Conceivably it could be an easier answer to accept than the more standard materialist ones. But it seems inevitable that no physicalist answer will ever be as intuitive as the dualist one.

      On the notion of the physical itself, yeah, “physical” can be hard to define. My take is that it’s anything that evolves according to discoverable principles or laws and interacts with other things we consider physical. The question is whether there’s anything in reality that doesn’t fit that description. The physicalist would say there isn’t. This is difficult, because anything we encounter that doesn’t follow our currently understood laws usually leads us to expand those laws. Under this paradigm, the non-physical would be, I think, something that consistently could not be tamed in this manner.

      Ok, this reply ran long. Sorry! You just asked a hard question.

      Liked by 1 person

      1. That is a good summation of my understanding too. I got his book and, after reading part of it, I’m not sure whether he is “downplaying the brain” or not.

        However Manzotti means it, I think his kind of reversal of normal thought is a useful corrective.

        In our daily life we think and act as if the cat climbing the tree is somewhere external to our brain.

        Contemporary philosophy mostly sides with neuroscience and puts the climbing cat in our brain.

        So where is the cat? I think the key is that the internal/external divide is part of what is happening in our mind. Our mind by itself places it external to our brain. But everything depends upon what “where” means. Certainly even the brain-location people would agree that where the cat is climbing must be an elaborate projection or simulation that has been generated to make the location appear to be external to our brain. And nothing like this projection can be discerned by actually examining the brain or measuring its neural circuits. So isn’t it reasonable to entertain the notion that the climbing cat is actually external? In other words, WYSIWYG.

        Anyway, I think that may be the point.

        I wonder in some of my more far-out speculations whether the cat actually does have a physical location in a fifth dimension. Consciousness would be something like those virtual particles in physics that exist only to provide a mechanism for the interchange of information between real particles. In that case, the climbing cat we experience wouldn’t be internal or external but instead on a manifold or plane connecting the information in the brain with a real cat.

        Liked by 1 person

        1. I forgot about the book. I did read his 2019 paper over the weekend, and his attitude toward the brain seems to be that it’s crucial “in our case”. Consider this from the paper.

          The traditional notion of subjectivity, as something whose existence is relative to a subject, is thus substituted by that of object-relativity, as something whose existence is relative to another object, which is, in our case, the human body.

          source: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00063/full

          He uses the “in our case” phrase repeatedly when noting the role of the brain and body. It implies that in other cases it may be different, which implies panpsychism. However, he is adamant that his view is not panpsychism or idealism. But I think a case can be made that it’s a naturalistic form of panpsychism.

          And as you point out, rather than saying the experience is the object, we could go the other way and say the object is the experience. Although I’m not sure how it all unpacks when he discusses that a single object can be innumerable experiences, each experience being its relationship with another object (“in our case” our body). So my experience of the cat may be different from yours of the same cat, because we have different bodies. (A stipulation that, I think, he depends on for a lot of work.)

          This kind of starts to seem far out when he talks about our experience of a star we see in the sky being the star light years away. In other words, our conscious self can be seen as spanning light years. He does remind us that it’s all physical, so he’s not claiming anything outside the laws of physics. (Of course, another more grounded way of looking at it is that our self is a nexus of causes from throughout the universe.)

          Liked by 1 person

      2. “But it seems inevitable that no physicalist answer will ever be as intuitive as the dualist one.”
        So true. Which makes me wonder if we ought to keep fighting it.

        Honestly, I think Manzotti’s theory is harder to accept than a standard physicalist one. To me it sounds like he’s either confused or he’s hijacked the language of phenomenology and is using it to suit his own physicalist bias, but with the consequence that he now has the disadvantages of both sides rather than one or the other.

        To me the strongest argument physicalism has going for it is the fact that the brain really does seem to be essential to consciousness. No brain, injured brain, etc., etc., all that. If you want to argue against physicalism, you’ve got those facts to contend with.

        Physicalists have experience, the phenomena, the qualia, the ‘what it’s like to be…’ to contend with.

        It seems to me his MOI theory fails to deal with either problem.

        You: “So per Manzotti, your conscious experience is vast and not located in your brain.”
        I wonder, then, why it has to be located anywhere, and tied to a physical body. He hasn’t made that clear to me. Why should anything at all be physical? Why should space and time and relationships be physical if there is “inside and no outside, no here and no there”? Why won’t he include ideas as objects? He talks about hallucinations as delayed perceptions, but what about my idea of the existence of the world as a whole, which IS an idea (since I never actually perceive it, not in the sense he means ‘perceive’)? And what about the number two?

        But he says, “MOI has no place for ideas.”

        “Different people perceive objects differently because they have different bodies. Thus they are identical with different relative worlds. There are as many relative worlds as bodies.”

        But that’s not what I experience. How could I even talk to people who lived in different worlds than my own? Now he’s got to explain yet another problem that needs explaining. How can different worlds interact with one another? How can he know that there are as many relative worlds as bodies when his theory can’t even allow for a unified physical world?

        Anyway, please do not feel like you have to answer all those questions on his behalf…consider them rhetorical. I just think too much of what he’s saying falls outside the limits of his own boundaries.

        And I do thank you for writing about this. I’m actually working on a goofy mystery story about philosophical zombies (or the inverse, or some such thing, I dunno) and I need something for my mind to chew on while I come up with the next scene. This guy’s article might make it in as a viewpoint somewhere down the line. Who knows.

        Like

        1. On fighting the dualist intuition, for me it’s not so much fighting it as evaluating it, whether by itself it’s enough to accept a proposition. Science tramples on so many intuitions that I’ve long concluded that they need to be backed up with evidence, or at least compelling logic. And neurology seems to have put constraints on dualism. The stronger varieties, such as substance dualism, are widely considered untenable these days. But property dualism is a much more elusive beast, particularly in its epiphenomenal form. It’s not clear to me that there will ever be a way to conclusively rule it in or out.

          “Anyway, please do not feel like you have to answer all those questions on his behalf…consider them rhetorical.”

          Sure. I think the only one I might respond to, because it threw me as well, was the point about MOI having no place for ideas. I think it gets at what it means to experience an idea. For your examples, I can’t imagine the world as a whole except by imagining imagery like world maps, with maybe crowds of people blended in. And thinking about the number two throws up an image of a “2”.

          And yet, it’s not like these images stand alone. I know that “2” pertains to the concept rather than just the shape. Or my image of a world map and people pertains to the whole world. At a neuroscience level this might amount to pattern completion, the firing for the pattern “2” triggering a galaxy of related patterns, which from a physicalist viewpoint is what an idea might amount to. But then, we haven’t ruled out ideas. So I find Manzotti’s eliminativism about them hasty. (Although it’s possible space constraints prevented him from laying out more nuance.)
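
          As a purely illustrative aside (a toy sketch of my own, not anything Manzotti or any specific neuroscience model proposes), here’s roughly the flavor of pattern completion I have in mind, as a little Hopfield-style associative memory in Python with made-up sizes and random patterns:

          ```python
          import numpy as np

          # Toy Hopfield-style associative memory: store a few binary patterns,
          # then recover a full pattern from a partial cue ("pattern completion").
          rng = np.random.default_rng(0)
          N = 64                                        # number of units
          patterns = rng.choice([-1, 1], size=(3, N))   # three stored patterns

          # Hebbian weights: each stored pattern carves an attractor into the network.
          W = sum(np.outer(p, p) for p in patterns) / N
          np.fill_diagonal(W, 0)

          # Cue: the first pattern with half of its units blanked out.
          state = patterns[0].copy()
          state[N // 2:] = 0

          # Asynchronous updates settle the state into the nearest stored pattern.
          for _ in range(5):
              for i in rng.permutation(N):
                  state[i] = 1 if W[i] @ state >= 0 else -1

          print("fraction recovered:", (state == patterns[0]).mean())
          ```

          Present only a fragment of a stored pattern and the dynamics pull up the rest of it, which is the kind of thing I mean by one pattern triggering its galaxy of associates.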

          Your mystery sounds interesting! I really need to get back to my own writing. It’s just that work keeps me so saturated these days. Anyway, hope to see your story somewhere.

          Like

          1. “Science tramples on so many intuitions that I’ve long concluded that they need to be backed up with evidence, or at least compelling logic.”

            Haha, science tramples on everything. Science loves to trample, especially on common sense. I’m more inclined to defend my loyal intuitions and let science clean up its own mess.

            As for ideas of the whole world and the number two, what you describe is what I would picture too if asked to imagine those things. I guess “imagine” wasn’t the right word choice. Maybe “think about”? I mean, there’s no way to completely capture those ideas through mental imagery, except by some form of symbolism or representation which I take as such. I don’t see neuroscience really capturing those ideas either, not by observing patterns in the brain. But as you say, a physicalist viewpoint doesn’t necessarily rule out ideas.

            Like

          2. I used “imagine” because when I try to think about something without imagining it, it seems like sensory impressions come into my mind unbidden. I can say I’ll suppress it, but that maybe results in the visual portions being suppressed, but not the audio ones, or in some cases the tactile ones. For example, if I focus on the world while suppressing imagery, what I get are statements I’ve heard people say about it, or maybe that I’ve read.

            Maybe I’ve just been poisoned by Hume, but then his statements along these lines already rang true when I first read them.

            Liked by 1 person
