What does it mean to be “like something”?

When it comes to my philosophy of consciousness, I’ve noted many times that I’m a functionalist, someone who sees mental states, including conscious ones, as being more about what they do, their causal roles and relations, than what they are. Since functionalism focuses on functionality exclusively, it often gets lumped in with illusionism, which typically denies that phenomenal consciousness exists.

But I’ve long been uncomfortable with the “illusionism” label. Aside from the problematic connotations of “illusion”, implying that consciousness is a mistake or something maladaptive, it has always seemed hasty to me to dismiss phenomenal consciousness, at least in the sense of apparent consciousness, of how consciousness seems to us.

However, I’ve recently had a couple of conversations, one with a dualist (or at least non-physicalist) and another with an illusionist. Interestingly, the dualist was on board with the concept of illusionism, even if he didn’t agree with it, and thought I was using incorrect definitions. The illusionist said something similar, and pointed out that phrases like “phenomenal consciousness” and “qualia” have to be assessed in terms of their historical usage, not the literal definitions or etymology of the words. In other words, trying to use those words in a theory-neutral or “innocent” fashion is ignoring too much of the history behind them.

I thought this final point was interesting. As someone who strives to use words in their most commonly accepted manner, and to be clear when I’m not, I decided to investigate.

The history of “qualia” does turn out to be complicated. The singular “quale”, when first introduced by C.S. Peirce in 1866, may have been relatively theory-neutral. But the plural “qualia” introduced by C.I. Lewis in 1929 wasn’t, and the term has had different meanings since then.

Michael Tye, in the SEP article on qualia, identifies the simplest use of the term as being “phenomenal character”, as there being “something it is like” to undergo a particular experience. Interestingly enough, Tye doesn’t associate Thomas Nagel with this meaning, even though he uses the phrase Nagel coined. Instead he associates Nagel with qualia as intrinsic non-representational qualities. As we’ll see below, this may be a distinction without a difference.

The term “phenomenal consciousness” has been around for centuries, but according to Google’s Ngram viewer, its use spiked after Ned Block’s 1995 paper that made the distinction between phenomenal consciousness and access consciousness, indicating most of the contemporary usage refers to Block’s version. Block admits in that paper that he can’t define “phenomenal consciousness” in any non-circular manner, that he has to use synonyms. But he states that what makes a state phenomenally conscious is that there’s “something it is like” to be in that state, and he cites Nagel explicitly.

So we appear to have the conventional contemporary meaning of “qualia” and “phenomenal consciousness” both being based on Nagel’s “something it is like” standard, even though both those terms predate it. It seems like philosophers take the “something it is like” or “like something” phrase to be a theory-neutral or “innocent” way to reference consciousness.

But while “qualia” can be taken from its Latin roots to mean “what kind” (fitting my treatment of categorizing conclusions a few posts back) and “phenomenal consciousness” as apparent consciousness, it’s not clear what “like something” can mean. It seems to express a similarity to an unspecified entity, which taken by itself is meaningless. It only seems able to function as a tag. The question is, a tag for what?

And that takes us to Nagel’s famous 1974 paper: What Is It Like to Be a Bat? Classic interactionist dualism is widely considered to have been dispatched as a reputable intellectual position by Gilbert Ryle’s 1949 book: The Concept of Mind. Nagel’s paper seems to begin a revival of property dualism and similar outlooks, a more modest form of dualism for philosophers unhappy with physicalism.

Along those lines, I think the best thing for me to do is quote what I see as the key passage from Nagel’s paper.

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.

We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons.

I do not deny that conscious mental states and events cause behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis.

Nagel goes on to conduct his famous discussion about how we can never know what it’s like to be a bat, a creature that perceives the world through echolocation.

This is far from a theory-neutral view of consciousness. Going through the paper, I detect a number of theoretical commitments.

  1. Fundamental: The implication is that it’s either like something to be a particular organism or it isn’t; there’s no allowance for it being only partially like something.
  2. Epiphenomenal: At least to some extent, Nagel’s conception seems epiphenomenal, a version of consciousness with no causal effects in the world.
  3. Biocentric: At least in this paper, Nagel’s conception seems to assume consciousness only exists in living things. The implication is that machines can’t be conscious.
  4. Intrinsic: Conscious states are “unanalyzable” in terms of functionality or intentionality. In other words, they’re not representational or relational.
  5. Private: The bat discussion states that we can never know a bat’s experience, no matter how much we learn about its nervous system. So this isn’t a limitation of technology, but a fundamental one.

As a functionalist, I don’t think any of these are true. 1 doesn’t seem to hold up in light of brain injury or pathology cases, mind-altering substances, or evolution. 2 seems incompatible with making any assertions about what might or might not be conscious, which seems to make 3 moot.

4 could be considered true subjectively, that is, we’re unable to analyze these states from within our experience, but I see no reason to assume it holds objectively.

5 could be more plausibly seen as the situation today with the current state of technology, although that’s less true now than in 1974 and is constantly changing. It could also be seen as absolutely true in the far more limited sense that we can never have a bat’s experience. We can never be a bat. But that’s no different than saying my laptop can never be an iPhone. It might be able to have the iPhone’s state in a virtual machine, but it would always be a laptop with a (virtual) iPhone inside, never an iPhone itself.

These all seem like theoretical commitments based only on intuition. By Nagel’s own admission, there can’t be any evidence for them, which also means there can’t be any evidence against them. It’s a metaphysical add-on we can choose to believe in or ignore, without it making any detectable difference in the world.

All of which is to say that the people I was talking with were right. The most common usages of “qualia” and “phenomenal consciousness” are based on Nagel’s “like something” concept. Call me an eliminativist or illusionist if you want, but I think this group of phrases, what Pete Mandik calls a “synonym circle”, refers to a version of consciousness that doesn’t exist.

Of course, I continue to think the functionality we label “consciousness” exists, the mechanisms and capabilities that can be scientifically studied, so there’s no change in ontological view here. But I’ll probably take Mandik’s advice and stop using these terms, at least without careful qualification. They just seem to invite confusion.

What do you think? Are there reasons I’m missing to resist the common usage of these words? Or am I misinterpreting Nagel’s conception of what “like something” means? Or missing something else?


111 thoughts on “What does it mean to be “like something”?”

  1. Hi Mike,

    You wrote, “…it’s not clear what ‘like something’ can mean. It seems to express a similarity to an unspecified entity, which taken by itself is meaningless.” It struck me there are a couple of ways to read this notion of “like something” and that I don’t read it the way you do.

    I could say a bowling ball is like a polished granite sphere, and thus express a similarity. This is the way in which it seems to me you’re parsing the term “like something.” It’s a way of expressing that two entities have similar characteristics or traits.

    But when Nagel says there is “something it is like for the organism” I think he’s simply referring to a notion along the lines of there being a “sense or awareness of something.” We’d typically say something like, “it was so hot out, I felt like I was going to faint.” I could be wrong but I wonder if Nagel felt that saying “felt like” involved the word “feeling”, which brings all sorts of baggage as well. So he just said, “something it is like.”

    But… are lots of people confused by what this means?

    I could see your point in a rigorous sense: if I say “there’s an awareness of something” that’s a pretty nebulous statement because the word something is perfectly vague. And I guess I wouldn’t disagree this is based on commitments, but the only commitment required it seems to me is to accept that it is possible to be aware that one is aware of things.

    The rest, like whether or not silicon-based machines can have this experience, or what kind of gradations are possible and why, or how this works physiologically are all fair questions–and maybe Nagel went too far by implying the “something” he references has particular attributes related to these questions–but why is answering them relevant to agreeing we seem to be aware that we are aware of things?

    Michael


    1. Hi Michael,
      Being aware of our awareness is a reasonable description of what someone could mean by “consciousness”. The problem is I can’t see how you find it in the phrase “like something”. I think you’re projecting your own reasonable sense of how to characterize it onto that phrase.

      That’s been one of my longtime complaints about this phrase. It’s so vague that people nod at each other like they’re talking about the same thing, but there’s no guarantee they are. So are people confused about it? I think a lot are, without realizing it.

      As to what Nagel means by it, that’s why I quoted that passage, so I wouldn’t muddle his meaning. I think we should take him at his word. And I read Block, Chalmers, and Goff, among others, as accepting that meaning, with some deviations in each case. (And of course, Nagel’s views have evolved over the years too. It sounds like he leans panpsychist now.)

      The question is what does any other philosopher, like Michael Tye in the SEP article, mean when they use that phrase? In Tye’s case, he doesn’t seem to mean it in Nagel’s sense, since he makes a distinction between qualia just being about “something it is like” as opposed to qualia that are intrinsic and non-representational. But the problem is that’s what Nagel himself means by “something it is like”. In that sense, I think Tye is confusing the issue with that phrase, as are probably a large number of other philosophers.

      If someone wants to say that consciousness is awareness of awareness, or just first order awareness, sentience, attention, or something else, I think they should just say it. But maybe we wouldn’t be talking about their paper 48 years later if they did. 🙂


      1. Hi Mike,

        Very interesting conversations here, as always!

        You wrote, “The problem is I can’t see how you find it in the phrase ‘like something’. I think you’re projecting your own reasonable sense of how to characterize it onto that phrase.”

        I don’t know another way to use the English language, to be honest Mike. I’ve stated there are multiple ways to read it. I am very curious about this. I mean… I know I cannot claim to be free of projecting my own reasonable sense of what the words mean. But neither can anyone else. This is the difficulty with language that we all face. Words are representations and not the reality they describe.

        I went back and read again your passage from Nagel and I don’t think I was imparting much interpretation. He comes right out and says that we could call this subjective experience. As to how other philosophers use words, I must beg out of any intelligent conversation on it. If Nagel is using words in a very sophisticated way that only a few appropriately educated people (specialists) understand, then he did a good job of serving up something that works on multiple levels I think. And assuredly I’ve misunderstood because I do not possess that specialized knowledge.

        Shifting gears, there’s often a polarization that comes up around these posts on consciousness. We know we each see the world differently. But I continue to think there are many axes (think forestry) being ground on both sides of this topic and I fail to comprehend why several things cannot be true at once: that clearly human consciousness has deeply embedded roots in human biology, and that it may also have its leaves in the sky of something which is fundamental.

        A huge part of the disconnect I see is the requirement that the sky behave like the ground. There is tremendous openness for approaches that fulfill certain requirements, and little for others on the basis of categorical grounds. When you note that those who do not accept reductionist accounts as complete are “…resist[ing], looking for mystery where it can be found, and grounding their hopes that there remains something about us unanalyzable, fundamental, mysterious, that just has to be accepted as irreducibly special…”, I think you are dismissing a great many open questions. I don’t know exactly why you draw lines where you do, and I draw lines where I do, but I do think you undersell the motivations of those who see the world differently than you do.

        On your recommendation to read some Susan Blackmore I’m reading her book on OBEs right now. I’m only about halfway through so this may be premature, but there is a tendency, for instance, to suggest that because people may have been wrong about various aspects of OBEs—drawn conclusions about what they are that they are probably not (tours through the waking world)—the only option is to shift to the other side of the coin. Meaning: OBEs are used by some people as evidence for heaven or for soul travel or what have you, and she notes that in her own experience the style of gutter that buildings possessed was not the style they actually had on closer examination. On this basis (and similar types of evidence; she’s not relying on one account, I know), she concludes the notion that her awareness left her body to tour Oxford during her drug-induced experience must have been false. I agree. And then she seems to be suggesting—(and I haven’t got to the point where she draws final conclusions just yet, but I’m using this as an example)—that on this basis the entire experience was likely generated by particular brain activity.

        Now I think she’s probably right that brain activity correlates to her experience, or she wouldn’t have spent her time on the book, but it seems like people very strongly want things to be one way or the other. Either we have spirits that detach from our bodies and wander around in the same world we examine with our physical senses during broad daylight, and the brain has nothing to do with it (wrong), or everything is reducible to physics and only the brain has something to do with it (also wrong, in my opinion). If we could hold both polarities at once and acknowledge they probably both have much to tell us, I think we’d get to a more productive place.

        She also, so far in the first six or seven chapters, has restricted her research to accounts from persons ensconced in modern western culture, largely in Britain and America. Now if I want to know about quantum mechanics I probably shouldn’t sit down with a Shoshone elder to get my intel. But if I’m correct that a history of practice and an appropriate environment are relevant to the capabilities of individuals when it comes to accessing what may be beyond the brain, but also interactive with it, then she is examining a narrow slice of the potential field.

        It may seem as if I bend myself in knots to preserve something, but I think we all bend ourselves in knots, Mike. And for this reason I’m probably similar to Lee in the sense that neither polarized view (materialism or idealism) cuts the mustard for me. But I think it well could be the case that the metaphorical earth and the sky exist concurrently, and views that hold both as open fields of inquiry make the most sense to me. I don’t think it’s because I just like mystery: science would go nowhere without a longing to transform mystery into knowledge, and I think the same is true of other views.

        What is the reason we cannot align on the notion that transforming mystery into knowledge is the fundamental endeavor we’re all actually engaged in? Each with our proclivities and predilections, assuredly. But driven by the same fundamental desire to understand? Why not?

        I will be frank about my motivation: I think it matters to the quality of our world and our collective future. I think it matters to the quality of all life and has always mattered. I don’t believe I’m in pursuit of being special or above another person or afraid to think for myself. But I do think everyone’s got a piece of this…

        Thanks for the forum to kick around these fascinating topics, Mike!

        Michael


        1. Hi Michael,
          On Nagel’s language, we’re obviously seeing different things in it. The second paragraph in particular makes many theoretical commitments. It does use some philosophical jargon, such as “intentional states” (representational content), but the overall meaning doesn’t seem too buried. Maybe those sentences strike you as self-evident so they’re not grabbing your attention, but from my perspective they’re jarring.

          All I can say is I plan to avoid the phrase “like something” and its variants. (Not that I’ve been a heavy user.) I don’t agree with Nagel’s meaning, and don’t know what anyone else means by it, unless like you, they explain it.

          Definitely we approach the topic of consciousness from very different worldviews. I’m not opposed to seeing multiple things being true at once. There are many levels of description of reality, and they each are useful in their own way. But I do think there are more and less useful descriptions, some that are more predictive and others that aren’t. I don’t think every viewpoint is valid. It means we will disagree at times, which I’m okay with. (There’s no one I agree with 100%, particularly my past self.)

          I tried to acknowledge in that other comment that people like me have to be on guard against dismissing mystery when it’s really there, not overlooking it because it doesn’t fit with current science. I do think there are many who equally need to be on guard from the opposite direction. That’s just reality as I see it.

          On Susan Blackmore, I have to admit I haven’t read her stuff on OBEs. I thought you might find her Zen and meditation stuff interesting, particularly in how she might relate it to consciousness. But I should have warned you that she’s a skeptic, so anything she’s written on OBEs, NDEs, or parapsychology should be approached with that in mind. Hope I haven’t wasted your time.

          Why can’t we align on the idea that solving mysteries is our common endeavor? I hope we can. Mysteries captivate me precisely because they’re challenges to be solved. But often solving them means assessing whether they’re a real mystery, and that’s often as contentious as the mystery itself.

          For me, the motivation is to get the best understanding of reality I can. I do think that kind of understanding matters for the world, but it would be dishonest of me to claim that as my motivation. I just want to understand reality, wherever it leads, even if it leads places that upset me or anyone else. I can’t really explain why.

          As always, I enjoy our discussions Michael. Thanks for making this forum a better place!


  2. How is it possible to employ the word “I” without the recognition that there is, phenomenologically, something it is like to be oneself? No other entity knows what that is like, nor will they ever; thus the use of the undefined “something.”

    It is not only the fact that one cannot experience what it is like to be a bat. One cannot experience any other state than what it is like to be oneself.

    How could an artificial intelligence experience that state? And how could it be said to be conscious if it did not?


    1. Hi Elizabeth,
      Employing the word “I”, as far as I can tell, requires a representation, a model of self, along with a language processing capability including symbolic thought. In other words, the functionality seems to give us everything we need.

      If you’re asking how it can be possible without the intrinsic non-representational unanalyzable thing Nagel discusses, I guess my question is how it can be possible with it and only it? Without just referencing the synonym circle, what does it bring to the table that the functionality lacks?

      On the privacy thing, again, I think we have to be careful not to conflate knowledge of another system’s experience, which is just a technological limitation, with not being able to be that system.

      If my point above about functionality is right, there’s no barrier to AI having consciousness, but as always it will come down to how we define “consciousness”.


      1. You said;

        “I think we have to be careful not to conflate knowledge of another system’s experience, which is just a technological limitation, with not being able to be that system.”

        That’s exactly the point, isn’t it? Technologically there is no limitation on producing a system that is able to “be” another system. Why does that limit exist between human “systems”? How and where in science is that limit being tested?


        1. I actually touched on this in the post. A laptop can’t be an iPhone. We might imagine taking the laptop, reducing it to its constituent materials, and using them to manufacture an iPhone, but then we’d have an iPhone, not a laptop. Similarly, we can imagine a significantly advanced society that takes me, reduces me to my constituent materials, and uses some of them to create a bat. But then all there is is the bat. Nothing of me remains, much less anything human.

          But in both cases the system can have an arbitrary amount of information about the other system. A laptop can have information on an iPhone and how it works. We can gain an increasingly accurate idea of how a bat works, how it perceives the world and makes decisions, etc. We can know its experience to an increasingly accurate extent, so that nothing private remains of the bat’s cognition. We just can’t ever have that experience.


  3. I agree with a lot of what you say — but perhaps not with all of it.

    I’ll start with a quibble about your title. It should read: What does it mean to “be like something”? The “like” refers to the being, at least in Nagel’s title.

    I think we do (or should) have some idea of what it is like to be a bat. Of course, our knowledge of this is incomplete. I don’t know what it is like to be you, and you don’t know what it is like to be me.

    For the bat, we do have a pretty good idea of the kind of information it is getting about its environment. Here, I do not mean the sound pulses. Rather, it is getting information about the location (direction and distance) of objects. And its phenomenal experience really should be the experience of having this information. So I expect it would be rather like our vision, though perhaps only in monochrome. For the bat, the “let there be light” moment would be when it starts emitting the sounds used for echo-location.


    1. Thanks Neil. Point taken on the title, although my goal with titles is to communicate what the post is about rather than be strictly accurate. In this case, I wanted people to recognize the phrase.

      I agree with everything you say about the experience of bats. It could also be argued that the very fact we understand that bats use sonar indicates we have ideas about their experience. And science will continue to give us an increasingly accurate picture in the future. We’ll never have perfect knowledge of course, but perfect knowledge is a false standard, since I don’t have perfect knowledge of my own experience 10 seconds ago.


  4. Frankish just tweeted a paper (https://jaygarfield.files.wordpress.com/2014/01/illusionism-and-givenness2.pdf), probably after you wrote this, discussing this issue somewhat. The gist was, I think, that there is nothing it is like to be something. “Like-ness” can only refer to things out there, some being like others based on perceptions. [don’t hold me to this description.]

    For myself, while I agree the use of “what it’s like” is nigh on pointless, I can see where it comes from, given my framework of unitrackers and pointers. “What it’s like to be a bat” is another way to say “what it’s like to have the set of unitrackers that a bat has”.

    *


    1. Thanks. Yeah, I saw Frankish’s tweet just after hitting publish on this post. Although I had read Garfield’s paper when that JCS issue was published in 2016. BTW, all of those papers are in a book available from Amazon: https://www.amazon.com/Illusionism-Journal-Consciousness-Studies-Frankish-ebook/dp/B077G81RWW/
      Although I think the individual papers are all available for free on the various authors’ web sites, just scattered around the internet. The Mandik paper I linked to in the post is another one from that JCS issue.

      But I’m glad I didn’t see it prior to formulating my own thoughts. I understand what the illusionists are saying, and agree ontologically. Maybe their language is clear to a philosophical audience, but it doesn’t seem clear to the public. Dennett and Blackmore seem to have the best ability to break through, but even they, I think, are mostly misunderstood. Which is why I prefer focusing on the functional / non-functional divide.

      Rather than using “what it’s like” with unitrackers, I think I’d just reference the perceptions and memories of the set of unitrackers the bat has.


  5. Thank you for another interesting blog. I don’t have your knowledge of philosophical history and terminology but you always manage to communicate potentially abstruse concepts so clearly that I can, at least, feel I understand them! A few thoughts arising:

    Something that seems somewhat skirted over in discussions of ‘what it is like to be’ is how remarkable it is that there should be anything singular arising from distributed molecules and a few billion hairy neurons. Why is that not just a bag of body parts with just an approximate location in common and no core single nature? Physically what makes an entity singular is that the parts of its body are physically connected together and mostly therefore have to move around together as they pull and push each other. Mentally, something singular is being created here that can then be used to give that physical entity coherent behaviour as a ‘thing-as-a-whole’. We too much take that for granted because it is so intrinsic to our experience, but it is a remarkable and unusual capability that requires a particular architecture of neural and physical connection and timing.

    For me the point of creation of this single thing is the periodic commitment to a single interpretation of what is happening, a single set of priorities for what’s good and bad for the individual entity, and a single action and attention set that will improve that outcome for the entity as a whole. The single thing that is conscious is then the underlying process that gives rise to that ‘singularity’ (much abused term). The content of consciousness at each update cycle is the actionable representation of the network of sensed state, feelings and action commitments. Built into this is a representation of past and future that gives that single thing continuity in time.

    Are you old enough to remember the old style text adventure computer games? You go north. Now you are in a cave with a jewelled casket and a fierce dragon. To the east is a tunnel. To the west is a lake. To the north is a path deeper into the cave. What do you want to do, go N, S, E or W, open the casket or fight the dragon? The single conscious you is the thing that evaluates these options and tracks their history, driven by the will to succeed in the terms of the game. The content of consciousness is the current state, that lets you know where you are, how well you are doing and your action options. You remember what has happened in similar situations, or in this location, before, which influences your expectation of outcomes, leading to expectation, surprise, arousal and new learning. ‘You’ are the single thing that can evaluate and commit to a course of action. You have a unique perspective because you are always in one place and one time and weighing up the options that will help you prosper in the presence of incomplete information and uncertainty.
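
    In rough Python, a toy sketch of that loop (the location, options, scores, and priorities are all invented purely for illustration):

    ```python
    # One update cycle: a single evaluator holds the current state, scores
    # the options against the entity's priorities, commits to one action,
    # and records it, giving the "single thing" continuity in time.
    state = {"location": "cave",
             "options": ["N", "E", "W", "open casket", "fight dragon"]}
    priorities = {"open casket": 2.0, "fight dragon": -5.0}  # treasure good, dragons bad
    history = []

    def evaluate(option: str) -> float:
        return priorities.get(option, 0.5)  # unlisted moves are mildly appealing

    choice = max(state["options"], key=evaluate)
    history.append((state["location"], choice))
    print(choice)  # -> "open casket"
    ```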

    On this interpretation, ‘what it is like to be’ is how the individual (this decision-making process) characterises where it is in that set of state transitions, and the self is the process that drives action/attention selection and learning. The different levels of self revealed by introspection and meditation are the different layers of that cyclical process. The level of self awareness depends on the extent to which a model of self, and relation of self to world, is part of the state representation.

    Therefore to explain consciousness we need to start by describing the core nature of the individual thing that is conscious. Not a bat, but the action selection process of the bat in the context of its representation of its current internal and external state, and action/attention options. The trouble is that people don’t want to hear they are just a cyclical process driven to survive and reproduce; they feel they must be more! They don’t need to worry, because such an architecture is self-modifying and limited only if it ceases to question (and blog about!) its own nature.


    1. Thanks Peter. I’m grateful for your kind words. Although really this post is a record of how much I didn’t understand about philosophical history and terminology until the last week or so. And of how much many scientists still don’t understand it. For instance, I doubt many of them would use “phenomenal consciousness” if they understood what philosophers typically mean by it.

      I think I agree with most if not everything you say here. Yes, the evolution of animals with central coordinating nervous systems is a remarkable thing. Although it’s worth remembering that even unicellular organisms have internal communication mechanisms to translate sensory stimuli to action, although with far less behavioral diversity than even simple animals.

      You’re right that people don’t like reductionist accounts of what they are. If I could sum up much of the sentiment in the Nagel camp, it’s that we shouldn’t trivialize the human being, or any thinking living thing. You and I don’t see reduction as doing that, but a lot of people do. So they resist, looking for mystery where it can be found, and grounding their hopes that there remains something about us unanalyzable, fundamental, mysterious, that just has to be accepted as irreducibly special.

      Of course, they see us as ignoring that specialness so we can slot everything into a scientifically understandable view. I do think we have to be on guard against that, but not so much that we ignore what knowledge we do have, or accept magical or exotic answers without incontrovertible evidence.


      1. Something could be fundamental but not “unanalyzable … mysterious, that just has to be accepted as irreducibly special.” The difficulty of explaining it in any other than circular terms might be an indication that it is in some way fundamental.


      2. There is nothing “magical or exotic” in wanting to understand what consciousness is or in being aware of what it is not. Words like that trivialise the effort to understand by those who do not adhere only to a “functionalist” perspective. They are incorrect and insulting.


  6. Hi, great post. I have actually no comments on it. I guess I agree with you. I just came out of the hospital after an operation. The evening food cart had a certain eastern smell. It had so many spices and all in all it was just a weird smell. Then the second night I farted and, lo and behold, I had to smile because there was this big smell of that food cart. But then in an instant it became clear the diaper of my neighbour had slipped off and it was her smell. So the smell I could well tolerate and thought was funny one second ago in an instant became absolutely puking gross.
    I have thought about this. What does this mean for consciousness? I don’t know. The brain really produces a conscious experience based on what it thinks something is and what information it has. Panpsychism seems not to work here, because intrinsically the smell had no objectivity. The brain changed it completely depending on whether it was my smell or my neighbour’s smell. I was curious what you had to say about this Mike. Do you also think the brain produces consciousness, or not? Greetings


    1. Hi Oscar. Thanks. Hope your recovery is going well.

      That’s an interesting question on the fart smell. It goes to how much our perceptions are shaped by things other than the sensory stimuli. Do you have a conscious perception of the smell, and then judge whether it is pleasant or gross? It would be strange to say so, since they’re not distinct in your memory. It seems more likely that the experience of the smell is affected preconsciously by all the affective reactions.

      It’s similar to how an acquired taste is perceived. The first time we taste something like that it typically tastes awful, but then we learn to enjoy it over time. Interestingly, growing up, black coffee tasted like liquid asphalt to me. But in college, mostly by necessity, I learned to enjoy the taste of it. About fifteen years ago, I gave up coffee for a few years. When I started drinking it again, the taste of black coffee was again awful, and I’ve never reacquired the taste for it, since I have time and money for cream and sweetener now. So was I wrong about the taste during the years I enjoyed it? Or am I wrong now about it? Is there a conscious taste before I judge it tasty or nasty? It doesn’t seem so. The conscious taste seems laden with that judgment prior to my conscious experience of it.

      What does all that mean for the idea that our experience is fundamental and irreducible? If so, which parts are the fundamental ones?


  7. Nice summary of the issue.

    I think Nagel’s “something it is like” is primarily an attempt to evade the circularity problem in trying to define consciousness. It probably doesn’t fully succeed but then other approaches don’t do much better.

    I always thought of the “something it is like” as tying directly to the idea that consciousness is somewhat species specific. This should be obvious from the bat in the title where the concept was introduced. Every species comes with a set of neurological and sensory capabilities that determines the qualities of the subjective experience. Of course, those capabilities can still vary between individuals of a species. Humans can be deaf or blind and still be human. They can be inebriated and still be human.

    We’ve had the functionalism debate before. Since consciousness is subjective experience, attempts to define it by external observable behavior completely miss the mark. Attempts to define by some set of internal functional capabilities usually fall into the circularity problem.


    1. Thanks.

      I think you’re right that each species’ experience is unique. Each species has its own umwelt, its own self centered world, enabled and constrained by its evolutionary affordances, sensory modalities, and other factors. To say that species X is conscious is, I think, to recognize enough similarities between us and them to regard them as like us. But this is always unavoidably an anthropocentric endeavor. And we have to be very careful not to project the full breadth and depth of our own experience on them.

      Definitely we’ve debated functionalism before, and we shouldn’t expect resolution anytime soon. My view of it though isn’t that it’s circular. I think it’s the approach that enables us to break out of the circularity. But that requires accepting that the breakouts are legitimate, that the correlations between an experience and the functionality do more than just reliably coincide, that they’re different perspectives on the same thing.


  8. I don’t know Mike; these circular arguments that you and folks like Frankish keep posting to reinforce your own position of functionalism are getting old. In agreement with Roger Penrose, I think the one thing that is missing from the conversation regarding the function of computationalism is the notion of “understanding”. Understanding is a big deal and I can’t for the life of me “understand” how information processing can produce anything other than more data; it cannot produce sentience. Garbage in, garbage out……..

    The positive (+) and negative (-) poles of two magnets are not processing information when placed close to each other; they spin because they don’t like the feeling (sentience) generated by similar poles, and then slam into each other and stick together because opposing poles feel better. It is a direct, immediate action called relational mechanics. Something causes this motion (spinning and movement) that results in the form (the two sticking together). Are the magnets “aware” of what feels good on a linear scale of value in contrast to what feels bad on that same linear scale? Absolutely not; this is because, according to my model of pansentientism, sentience is the substrate of matter, which makes it ubiquitous and universal.

    I’ve recently posted on Wyrd’s site about my theory of pansentientism; and he, like everyone else I engage with, keeps insisting that “their understanding” of my theory is derived from me providing enough information to convince them that it has validity. But that is not how understanding works; understanding is derived from metaphorically walking over to that rabbit hole of pansentientism and at least looking down that hole for oneself; further understanding is derived from literally journeying down into that hole.

    We are so used to being intellectually spoon fed from the so-called experts that we’ve completely lost contact with how understanding is even derived. If one wants to know what it’s like to be a bat, one has to be a bat, that is the only way “understanding” is achieved. As long as one chooses to distance oneself from the rabbit hole of new ideas and make judgements about what is down those holes from one’s own metaphysical position………… Seriously, do I really need to finish that thought?


    1. Lee,
      What do you see as the circle of reasoning that I’m in? For example, I mentioned the “synonym circle” that Pete Mandik talked about, where when asked to define “qualia” someone will often say “phenomenality”, but when asked to define that, they’ll say “like something”. When asked to define that, they’ll often refer back to one of the other words. Or go into a discussion like Nagel’s with intrinsic unanalyzable concepts.

      I perceive functionalism to avoid a circle like that by demonstrating an equivalence between a perception or feeling and functionality, and then being able to reduce the functionality. The example I often use is a software bit. A bit within the operations of the software is fundamental, irreducible. But a bit is typically implemented by something like a transistor or vacuum tube, which of course has components and can be further reduced.
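
      A toy sketch of what I mean (the classes, voltages, and thresholds here are invented purely for illustration):

      ```python
      from abc import ABC, abstractmethod

      class Bit(ABC):
          """At the software level, a bit is primitive: just 0 or 1."""
          @abstractmethod
          def read(self) -> int: ...

      class TransistorBit(Bit):
          """One realization: a voltage level across a transistor."""
          def __init__(self, voltage: float, threshold: float = 0.7):
              self.voltage, self.threshold = voltage, threshold

          def read(self) -> int:
              return 1 if self.voltage >= self.threshold else 0

      class VacuumTubeBit(Bit):
          """A different realization: current through a vacuum tube."""
          def __init__(self, plate_current_ma: float):
              self.plate_current_ma = plate_current_ma

          def read(self) -> int:
              return 1 if self.plate_current_ma > 1.0 else 0

      # The software only ever sees read() -> 0 or 1. The bit is irreducible
      # at its own level, yet fully reducible at the implementation level.
      for bit in (TransistorBit(1.2), VacuumTubeBit(0.3)):
          print(bit.read())  # -> 1, then 0
      ```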

      I think most physicists would say that your description of the magnetic poles does actually involve processing of information. Granted, at this level we’re talking physical information rather than semantic information, but “information” is the term they’ll typically use.

      So if we can’t achieve understanding of pansentientism by having you describe it to us, then what are you asking us to do? If the rabbit hole isn’t a description of the philosophy, then what does it mean to go down that hole?

      I second James’ question on what you mean by “understanding”. That word, to me, implies a model that allows us to make predictions related to the subject matter that are, at least to some extent, accurate. Of course, that requires taking in information. What does “understanding” mean to you?


      1. “by demonstrating an equivalence between a perception or feeling with functionality”

        The reason I think it is circular is you started with ‘perception or feeling” so you are already into the circularity before you even begin to define the equivalent function. In other words, you are looking for the equivalent function for something you can’t define without referencing consciousness itself. Hence, the function must in some way encompass or involve consciousness itself or it wouldn’t be equivalent.

        Try defining the function without referencing anything that implies or is entangled with consciousness itself. Embedded, embodied world simulation might encompass part of it unless we considered that the same as being like something.

        Then we get to the understanding or knowing part, but I don’t think understanding or knowing has to do with accuracy. I might know the sky is falling. I would be wrong, of course, but my knowing might drive me to an action like going inside. So I think knowledge, even if incorrect, that can be acted on may be part of it, even though we can probably think of dozens of things we know where it would be difficult to discern an action component.

        The agency component in a biological organism likely has a lot to do with why perceptions and feelings are like they are. The hotness of the stove makes us withdraw our hand from it.

        Embedded, embodied world simulation with an ability to act enabled by the simulation for some benefit to the entity itself. It makes sense that evolution would gravitate towards this sort of solution by natural selection with unique experience varying by species and ecological niche.


        1. I think we have to start with the psychological terms in order to acknowledge what it is we’re working to reduce. The key is not to stop with “perception” or “feeling”, but then to proceed with a description in terms that are not those things. For perception, we can talk about predictive models formed from sensory data. (“Sensory” I think is ok here. There are lots of technological sensors.) For feeling, we can talk about a quick automatic draft assessment of a situation, which can then be used in action scenario simulations and possibly overridden (although unfortunately, due to the evolutionary history, not easily dismissed).
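
          As a crude toy sketch of that decomposition (the labels and numbers are invented for illustration, not taken from any model):

          ```python
          from dataclasses import dataclass

          @dataclass
          class Percept:
              label: str        # a predictive model's best guess from sensory data
              confidence: float

          def fast_appraisal(p: Percept) -> float:
              """A 'feeling' as a quick automatic draft valence (negative = threat)."""
              return -1.0 if p.label == "snake" else 0.1

          def deliberate(p: Percept, draft: float) -> str:
              """Action-scenario simulation can override the draft, not erase it."""
              if draft < 0 and p.confidence < 0.5:
                  return "look closer"  # override the flinch: maybe a garden hose
              return "withdraw" if draft < 0 else "carry on"

          p = Percept("snake", confidence=0.3)
          print(deliberate(p, fast_appraisal(p)))  # -> "look closer"
          ```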

          Of course, someone can insist that a “perception” or “feeling” isn’t fully covered by these descriptions. But then they should be prepared to articulate exactly what is missing. If they can only do that using synonyms of the original term, then they’re just refusing to consider the reduction.

          If someone runs inside because they think the sky is falling, it seems strange to say they “understand” the situation. Although I suppose we might say their “understanding” of the situation is wrong, which in my conception would be to say that they have predictive models that are making the wrong predictions, and so causing them to make decisions like running inside.


      2. I’ve got to chuckle a bit here Mike because your concerns are exactly what I would expect from a highly complex computational algorithm that just doesn’t get what understanding means. I think Cross does a pretty good job of addressing most of your concerns as I’m in agreement with what he stated.

        “So if we can’t achieve understanding of pansentientism by having you describe it to us, then what are you asking us to do? If the rabbit hole isn’t a description of the philosophy, then what does it mean to go down that hole?”

        Any new, original, innovative idea is just words; it requires an investment of personal capital on one’s part to gain understanding. The fundamental reason one cannot understand a new idea by simply reading the words someone else provides is because, first of all, those ideas will be filtered through the prism of one’s own metaphysical assumption to see how they conform or correspond to that assumption; for the most part, that new idea will be rejected simply because it does not support one’s own confirmation bias. That assessment is not my opinion, it’s proven science. And second, any new, innovative idea is constantly in a state of flux as it is being developed because it is just that; it’s new, it’s innovative, it’s complex, it’s got a lot of moving parts and there is nothing else to contrast it against.

        But, if you are in the camp of those who do not believe that “genuinely original thoughts are rare, if not non-existent”, then the entire notion becomes moot.


        1. Lee,
          It is true that we all evaluate new ideas in terms of how we already understand the world. I’m not sure we can really do anything else. We’re all Bayesians to at least some degree. The question is what we require to change our world view enough to accept the new idea as plausible, or even true.

          In my case, I can accept some pretty counter-intuitive ideas (see my blog archives), if they fit the data well. It’s tough to do that without the actual idea itself. But it’s totally up to you when you think you’re ready to present it. Until then we can only react to the snippets you reveal in conversation.


          1. Yeah, I kind of stumbled onto the idea after I spent a great deal of time engaging with the idealist community. Those folks are highly intelligent and present some very challenging questions that deserve answers. After spending a great deal of personal capital exploring the “what ifs” of idealism, I came to the conclusion that the model is untenable. The notion of panpsychism doesn’t work either and materialism is naive realism at best. Substance dualism is an artifact of religious superstition, property dualism is an intellectual maneuver meant to avoid the superstition label and neutral monism avoids the question altogether. So as I see it, if all that we have to work with is the standard junk metaphysics, it’s time to move on and look elsewhere for answers.

            Just another snippet here: what the model of pansentientism provides is a framework that does not deny the existence of a material physical universe like idealism does for example. So in that sense, the context of the idea is limited to materialistic architecture. Of course, the idealist community will want to raise compelling questions like an irreducible imperative for example or an ontological primitive, questions which my overall metaphysics does address. But from a pragmatic aspect, we do live in a physical universe and I truly believe that the conundrum of mind is a physics based dynamic, so that is where the focus should be.

            For example: Roger Penrose is convinced that mind is a quantum system and so am I, although I do not necessarily see it as quantum for the same reasons that he does. His reasons are certainly valid and I do not dispute them, but there are other more compelling reasons that are intellectually logic based, not just limited to Gödel’s incompleteness theorem.

            Change is difficult because our metaphysical model of the world essentially defines who we are as a self-model; the intrinsic need for self-preservation and the need for a sense of control enhance the prospects of survival both physically and psychologically. It’s a psychical dynamic more than anything, so one must be cognizant and respectful of those psychical needs. As an individual, I know that I’m not going to change the landscape of the consciousness debate nor do I have the desire; but these back and forth point and counterpoint arguments help me to develop a model that actually works, is 100% inclusive and has zero contradictions; and that is cool as shit.


  9. I’ve always thought that the term “illusion” or even “delusion”, when applied to “consciousness” or “free will”, is not saying that no phenomena exist that we call consciousness and free will, but that the feeling or perception of what they are is illusory. It’s more obvious for free will: it feels like we can make snap decisions willfully, under the control of a conscious will, but that has been increasingly shown to be at least suspect, and probably false: our will isn’t free, certainly not free of physical causes, and is free only in a constrained way, in that localised autonomous action is still caused overall by a confluence of causal components, internal and external. Similarly, consciousness feels like a free floating mind, perhaps because the conscious processes do not rely on identifying or sensing the individual low level physical neurons that create them. The *feeling* of having a will that is actually free of physical causes, and the feeling of a free floating conscious mind, are the illusions and delusions, not the physical phenomena themselves. I would class these illusions as I would “out of body experiences”, where the subject feels as though they are out of body, but the experiences are happening entirely between the ears.


  10. Your post does highlight the problem with philosophy. If you try to explain something without reference to old philosophy, it’s precisely because the terms are so weighted with a historic plethora of meanings, often contradictory, not only between one philosopher and another but also among the modern philosophers who read them, that it’s easy to be led down a rabbit hole of incomprehensible meaning.

    But, in trying to avoid the pitfalls of the past, and describing ideas only in modern terms, it is common to hear that such and such a philosopher already covered this … and what you then find is that your interlocutor has bought into such a position and ignores the many advances in science and philosophy that make the ancient ones not only out of date but misleading.

    Of course old philosophers all have their merits in their own right, for what they contributed in their time; but it’s like talking about relativity and quantum theory today only to be met with “Well, of course Newton covered all this and so clearly relativity can’t be right.”

    To ignore late 20th and early 21st century neuroscience on consciousness is like ignoring modern astronomy and cosmology and referring only philosophers that supported the geocentric view of the solar system.

    Why would it make sense to pay too much attention to philosophers who existed before or at a time when neurons were just being examined in detail? Even philosophers of the early 20th century were still mostly uninformed about the brain.


    1. I think I agree with just about everything you say here Ron.

      Illusionism is often separated into weak illusionism and strong illusionism. The weak variety says that consciousness exists but isn’t what it seems to be. The strong variety says that it doesn’t exist, or more specifically, that a certain type of it, such as phenomenal consciousness, doesn’t exist. Of course, a lot depends on how we’re defining terms like “consciousness”, “phenomenal consciousness”, and “illusion”.

      Susan Blackmore has started using “delusionism” to discuss her particular variety of illusionism. In short, she questions whether we’re conscious when we’re not actively introspecting. The difficulty is in how to test this. It’s like trying to see if the refrigerator light is on when the door is closed by opening the door, or to use an analogy from William James, trying to turn up the gas light to see the darkness more clearly.

      The main thing to understand about most illusionists is that they don’t see the word “illusion” as a value statement. Daniel Dennett is a free will compatibilist, but in his 2017 book he admits that he thinks traditional free will is an illusion, albeit a vital one.

      Yeah, language is the primary obstacle in philosophical discussions. I’ve said for years that a significant portion of philosophical disagreements are people arguing past each other with different definitions. I’m often struck by how resistant people are to acknowledging this, as though making a concession that there are different definitions is somehow giving up something.

      Unfortunately science isn’t immune. Physics is at least anchored by its equations. But the higher we go up in organizational levels, the more the definitional issues arise. Ask a crowd of neuroscientists their definition of something like “emotion” and you’re going to get a variety of answers, leading to debates about where in the brain emotions happen when the participants aren’t talking about the same thing.

      I do think there is value in paying attention to at least some historical philosophers. William James and Gilbert Ryle, I think, have aged fairly well. Not that I’ve spent a lot of time reading their stuff. I agree the more recent stuff, taking into account the newest science, is going to be more fruitful.


  11. Great post. I have always wondered whether becoming a bat would be any different to the ‘something’ I experience as a human. There are two possibilities it seems. (a) First, there is nothing that it feels like to be a bat. In other words, it is a philosophical zombie. So I return to my human body with absolutely no knowledge of what my brief experience as a bat was like. For all I know, I may have been a bat in the brief second I paused between this sentence and the last. (b) Second, being a bat is exactly the same as what I am experiencing now as a human. Sure, bat flight and echolocation seem like cool traits to have, but we’ve all been in an airplane and we don’t essentially change as humans when we take off. Having echolocation is probably no different to a blind person regaining sight.

    Is there something in between this nothingness or all-ness? I don’t think so. If there is, how could we describe it as humans? What use would it be to us?


    1. Thanks.

Interesting questions. Suppose we had access to a bat’s nervous system via some kind of implant, and so could feel everything it felt or experienced. That would tell us a lot. But we still wouldn’t be experiencing batness as a bat, but as a human who had gained access to the bat’s experience.

Although it’s interesting to think about what kind of design decisions we might have to make with such an implant. How do we map its perception to our own so that we can even comprehend what we’re receiving? The very act of doing that might complicate our effort. We might imagine numerous competing mapping protocols, with various camps arguing for the merits of their favorite mapping and the issues with others. But it’s not clear there would ever be a strict fact of the matter about which was “right”.


  12. Hey SelfAwarePatterns (i.e. Mike),

    It’s Alex Popescu, we spoke earlier on Emerson Green’s blog. I just encountered your blog (through an intermediary, DisagreeableMe), so this post is great timing on your part.

To pick up a bit from where we left off, I would say that Nagel’s ‘what it is like’ phrase and all the other terms in the ‘synonym circle’ are of course meant to be intrinsic, and as such will not be definable in any relational way. Therefore, it seems wrongheaded to even ask for a definition of Nagel’s phrase. This doesn’t mean that these terms are meaningless, only that they can’t be given meaning in an extrinsic sense. Nor does it mean that they lack content, just that their content can only be captured by being intrinsically acquainted with the conscious experience (e.g. by ‘having’ the conscious experience).

I want to go out on a bit of a limb here, but I’ve been reflecting on this matter a little more since our last conversation, and I can’t help but feel that the difference between physicalists and non-physicalists might be mostly owed to a difference in conceptual schemes. I suspect (although of course can’t be sure) that when many physicalists are conceiving of functional and/or physical states, they are not fully abstracting away all the phenomenality/intrinsicality. If you conceive of physical states as partly intrinsic, then it’s no wonder that you might feel that physicalism makes sense.

What do I mean by not fully abstracting? Well, the idea is that when we conceive of something, anything really, we are picturing it/conceptualizing it in our virtual mental model. So, for example, when I conceive of an algebraic expression, I might have a visualization of numerical characters ‘flowing together’ in some virtual space in my mental model. Similarly, when we conceive of physical states, we might have a representational visual model in our head of the brain. Even if you’re not a heavy visualizer (or have aphantasia), there’s still going to be some particular mental content that’s associated with your thoughts.

    The idea is that this mental content is not supposed to be in the object of thought itself though. So, if we want to imagine what the actual physical/functional state in the world is, we’d have to completely abstract away all of the mental content that makes up those representational states in our mind. Of course, it seems at that point like there’s nothing left there. And that’s the point that Chalmers and others (like me) are trying to make, functional/physical states are completely empty of any content (that’s how I conceive of them at least). We can’t help but conceive of them as being full of some content of course, but that’s merely an epistemic limitation, and not something that is supposed to represent the ontological reality. Thus, the dualist concern is simply that it seems like our mental content is real (we are having experiences), and by definition none of this content could exist or be experienced if the world was physical. Because by definition, physical things are empty abstractions.

    Going back to the extrinsic vs intrinsic definition examples. I would simply say that extrinsic definitions are just those which refer to the relations between the completely abstracted ‘bits’, and the intrinsic part refers to the content itself. By definition, we can’t really conceive of what an extrinsic component is, we are just supposed to imagine that this is what the external physical world really is like (there’s nothing ‘there’ there), even if the very process of imagining makes it intrinsic (but again, this is just an epistemic limitation).

    If my suspicions are correct, and some physicalists are not actually fully abstracting away in their discussions of physicalism, then I feel that Chalmers et al. would be more than happy to call themselves physicalist (though in their lingo, this type of physicalism would really be panpsychism)!

I wonder if you feel that this might capture some of the conceptual confusion on both sides of the debate. Maybe we’ve been much too focused on the discussion of what the mind is like, when perhaps the real disagreement lies in differences over what we think the rest of the physical world is like.


    1. Hi Alex,
      It’s good to see you here. Welcome! I enjoyed our conversation and look forward to having more.

I think you nailed Nagel’s phrase, and managed to capture aspects of it I struggled to articulate. Reading your description, another trait that jumped out at me is ineffability. In this view, we’re talking about something that can only be experienced, not described or related to anything else.

I suspect this intrinsicality is something physicalists like me tend to skate over, without appreciating what’s actually being said. I know I didn’t appreciate it the first time I read Nagel years ago, and might not have this time if our conversation hadn’t primed me to look for it. Non-physicalists, on the other hand, probably just accept it as obvious.

      I’m not sure if I’m fully grasping your abstraction question. I might have to ponder it for a while. It doesn’t seem to be clicking for me.

      Although it’s interesting you use the word “abstract”. From my functionalist perspective, I’ve been wondering lately if that isn’t a better word, a less value-laden one, to describe the fact that the brain’s models omit many details, including in its models of self. The details that are retained are optimized for the evolutionary reasons those models developed, which unfortunately didn’t include understanding the architecture of the mind. The missing details are what create the appearance of an explanatory gap.

But getting back to your question, I wonder if it relates to something I’ve seen from both Chalmers and Goff, what at least one of them called “the hard problem of matter”. The view is that physics explains matter in terms of its extrinsic relations, in terms of what it does, but never tells us what matter is, about its intrinsic nature. My view is, if matter does have these intrinsic properties, it’s not clear how we could ever know about them, since knowledge always seems to come from interactions of some type. And questions about what something is almost always seem to eventually reduce to what it does.

      But I think every physicalist has to acknowledge we only understand reality to a particular point. At that point, we seem to be faced with brute facts: spacetime, energy, quantum fields, etc. Of course, that point has historically shifted many times, where something we thought was fundamental turned out to be emergent from a yet lower level of reality.

      Although maybe you mean abstracting all content out of conscious awareness? Are we talking about a contentless awareness? I know many meditators say those are real states. But of course they’re depending on their own introspection to make that claim. I don’t doubt there are states where there’s no introspectable content. But as long as we’re alive, the body seems to maintain its homeostatic feedback loop with the brain, creating what Simona Ginsburg and Eva Jablonka call a constant “buzz”, the feeling of existence.

      I don’t know if any of that is getting to your question. I do frequently find that philosophical disagreements amount to differences in definition, emphasis, or perspective, so I’m open to the possibility.


      1. Hey Mike,

By abstraction, I mean completely taking out all the content that forms our conscious awareness, yes. Not just the sensory experiences of our five modalities, but even that leftover “buzz” that you describe, which forms the feeling of existence itself. I am almost certain that when other anti-physicalists like Chalmers conceive of functional and/or physical relations, they just conceive of them as being relations between totally empty abstractions. So, our experience or conception of a table might be some mental model in our head, which carries with it a visual imagination (a 2D or 3D visual construction) as well as maybe a feeling of spatial awareness among the component parts. But the physical table itself is nothing like this, it has no spatial or component physical parts which are ‘like’ our mental content. Of course, anti-physicalists speak of the physical table having parts and taking up space, but when they do, they are just speaking of the relations between the empty abstractions.

Similarly, “sensory experience” is just shorthand for particular neural physical states, which in turn are shorthand for particular relations among certain (empty) abstractions. All of these physical terms, like “sensory experience”, are extrinsic in the Chalmers/Goff lingo, precisely because they refer to these empty relations. And they are empty, because we’ve abstracted away all the mental content. Whereas the intrinsic part (which is meant to be captured by the terms in the synonym circle) refers to the content of our actual experience. So, when we speak of phenomenality, we are in a sense speaking of ‘everything’ (that exists in our virtual model). Everything that is imaginable is, by conception, phenomenal. I think this is also why many anti-physicalists find the physicalist program so bizarre, since if it were to succeed, it would mean that reality is just a bunch of relations among totally empty abstractions (like I said earlier, there’s nothing ‘there’ there).

        But maybe physicalists don’t perceive the physical world to be an empty abstraction, and so maybe this is all much ado about nothing.


        1. Hey Alex,
          It sounds like you’re saying we could remove all the mental content but still have a functioning physical system in the brain. I think from a physicalist perspective, all mental content is physical. (Although its physical form is far from obvious from the internal perspective.) So removing it means altering the physicality of the brain. Now, we might imagine that none of that content is currently being activated in a particular moment. But to have an absence of all content in a physical neural network seems like it would be incompatible with the network continuing to function. So in that sense, I’d say we can never achieve the full abstraction you’re envisioning, at least not while staying conscious in some sense.

          Hope that helps. Sorry if it missed your point.


          1. Hey Mike,

            That’s not really what I’m saying. Although in answer to your query, I think non-physicalists are happy to concede that mental states supervene on physical ones, and that you can’t change mental states without changing physical ones, because it’s just not physically possible. The issue is just one of description; we can exhaust the physical descriptions without ever describing the mental content. And we can leave out our descriptions of the mental content while still fully describing the system.

Anyways, back to my point. What I’m actually saying is that the non-physicalists view non-phenomenal terms (e.g. those that refer to physical systems) as empty abstractions. We all agree, physicalists and non-physicalists, that we have mental content and that we use this mental content to represent or describe physical systems in the external world. The question is, what is the content of the physical system itself? Do physical systems have some content (meaning there’s something that it’s like to be them), and does this content potentially approximate our mental content? And if we had a super accurate mental picture of the world, would that picture basically be analogous to what the physical system actually was?

            As an analogy, picture a set. Imagine that the content describes what’s in our set, and the relations describe the abstract mathematical relations and interactions of the different sets. Non-physicalists conceive of physical matter as analogous to an empty set. It still participates in relations, which are describable by the laws of physics, but it has no content itself. It’s not just that it has no mental content (we all agree with that), but that it has no physical content which resembles our mental content.

This is just another way to restate the hard problem of matter. That’s why Nagel uses the “what it’s like” phrase. He doesn’t just mean the phrase to refer to particular conscious systems, in the sense of “what’s it like to be a brain/mind”, as you seemed to be hinting at. He’s denying that it could be like anything to be a table or chair (unless you subscribe to panpsychism), because by definition they’re just empty abstractions. Hopefully this also explains my previous answer to your reply. If physical systems are just empty abstractions, then of course any description of such systems won’t describe our mental content. It also explains things like qualia inversion. If physical systems are like empty sets, then in theory you could have two of the same physical systems with different mental content. The relations between the sets (which describe the physical interactions) are the same, but the stuff inside the sets (the mental content) is different.
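If it helps, here’s a toy Python sketch of the inversion idea (every name in it is invented purely for illustration): two systems with identical relational profiles but different internal ‘content’.

```python
# Toy sketch (all names invented): two systems with identical relational
# profiles ("the physics") but different internal content ("the qualia").

class System:
    def __init__(self, content):
        self._content = content  # the intrinsic "filling"; never exposed

    def respond(self, stimulus):
        # Observable behavior depends only on the relations, never on
        # what the content "is like".
        return len(stimulus) % 7

red_seer = System("RED-ish")
green_seer = System("GREEN-ish")

# No third-person test distinguishes them: same inputs, same outputs.
for s in ["tomato", "grass", "sky"]:
    assert red_seer.respond(s) == green_seer.respond(s)
```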

            The reason I’m bringing this all up is because all of the above is and always was very clear to me. But I’m wondering as of late whether physicalists approach the problem in the same way. Maybe from your point of view, a physical table is not just an empty abstraction. Maybe you think the real table in the real world looks/feels something ‘like’ our imaginary table in our virtual model (assuming we have a veridical model).

            I hope this all makes sense. No worries if it doesn’t, we can drop the conversation.


          2. Hey Alex,
            I think I’d say that we can’t exhaust a physical description of the brain without a description of the mental content being embedded within it, although that doesn’t mean it would be obvious. So in principle, you should have everything you need in that physical description to build a description of whatever mental content was there. Of course, that wouldn’t be true of another physical system like a chair, because a chair doesn’t have the necessary information architecture (at least not in any meaningful sense).

You ask if there’s something it’s like to be a physical system. I’m really not a fan of that phrase. What I think is that no physical system has what Nagel describes in his paper, although at least some brains have self-referencing models that imply they do. When I say “mental content” above, that’s part of what I’m referring to.

            For a physical table in the real world, it doesn’t seem meaningful to speak of how it looks or feels without reference to us. By “us” I’m referring to any system considered to be conscious. Without a system that can model it and have visual perception of it or somatosensory interaction with it, it has no look or feel. It’s like whether the proverbial tree falling in the forest makes a sound. If we define “sound” as air vibrations, then yes it does. But if we define “sound” as a perception, then unless there’s something nearby that can perceive, it doesn’t.

            I feel like I’m probably still missing the mark. But maybe there’s something in all that addressing what you’re wondering. I suspect part of the answer is that we just think about all this very differently.


          3. Hey Mike,

The reason I keep pressing the “what is physical content” line is that it’s crucial to answering the question of whether a physical description can exhaust or encapsulate mental content.

Changing topics a little bit, the problem with reducing the mental to the physical is that mental content doesn’t seem to be just a set of physical descriptions, whereas terms for weakly emergent physical phenomena (like ‘wetness’) really do just refer to complex sets of descriptions of microphysical phenomena. When we describe the heat of a fire, for example, we might describe it by referring to the macro-level changes it induces (e.g. it makes the metal glow and expand). This is true even for medieval peasants who had no awareness of the possibility of reducing heat to kinetic properties. Similarly, the micro-level vibrations of molecules are also describable in terms of the same physical effects (e.g. radiative emission, spatial expansion) at a different level of scale. Hence, talk about the heat of a fire is reducible to talk about micro-level kinetic properties because the macro and micro difference is purely quantitative.

            But if we wanted to draw an analogy with mental-physical reductionism, then our experiences should be engaged in the same processes as our physical substrates are, just at a different level of scale. And yet it seems like this is not so. When we describe our experiences for example, we aren’t merely describing physical effects at some higher level of scale. Rather we use phrases like “it hurts”. Such descriptions of pain are clearly not just a set of descriptions of what our neurons are doing. That’s because we wouldn’t be able to add up the descriptions of what our neurons are doing, while realizing “ah yes, things are gradually starting to become more painful”. But it seems like we can add up kinetic descriptions of microproperties and realize “the metal is starting to expand and glow more and more etc”. This is precisely because descriptions of micro-scale effects refer to their individual spatial and radiative properties. The description of the macro-effect is ‘built in’ to our micro-level properties, just at different levels of scale.
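To see how purely quantitative that kind of bridge is, here’s the standard kinetic-theory relation for a monatomic ideal gas, sketched in Python (the specific numbers are merely illustrative):

```python
# Standard kinetic theory for a monatomic ideal gas: <KE> = (3/2) k_B T.
# The macro description (temperature) and the micro description (mean
# kinetic energy per molecule) differ only by a rescaling.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_mean_ke(mean_ke_joules):
    return (2.0 / 3.0) * mean_ke_joules / k_B

# A mean kinetic energy of ~6.2e-21 J per molecule is room temperature.
print(temperature_from_mean_ke(6.21e-21))  # ~300 K
```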

            If we took the reductionist program seriously, then we would have to admit that this phenomenal language (referring to our inner experiences) is completely faulty. Our mental terms don’t refer to the things we think they refer to. But once you admit this, I would argue that you then have to admit that physical descriptions don’t fully exhaust the mental. It’s not merely a case of it not being obvious either. It also wasn’t obvious to the medieval peasant how we might reduce a flame to kinetic motion among particles, but it is at least conceivable so long as the difference in descriptions is purely quantitative (about the same thing at different levels). But descriptions of mental and physical content are not about the same thing, so reduction is impossible even in principle. It’s similar to how it is impossible to keep adding positive integers and reach a negative number. It’s really conceptually impossible, and not simply a problem of the imagination.


          4. Hi Alex,
            So when thinking about how to translate the mental to the physical, I do think it’s important to accept that mental phenomena aren’t what they seem. Conscious percepts are models constructed by the brain. You’re right that we can’t reduce those models to the operations of the brain, but that’s true only from within the system, that is, only subjectively. Objectively the limitation shouldn’t apply. (Although the effort to do so is obviously a work in progress.)

The reason is that these models are abstractions. They omit details that aren’t relevant for their effective use by the system in the manner that drove their evolution. We can’t use a typical roadmap as an effective guide to the topographical terrain. It’s just missing too much of the relevant information. But if a roadmap is all we have, then we just keep using it, and keep being reminded of how often we can’t reduce it to that terrain. Likewise, mental content is useful for day to day life, but not for understanding the brain’s operations.

            (For a description of the kind of abstraction I’m talking about here, see: https://en.wikipedia.org/wiki/Abstraction_(computer_science) )
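Here’s a minimal sketch of the kind of omission I mean, with names invented purely for illustration:

```python
# Toy model of abstraction-as-omission: the map keeps only connectivity;
# elevation, soil, and weather were never represented in the first place.

class RoadMap:
    def __init__(self, roads):
        self._roads = roads  # {place: set of directly connected places}

    def connected(self, a, b):
        return b in self._roads.get(a, set())

map_ = RoadMap({"Springfield": {"Shelbyville"}})
print(map_.connected("Springfield", "Shelbyville"))  # True

# Asking this model about the terrain isn't hard -- it's impossible; the
# detail was omitted, which is the whole point of the abstraction.
```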

Accepting that mental content isn’t what it seems to be doesn’t mean we lose information. Even if we call it an illusion, everything necessary to create the illusion is going to be in the physical description of the system. That’s what I meant by saying that an exhaustive description of the physical system will include any mental content it may have.

            Of course, if we take the extended mind thesis seriously, we could argue that a physical description may miss the relations between the brain structures and the structures in the world that make up mental content. But I can’t see any reason why a physical description can’t avail itself of these relations.

You might challenge the initial starting point here, that mental contents are models. If so, I’d ask you to consider how a philosophical zombie would work. In order to have its “fake” consciousness, what would it need to pull it off? It would need functional structures that enabled it to compute that it had something like Nagelian consciousness, even if it didn’t. Now, what would you tell such a zombie to convince it (make it compute) that it didn’t have that consciousness?
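As a toy illustration of what I mean by “compute” here (everything in it is hypothetical):

```python
# A system whose introspection can only consult its self-model. If the
# model says "phenomenal", the system reports phenomenality, whether or
# not anything intrinsic lies behind it.

class Agent:
    def __init__(self):
        self.self_model = {"has_phenomenal_experience": True}

    def introspect(self):
        # Introspection reads the model; it has no deeper access.
        return self.self_model["has_phenomenal_experience"]

zombie = Agent()
print(zombie.introspect())  # True -- and no argument changes this output
```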


          5. I find the model talk to be somewhat disanalogous because it implies that there are two things, like the map and the terrain, and that there exists a representational relation between them. We say that numerical characters on the board represent numbers, or that the characters on my computer screen when I’m writing code represent the physical parameters of the computer etc… In all cases, we are speaking of two different things.

            Ironically then, the model analogy would probably work better with dualists. But in the mental-physical reduction case, there is just one thing (physical stuff), so it makes no sense to say that the mental can model the physical. Rather you would have to argue that our mental experiences (which are physical) model some other physical stuff. But this then already assumes that the mental is physical. This kind of defeats the purpose of the model analogy, which was supposed to help illustrate how the mind might be physical in the first place! But all the model analogy really ends up showing in your case is how certain physical systems might leave out important details of other physical systems.

            As for philosophical zombies, presumably the zombie would be able to realize it’s not conscious by using the same chain of reasoning that you used to conclude that you don’t have Nagel’s form of consciousness.
            🙂


          6. Zombie-Mike likely would reach that conclusion (assuming he had the same innate and learned dispositions). But zombie-Alex might go through the same chain of reasoning conscious-Alex just did, and conclude that they’re not a zombie. What argument could we provide to zombie-Alex to help them realize their true nature?


Well, it depends on your model of consciousness. If you buy into interactionism, then consciousness is causal. This might be something like Penrose’s Orch OR theory, or some kind of emergent add-on (where core theory physics is incomplete in certain space-time regimes, like those inside our brains). If that’s the case, then zombie Alex should have different physical structures from real Alex. Whatever causal role consciousness plays in my brain, that effect would be missing in zombie Alex’s brain. To replicate it, zombie Alex would need some physical computational ‘add-on’ in his brain, so his brain would look different from mine (and that would be noticeable). This assumes that consciousness is computational though (which Penrose, for example, wouldn’t buy).

            As for panpsychism, consciousness is what grounds all physical reality. Technically, zombies wouldn’t even be conceivable under panpsychism. Under epiphenomenalism we do start to run into weird problems though, yes. We couldn’t do any third person test to distinguish a perfect zombie Alex from real Alex, but real Alex would know that he is conscious and zombie Alex would not know. Real Alex would know this from his first-person experience, whereas zombie Alex wouldn’t have such experiences.

            It’s like the alien from the movie The Thing (1982). If you’re not the thing, you’ll know it, even if you can’t prove it to your suspicious friends.


          8. Obviously from a physicalist perspective, philosophical zombies are a mistaken conception. But I think it’s interesting to think it through, because from a non-interactionist dualist perspective, the physical version of consciousness would be a philosophical zombie.

            For The Thing, an interesting question is, does someone who is the thing know they’re the thing before they start acting like it? In a lot of similar sci-fi scenarios, the replaced thing-person doesn’t know their own nature, until something triggers it.


  13. Mike,

You’re making an invalid inference from “Nagel has a theory” to “what-it’s-like is not a theory-neutral term”. That just doesn’t follow! Definitions ultimately ground in ostensive definitions, so a relevant question here is: are there any observations that Nagel might be directing our attention to?

I think he could have made better choices than directing your attention to your hearing and asking you to compare it to a bat’s. Refactoring ideas I got from Ran Lahav (a philosopher I went to grad school with), I gave you the water-temperature example before (the same water feeling different to a cold hand and a hot one). I think that’s a much easier entry point for understanding what qualia are supposed to be.

    Also pretty troubling is your claim 2 (Epiphenomenal) in contradiction to Nagel’s concession:

    there may even (though I doubt it) be implications about the behavior of the organism

By admitting there may be behavioral implications, Nagel is explicitly denying a commitment to epiphenomenalism, though he leans toward it. But remember, it doesn’t necessarily matter what Nagel believes: no term needs to embody all the beliefs of the person who coined it.


    1. Paul,
“Something it is like” and similar variations don’t show up in the literature (according to Google’s Ngram) prior to Nagel’s paper. Of course, it’s possible for phrases to morph in meaning, but that seems unlikely when users of the phrase, like Block, explicitly cite Nagel’s paper.

      I’ll grant that a lot of people, particularly scientists, are likely using it, as well as the terms “qualia” and “phenomenal”, in a different manner. But if they don’t explain what they mean by it, then they’re being ambiguous. (And yes, I include myself in that for my past uses of these terms, although hopefully context helped in those cases.)

      Dennett pointed out that the phrase has become a crutch in the philosophy of mind. Its use by itself implies a precise technical meaning, but that only seems true if it refers to Nagel’s version. Certainly when most non-physicalists and illusionists use it, that’s what they’re referring to.

      On epiphenomenalism, I take Nagel’s “though I doubt it” as a statement of his actual belief. And it has to be assessed in combination with this earlier phrase: “it is very difficult to say in general what provides evidence of it”, as well as these later stronger sentences.

      It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons.

      As I noted in the post, all of this implies epiphenomenalism, at least to some extent.


      1. On epiphenomenalism,

I would say that it only follows from Nagel’s position if you subscribe to the sufficiency principle (physical causes are sufficient to explain all physical effects) and reject overdetermination. While rejecting overdetermination is very reasonable, if you believe in some kind of quantum mind theory (e.g. Orch OR), then you might get away with rejecting the sufficiency principle. Also, Nagel’s position seems compatible with reductionism, just not the kinds that are commonly preferred. Note how he refers to the ‘recently devised reductive analyses’. For example, panpsychism would be compatible with Nagel’s views and wouldn’t entail epiphenomenalism either.


1. Sufficiency seems true to me, although I’m not sure about overdetermination. A quick scan of the Wikipedia article makes it sound like a logical OR gate. If both inputs to the OR gate are on, then it’s a situation where either would have been sufficient to make the OR gate’s output be on, so it’s overdetermined? The article asks where the extra causation “goes”. I think in a physical system, all energy and information is conserved, so it does have to go somewhere, but it may only be waste heat from a functional standpoint. I wonder if that counts as rejecting overdetermination.
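Just to make that concrete, here’s the toy OR gate reading I have in mind (an analogy only, not a model of mental causation):

```python
# Toy OR gate: with both inputs on, either alone would have sufficed
# for the output -- the textbook picture of overdetermination.

def or_gate(a: bool, b: bool) -> bool:
    return a or b

assert or_gate(True, False)  # input a alone suffices
assert or_gate(False, True)  # input b alone suffices
assert or_gate(True, True)   # both on: the output is "overdetermined"
```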

Your point about panpsychism seems similar to David Chalmers’ description of panprotopsychism, where consciousness is reducible to proto-consciousness in simpler systems. From what I understand, Nagel did eventually go panpsychist. But I’m not clear on how panpsychism avoids epiphenomenalism, except maybe by rejecting sufficiency. But then Nagel’s comments about the explanation of a system’s behavior being compatible with the absence of consciousness don’t seem in accord with that. (Granted, I’m looking at his views from this 1974 paper, not the ones he might have had decades later.)


          1. I didn’t mean to hijack this comment thread, sorry about that. About the overdetermination thing, the logical OR gate is just an analogy, and like all analogies it fails to deliver the entire truth. The idea is that overdetermination occurs when you have two causes producing the same effect. The problem with the OR gate analogy is that the effect isn’t really the same at the physical level. If both inputs are on, then as you said, there would be some extra physical effect (heat for example). But a real case of overdetermination has no such extra effects. It seems at that point that your extra cause isn’t really doing anything (it’s basically a ghost), so Ockham’s razor suggests there is no causal overdetermination.

            About panpsychism: I would say panpsychism changes the definition of ‘physical’. It rejects the sufficiency principle according to the traditional definition of ‘physical’ but accepts it according to the new definition. When Nagel talks about the system’s behavior being compatible with no consciousness, he’s using physical in the traditional sense. This also goes back to my previous comment about how reducing the mental to the physical seems hard because the difference is not purely quantitative. The physical and mental descriptions aren’t about the same thing. But panpsychism would change the definition of physical to add some (intrinsic) property to the micro-level stuff. So that the micro level descriptions are basically “traditional physical stuff + micro-level experiential stuff”. Since we can quantitatively add micro-experiential stuff up to get macro-level experiences, panpsychism achieves the reduction the non-physicalists were looking for.

            It might be argued that this doesn’t really avoid epiphenomenalism, since the micro-level experiential add-ons aren’t modifying the physical processes. This is true; the way most panpsychists avoid this is by positing that the experiential stuff grounds the physical causes. This goes back to our discussion about how words don’t seem to have meaning unless they have intrinsic content. That’s because if every definition is extrinsic and thereby relational (understood in terms of something else), then all definitions are just circular. Granted, it’s a huge circle, but it’s still just a circle in the end. If we want true understanding, it seems like we need an intrinsic referent at some point. Similarly, the panpsychists will argue against ontic structural realism by positing that every object needs some intrinsic content, otherwise we are ultimately speaking about relations among empty abstractions.

            Another argument for this is that while ontic structural realism is maybe conceivable, if it happens to be true then it seems like there’s no difference between physics and other types of math; both are purely relations. So, there would be no way to distinguish between the actual and the possible. You could also just accept modal realism at that point though, as a way to get around this.

If we reject ontic structural realism for the above reasons, then maybe the experiential stuff (which we think is intrinsic) forms that intrinsic content for all physical things (according to panpsychists). It’s obviously a leap of logic, but it does have the benefit of making mental-physical reduction really easy. We can get around epiphenomenalism in this way by pointing out that while the experiential stuff is not the relation itself (it’s not defined extrinsically), it’s still the stuff that participates in the relation, without which nothing could exist (there would be no way to distinguish the actual from the possible).

            As for panprotopsychism, I see the difference between that and regular panpsychism as really a difference in magnitude (quantity not quality), and thus I find the distinction not so important. But I don’t want to get into that since it’ll be another five paragraphs attempting to explain. Hope this all makes sense!


          2. Don’t worry about hijacking threads. The original discussion can always continue in another part of the thread. And epiphenomenalism was part of the conversation.

            You probably didn’t mean to imply otherwise, but just a note that OR gates are physical. They’re a fundamental component in the device you’re using right now. And of course we all know that waste heat is an issue with these devices. Information processing always has a thermodynamic cost.
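For a rough sense of scale on that cost, here’s a back-of-the-envelope Landauer calculation (room temperature assumed):

```python
# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

print(k_B * T * math.log(2))  # ~2.9e-21 joules per erased bit
```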

            The Wikipedia article discusses the issue of mental states being caused by both prior mental and prior physical states. At least from the physicalist view, this concern seems like a category error. It assumes that mental causation is something separate from physical causation. But if we accept that mental causation is just an alternate description of physical causation (in certain systems), then it doesn’t seem like an overdetermined situation.

            It seems like panpsychism’s path to avoid epiphenomenalism results in it being equivalent to reductive physicalism. It does include an extra metaphysical story, but one that we can’t validate or falsify.

I should note that I am a structural realist (at least at the moment), although I’m more inclined toward epistemic structural realism than the ontic version, staying agnostic on what the ultimate reality might be. But similar to the above, it seems like the difference between ESR and OSR is metaphysical.

            Haha. No worries. We don’t need to get into panprotopsychism. We’ve ranged over enough topics. 🙂


          3. Hey Mike,

I would say your comments about panpsychism are right; it is meant to function exactly like ordinary physical reduction. The idea is that it doesn’t leave out the stuff that non-physicalists think is super important (Nagel’s type of consciousness).

            About verifiability/falsifiability, the panpsychists would argue that we have an easy test. Our introspective access to our own consciousness should falsify ordinary physicalism. Assuming of course it is veridical.

            As for epistemic structural realism, yeah that seems diametrically opposed to the panpsychist claim that we can acquire introspective knowledge of our intrinsic conscious states. Although there are weaker forms of ESR which only apply to scientific knowledge about the physical world, so you can reconcile that with panpsychism.

            However, I think there’s a critical problem with the strong form of ESR (where every form of knowledge is relational), which is that we all have epistemic foundations. As you yourself appear to admit, within the system it seems like we have intrinsic mental content. If this is true, then our path to acquiring the belief in ESR must be built on the assumption that we have intrinsic mental content. But then demolishing this latter belief would be self-defeating.

It’s analogous to the Boltzmann brain scenario. If you reason yourself into the position that you’re a Boltzmann brain because your theory of the multiverse says it’s overwhelmingly likely, then that’s entirely self-defeating. Because if you were a Boltzmann brain, you almost certainly wouldn’t have a veridical model of the world, and thus the premises that led you to the multiverse-Boltzmann brain conclusion are very likely false. A similar logic appears to apply to ESR.


          4. Hey Alex,
            On the easy test, right. This comes down to how much trust we should put in introspection. I think you know my answer at this point. 🙂

I admitted we have intrinsic mental content? I must have misspoken somewhere, or missed an implication of a position. Honestly, I think intrinsicality is only something that exists relative to a particular perspective, and only on the borderline of what is knowable from that perspective. But a large part of science is figuring out new and clever ways to take new perspectives and push the boundaries of that knowability.

            I’m with you on the Boltzmann brain scenario. I actually think it applies to idealism overall, not to mention full-on solipsism.


          5. Hi Mike,

We’ve put out quite a lot of content in this comment section so far. I would say in the end that either you believe in knowledge by acquaintance, meaning you can have reliable knowledge of your inner mental phenomena by introspection, or you don’t. There’s not much that either side can do to convince the other. Just as you’re not going to be able to convince a radical skeptic that you have scientific knowledge; no scientific test that you pass will convince him/her of this. However, I do actually think that denying reliable knowledge by introspection is mostly self-defeating.

            About intrinsicality, I’m sorry if I misconstrued your position. I was making a distinction between ontological and epistemic intrinsicness. You can think that consciousness is epistemically intrinsic even if it’s ontologically structural, and I thought that you were admitting to the former when you conceded that it seems intrinsic from ‘within our system’.

            My argument was that reasoning towards the non-existence of our phenomenal states is self-defeating. The idea is that we (most people) believe in the theories of physics because it best explains our mental content and sensory experiences. We could of course posit different explanations for our experiences; maybe there’s no low entropy physical world and we’re just Boltzmann brains made out of hydrogen gas for example. Or maybe there are no atoms at all, and we live in some exotic alien world with alien physics, and we’re just radically mistaken about the source of our mental experiences.

            But to think that we have no intrinsic mental content is to reason into a self-defeater. If we reasoned to the conclusion that our mental content was identical to certain physical structures and/or functions in our brain, then we’re engaged in a vicious circle. This would mean that we believe our scientific theories are right because they best explain the content of our experiences, but we also believe the content of our experiences is nothing more than certain structures/functions postulated in our scientific theories. It could be ontologically true, but it seems epistemically self-defeating.


          6. Hi Alex,
            I learned a long time ago that no one gets convinced during these conversations, at least no one who holds a position they’ve given substantial thought to. All we can do is share the reasons for our respective positions. With very rare exceptions, any change in mind takes place slowly over time and usually requires changes on a whole range of associated issues. I find these conversations are a lot more fun if we can remember that. (It also helps to look out for looping, repeating the same arguments, which I haven’t detected yet here. I usually just move on at that point.)

On introspection, my thinking is that throughout human history we didn’t even realize there was an unconscious mind until the 19th century. Descartes and Locke certainly didn’t seem to have any inkling of it. And the results from cognitive science have only demonstrated it’s worse than we thought. Introspection gets us through day to day activities, and enables language communication. But as a guide to the architecture of the mind, it has a pretty dismal track record.

            I don’t think the non-existence of phenomenal states (in the intrinsic sense) is self-defeating because the functional versions of those states provide everything we need. Crucially, they provide the content of our experiences, just not metaphysically intrinsic private fundamental ones. Consider what a robot might need to process and act on scientific information. Our functional states provide that.


          7. Hello Mike,

The issue with your argument being self-defeating is not that denying phenomenality leaves something important out (although of course I think it does), but rather that reasoning yourself to this conclusion necessitates that you abandon the impetus behind believing in functionalism in the first place.

Once you’ve reasoned yourself to the conclusion that your inner experiences are nothing more than functional/physical states, then you can no longer justify your belief in functionalism by appealing to those states’ ability to explain your experiences. Your justification becomes circular. It’s “I think physical/functional states exist because they help explain the existence of my experiences, but my experiences are nothing more than physical/functional states.” That means you already have to assume that some physical/functional states exist in order to justify them.

            Contrast this with the non-physicalist, who claims that their knowledge of the physical world is justified because it helps explain their intrinsic experiences. In turn, we have knowledge of our experiences by being immediately acquainted with them (they are intrinsic). But crucially, such knowledge doesn’t depend on our scientific theories being right in the first place.

Of course, it might be denied that you’re starting from the first principle that we know our experiences. Maybe you’re starting from the first principle that physicalism/functionalism is right. But that seems backwards and a bit scientifically pretentious (we shouldn’t just assume that our scientific/philosophical theories are right). Also, starting with that assumption means that your evidence for physicalism/functionalism is experimentally unfalsifiable. For example, let’s assume that it turns out that Penrose was right, and that consciousness is quantum and non-functional. But if you start from the assumption that functionalism is right, accepting the evidence that Penrose offers is self-defeating, since you could only reach that conclusion by assuming it is false.


          8. Hi Alex,
            I would agree that we start with our own experience, and only know anything through that experience. Everything “out there” is a theory, a mental model, an understanding we construct, both of the world and ourselves. Our mind provides initial models, first impressions. We’re constantly adjusting those models as we have new experiences, as we learn to what degree those models predict our future experiences.

            Science is simply a very careful systematic way of doing that, along with a rigorous way of constructing models. It’s often forgotten that “empirical” comes from the Greek word for experiential. In the end, the only way we have to assess these models, these understandings, is by how accurate or inaccurate their predictions of future experiences turn out to be.

            Now, what about our model, our understanding of our own experience? We certainly have an initial impression of it as something like what you describe, as something intrinsic. But the mistake, I think, is in freezing that model, of not being willing to update it. The initial model works pretty well for predicting our direct experiences. But everyone knows there’s a disconnect between that model and our models of the outside world, including the brain. We give that disconnect names like “explanatory gap”, “mind-body problem”, and “hard problem”.

            One set of approaches is to radically update the other models, which leads to paths like panpsychism, idealism, or exotic physics solutions. Or we can decide to just accept the disconnect itself, the dualism, as part of the model. These approaches all seem to complicate the scientific views of reality, at best without making them any more predictive of future experiences.

            Or we can decide to just change the model that doesn’t fit. So the question is, does it make sense to revise the model of our experience? Given what I noted above about introspection, I think it does, just like it does for any other model. Of course, any revision has to meet the predictive success of the older model, and then improve on it.


          9. I agree with everything you say, but I don’t see how this avoids the self-defeater I postulated. The issue is just that concluding that consciousness is not intrinsic can only work if you first assume (before you justify your beliefs in the external world) it is intrinsic, hence the self-defeater.

            Another way to put this is that if analytic functionalism is true, then descriptions of your experiences are just descriptions of your brain’s causal/functional states. This then means that when you say, “we should start with the assumption that we are having experiences” you are saying “we should start with the assumption that we are having (description of brain’s causal/functional state)”.

            But that now means that you are assuming what you initially set out to prove, namely that there exists an external world with objects like the brain that play functional roles etc…

            The problem is not with reaching the conclusion that our qualitative states might play functional roles, but rather with us believing that there is nothing more to qualia than these functional roles. Because in the latter case we have to eliminate whatever description of qualia we previously thought sufficed and substitute a functional description. In that case, it would turn out that we were building on epistemological quicksand, by assuming what we set out to prove.


10. I should have been explicit above that I don’t think modifying our model of experience invalidates everything we learn from experience. Unless of course we conclude that all experience should be dismissed, which would indeed be self-defeating, but isn’t what’s required.

I do assume the external world is out there. An external world seems like the simpler option for understanding random physical events: when people are killed by a natural disaster, it’s simpler to say they were killed by a purely unforeseen physical event than to figure out whose mentality killed them. Of course, that’s also assuming there were in fact people out there who aren’t there anymore, or that there’s more than one mind at all. (I’ve never understood why, once we start doubting the external world, we shouldn’t also doubt other minds.)

            Once we accept the external world, the question then is, what do we have evidence for? Functionality is something we can observe and collect data on from multiple perspectives. What data can we collect on intrinsic qualia? (Other than introspectively, which has the issues noted above.)

            I will admit that if you’re going to assume there’s more than what can be adjudicated with evidence, no one can conclusively demonstrate that it isn’t there.


          11. Hey Mike,

            I’m still not seeing how this answers my points, no offense. I think it’s best that I simply lay out my argument in premise-conclusion form, and this way you can just point to the specific problem that you have with it (e.g. “I disagree with premise 4”) so we don’t end up talking past each other.

            1. We can start with the assumption that our experiences are intrinsic (that we are given access to some intrinsic phenomenal feeling).
            2. Assuming 1, it’s possible to derive knowledge of the external world, including the knowledge that brain states are physical/functional.
            3. Updating our experiential model to take into account the fact that our experiences are purely physical/functional states would entail that our experiences are not intrinsic/phenomenal.
            4. Reaching the conclusion in 3 requires the assumption that our experiences are intrinsic/phenomenal (because of the reasoning in 1 & 2).
            5. 3 is self-defeating.
            6. Alternatively, we can start from the assumption that our experiences are functional/physical states (and then derive our knowledge of the external world + scientific theories from this).
            7. 6 is presumptuous, it simply assumes that analytic functionalism is correct.
            8. Either analytic functionalism is self-defeating, or it is presumptuous.


          12. Hey Alex,
            Thanks for laying this out.

            I do in fact disagree with 4. I don’t think 2 and 3 depend on our experiences actually being intrinsic and phenomenal (in the Nagelian / Block sense). They seem compatible with our experiences being functional, although we do have to account for the apparent intrinsicality, etc.

            I agree 6 would be problematic if it were required, but I don’t think it is. 3 only requires heeding the lessons from 2. If 2 were not the case, then the conclusions in 3 wouldn’t follow.

            Again, I’ll concede that 3’s update to only functionality is based on parsimony. If there are in fact non-functional intrinsic aspects of experience that don’t conflict with 2, we can’t rule them out, but we also can’t demonstrate their existence.


          13. Hi Mike,

Sorry for the late reply. 4 doesn’t presume that our experiences are actually intrinsic/phenomenal, only that we are assuming it to be true in order to justify our beliefs in the external model/update our model of consciousness. So, even if “2 and 3 (don’t) depend on our experiences actually being intrinsic and phenomenal (in the Nagelian / Block sense)”, the reasoning in 4 still goes through. It is of course conceivable that our brain might have constructed an initial mistaken model of its own consciousness, but this is not enough to avoid the self-defeater.

            The problem is that our brain must use this initial model to build up its understanding of the world and its own consciousness. If the initial model was mistaken however, then this understanding must be faulty (or at least epistemically unjustified). Hence, reaching the conclusion that we need to update our model to strip it of phenomenality is self-defeating, even if it is in fact true on an ontological level. This is similar to my previous point about the Boltzmann brain. It’s ontologically possible that we are Boltzmann brains with an initial mistaken model/belief, but we would never be able to know it (because updating our model to reach the conclusion that we are Boltzmann brains is self-defeating).

To put this another way, we are able to go from 1 to 2 because our scientific postulates help predict the circumstances of our phenomenal experiences, but once we conclude that we don’t have phenomenal experiences, then our very reasons for believing that our model needs to be updated become erroneous.


          14. Hi Alex,
            No worries on being late. These conversations happen when we have time.

            However, at this point, I feel like we’re looping. I know any response I might give will be too similar to the ones above. So I think I’ll just leave it at that. Enjoyed the discussion and hope we have more!


2. No worries if you don’t want to continue our conversation. However, unless I misunderstand your reply, I don’t see why my response doesn’t defeat your argument. Just to be clear, I’m not denying or disagreeing with any of your points. I’m not saying, “You think it’s possible for our consciousness to be functional states, and for our brain to have formed the (mistaken) initial impression that its consciousness is intrinsic, while I don’t think this can be the case”. Rather, I am conceding that this is possible; I’m just saying that such a position is self-defeating.

          Crucially, I pointed out in my last reply that my argument doesn’t assume that ” 2 and 3 depend on our experiences actually being intrinsic and phenomenal (in the Nagelian / Block sense)”.

          Nor does my argument assume that 2 and 3 are not “compatible with our experiences being functional, although we do have to account for the apparent intrinsicality, etc”, which you argued for in your latest reply. 2 and 3 remain true even if our experiences are just functional states, and even if they can fully account for the appearance of intrinsicality (i.e. the brain having a mistaken initial impression about consciousness)!

          Best,

          Alex


          1. In other words, I don’t see how any adequate reply of yours would just be repeating your last points, given that I feel I showed that they are completely compatible with my argument, and don’t actually refute it.

            Thanks for the conversation, Mike.


      2. I made a tactical error by disputing your Epiphenomenal reading of Nagel’s claims. I’m not convinced by your reply on that, but I’m dropping that subject, as it’s distracting you from the important one. To repeat: no term needs to embody all the beliefs of the person who coined it.

        Let me break that down into two parts: (A) Original Intent is a lousy theory of any verbal interpretation, not just Constitutional interpretation; and (B) you can’t pack all of an author’s beliefs into the meaning of a word they use.

        Historical records prove (my just-so story goes) that the term “whale” was introduced by Akman the Phoenician. “We’re all familiar with those large sea creatures that occasionally come to the surface and noisily spout droplets of water into the air, inspiring sailors to yell Thar She Blows! Let me tell you some important facts about these creatures that I call ‘whales’.” Akman goes on to “explain” that whales are fish, they breathe water, they feed primarily on men, and they crawl onto land once a year to mate.

        Does that establish that “whale” means man-eating, water-breathing giant fish which occasionally blow and less often come on land to mate? No, not one bit. “Whale” means those large sea creatures who occasionally surface and blow. Even in Akman’s mouth, that’s what “whale” means. The rest is Akman’s speculation.

        That’s part (B). And that’s before we take into account (A) that language is social and no one, not even Akman, gets to unilaterally dictate what words mean. Now put down that book by Antonin Scalia, and go read some Roland Barthes.

        1. I don’t think I’m invoking original intent for that phrase. I didn’t just go directly to Nagel’s paper, but traced the meaning of the concepts through the literature. (Admittedly, the post abbreviated a lot of it to keep it to a consumable length.) This is more about how the phrase, along with related ones like “qualia” and “phenomenal”, has been and is being used in the philosophical literature.

          In that sense, it seems hard to argue that non-physicalists like Block, Chalmers, and Goff aren’t using it in a manner close to Nagel’s original meaning, or that illusionists like Dennett, Frankish, and Garfield aren’t meeting them on those same terms. There are people like Schwitzgebel arguing for a more innocent conception of these terms, but it’s hard to judge how large his faction really is.

          Now, it’s possible that in 30 years the terms will mean something completely different. Language evolves. But given where we are right now, I think using those terms implies what the non-physicalists mean by them. If you mean them in another manner, then I think clarity requires making that meaning explicit, if it isn’t very obvious from the context.

          1. OK, so maybe you’re not making mistake (A), original intent, but you are still making mistake (B). You are cramming the whole corpus of philosophers’ beliefs into the meaning of their terms. That’s not how it works.

            You can’t just take a weighted average of the philosophers’ beliefs and say that “qualia” means all of that. Meanings typically ground out in reference via ostensive definitions. “Gold” means gold, the element Au, no matter what people believe about gold. So, in order to find out what “qualia” means, you have to know whether there are any real processes/properties/events that explain why people use that word. Hint: yes. Schwitzgebel is just right about this.

          2. Actually, looking at the way words are used is exactly what dictionary editors do: https://www.merriam-webster.com/video/how-a-word-gets-into-the-dictionary

            People are free, of course, to try to promote their personal favorite meaning, but they can’t say it’s the one true definition, because there’s no such thing. BTW, Frankish in his response to Schwitzgebel’s paper pointed out that his definition is so innocent, it’s compatible with illusionism. https://keithfrankish.github.io/articles/Frankish_Not%20disillusioned_eprint.pdf

          3. Merriam Webster just reminds me of an excellent essay in Aeon magazine. “Despite its pretensions, the dictionary is no more than a pedantic and overexacting thesaurus,” Alexander Stern writes. Exactly. Dictionary definitions are only useful because we have real-world ostensions for some of the concepts, or ancestors of the concepts, invoked by the words given in the definition. That’s why you can’t substitute opinion polls (as you are effectively doing with philosophers’ writings) for an investigation into the ostensions, or attempted ones.

            Frankish kinda has a point, kinda not. Some philosophers do use “qualia” or especially “what it’s like” in the broad way Schwitzgebel identifies. Others are narrower, for example including perception (even illusory perceptions) but excluding cognition unless it is accompanied by perception. My approach to “qualia” is the latter. If Frankish is only interested in the narrower meanings of “qualia”, that’s OK if he takes them one at a time, but not if he’s just playing inflate-and-explode.

          4. Frankish’s opinion is that the word “qualia” has become too polluted and we just need to “cut the tangled kite string and develop a better theoretical vocabulary”. (I’m quoting a reply he made to me yesterday on Twitter.) I agree with it being polluted enough that I’m going to be far more cautious with it and “phenomenal”, at least until I can clearly see a decisive shift in the way it’s used by philosophers.

  14. This discussion is a little over my head, but doing deep dives into the etymology of words or phrases can be a really helpful part of the research process. I think you knew I had that opinion already, though. 😉

        1. I always enjoy those posts. Definitely knowing the vocabulary helps. But it seems like there’s increasing contention about terminology from biology on up. And philosophy often seems hopeless. At least the astronomers argue about the definition of “planet”, rather than just use their own while ignoring everyone else’s.

          1. There may be a correlation between how math heavy a scientific field is and how clearly defined the terminology can be. Orbital mechanics is pretty math heavy, whereas biology is more of a mix of math and other stuff.

  15. So both an illusionist and a non-physicalist objected to your use of an innocent conception of qualia/consciousness, Mike? I would expect that from the illusionist, since they’ve got nothing left to say once dubious ideas are removed. It’s like dentists not wanting us to floss so they’ll have more work to do (and most people seem to comply). The non-physicalist, however, might have gone along, since their position is extremely pliable.

    What does it mean to be like something? For now I’m satisfied with Schwitzgebel’s answer. As you know, however, I suspect that McFadden’s cemi will someday become experimentally validated well enough to add to this definition in an engineering capacity, and so eject a vast assortment of ridiculous notions that reside in both philosophy of mind and science today.

    On your epiphenomenal, biocentric, intrinsic, and private parameters, I’m in agreement that none of them should be added to a given consciousness definition. It’s the “fundamental” parameter, however, that I can’t so easily discount. First I must qualify this idea by saying that in a causal world, nothing can ultimately be fundamental. Here all things should exist as a result of that causality. Thus gravity shouldn’t ultimately be considered fundamental, but rather should exist as a product of what creates it. After that caveat, however, gravity does seem reasonably fundamental to me. That’s the sense in which I think qualia should be considered fundamental. Without the right kind of causal physics, an associated value dynamic should not exist. So of course brain injury, pathology, mind-altering substances, and evolution should each tend to affect such physics in a given situation, though a binary “on” or “off” should in some sense exist, given that the relevant physics will or will not exist in some capacity. You might even say this given your information-processing conception of consciousness. Does the right information get properly processed into other information in any capacity at all? That would seem to involve either a “yes” or a “no”.

    Here’s a demonstration that it should be impossible to disprove functionalism. Let’s say that my consciousness happens to exist by means of a god that causes me to feel what I do. Thus when I eat ice cream it’s the god that causes me to feel good given the taste. A robot might also be said to consume ice cream and analyze it, though not to feel good/bad from doing so because the god doesn’t cause the robot to phenomenally experience its existence. The difference between us here in terms of function should be that one feels good while the other feels neither good nor bad. Thus it would seem impossible to disprove functionalism since all things can be said to exist functionally.

    1. Eric,
      I don’t think you’re being fair either to the illusionists (a group I’m basically a member of) or the non-physicalists. I’d also point out that the conclusions in the post came from research inspired by those conversations. So don’t blame them for those conclusions. Blame me, or the philosophy of mind overall for how they use those terms. If Schwitzgebel succeeds in changing that, then I’ll be happy to use those terms, but right now I don’t think we’re there.

      Qualia as fundamental as gravity? If that were so, then I think we’d be talking about a fifth fundamental force. I know you don’t like David Chalmers, but you often go to the same places he does. He also thinks a theory of consciousness might be something like an addition to fundamental physics. But I think the only reason we’re tempted to think it’s a binary on-off switch is remnant dualist intuitions. As we’ve discussed before, I think it’s a mistake to trust in those intuitions.

      Your last paragraph, if I’m understanding it correctly, seems like basically Nagel’s argument, one for epiphenomenalism. That’s the only way we wouldn’t be able to test whether the god was acting on one system and not the other. And definitely, once we go epiphenomenal, it becomes an unfalsifiable proposition. But that’s not functionalism.

      Functionalism would argue that the ice cream tasting good is functionality, the functionality of your brain doing a draft assessment on whether what you’re eating is good or bad for you. Obviously that functionality is far from infallible. But there’s no reason in principle a robot couldn’t have it, and that we couldn’t test for the presence or absence of that capability.

      1. I do apologize if I seemed disrespectful to either you or anyone you’ve been speaking with, Mike. That certainly wasn’t my intention. I only mean to present reasonable arguments, and if they’re not, to be instructed on how they aren’t right.

        I actually consider illusionism beneficial in the sense that it does seem to have helped shed light on various dubious consciousness claims. But once that dubious nature itself happens to be exposed and we get down to the essentials of an effective consciousness definition (like Schwitzgebel’s I think, and hopefully soon to be refined in an engineering capacity by McFadden), then I’d like to see illusionists back that position and so not continue on with their dubious “consciousness doesn’t exist” theme. You’ve mentioned a problem with that theme as well. Hopefully someday they will. On the front side they seem to have helped things, though on the backside they now seem to impede progress.

        On qualia existing as a fundamental force, my point last time was that all of reality should simply exist by means of causal dynamics, or a single kind of stuff. But we humans like to categorize things such as “gravity” that seem significant to us. Do I consider value to exist arbitrarily, or rather by means of a certain kind of physics that the brain seems to effectively implement? The second of course! Call this a fifth force if you like. I call it an element of worldly causal dynamics, and probably by means of certain electromagnetic fields. As you know I’d like this theory empirically tested with implanted transmitters for oral phenomenal report.

        I know that you’ve been hesitant about claiming that you identify with “this” or “that” position, since people will tend to interpret those positions in ways that you don’t identify with. Functionalism, however, is a classification that you seem to accept wholeheartedly. But it could be that the reason no one ever tells you that it means what you don’t mean it to mean, is because the position cannot possibly be wrong. I suppose you don’t mean it to mean that either. But does it not mean that? How might one possibly disprove “functionalism”? In the scenario I’m providing, a god creates the taste of ice cream for me to have. This mandates the wrongness of McFadden’s theory, and therefore his theory is not only falsifiable, but false. It seems to me that functionalism would remain in the clear, however, since it’s only the taste of ice cream that would matter rather than any behind-the-scenes physics and whatnot. Or can you think of a way for functionalism to even potentially be wrong?

        1. Eric,
          I think the central thesis of illusionism is that consciousness isn’t what it seems. People seem intensely resistant to that proposition. Maybe at some point in the future they won’t be and we won’t need to talk about consciousness in that manner. But when an illusionist says “consciousness doesn’t exist”, they’re always talking about a particular conception of it. I’m not wild about saying that, because people misconstrue it. But it’s becoming increasingly obvious people are going to misconstrue positions they don’t like anyway.

          However, I’d say theories like the EM field ones are not heeding the lessons of illusionism. As far as I can tell, it’s an attempt to validate an intuition about consciousness being something in addition to the workings of the brain. Instead of theorizing about the reality implied by that intuition, we should be trying to figure out why we have that intuition in the first place.

          I’m often reluctant to embrace labels because they are, at best, approximate indicators for a collection of conclusions, some much more approximate than others. There are almost always positions people in that camp hold that I don’t agree with. I’m sure that’s true with functionalism as well. I know it’s true with illusionism. But I’m gradually coming around to Julia Galef’s idea that it can be productive to wear some of them lightly.

          I think functionalism is really the insight that the way we think about everything else, hearts, lungs, cells, etc, works for consciousness as well. But it could be falsified by discovering that only a specific substrate can implement it. Or evidence for some form of interactionist dualism. Basically anything indicating that there’s more than just physical causal relations and interactions at work.

          1. “central thesis of illusionism is that consciousness isn’t what it seems”

            I don’t think this is exactly shocking. Most of us understand consciousness to be representational. We understand thoughts and perceptions to be real, but not really what is out there. But what makes that different from reality itself? We don’t perceive atoms or gravity. We perceive effects, with exactly the consciousness that isn’t what it seems. Nothing surprising.

          2. Mike,
            I’m pleased that you’ve now defined functionalism such that it’s not inherently true. Apparently you’re saying that functionalism mandates that consciousness not exist by means of any specific substrate, whether some parameter of EM field or something else specific. I’ll observe however that this in itself shouldn’t provide you with a testable and thus falsifiable theory. It just means that a falsifiable theory such as McFadden’s could, if validated, effectively disprove the premise of contrary ideas such as functionalism.

            “However, I’d say theories like the EM field ones are not heeding the lessons of illusionism. As far as I can tell, it’s an attempt to validate an intuition about consciousness being something in addition to the workings of the brain.”

            Would you like to rephrase this? If neuron firing produces an associated EM field, then how can you intelligently say that such fields do not exist by means of brain workings?

            You can theorize why I have my intuitions, and I can theorize why you have your intuitions. Surely we both do this privately a bit. At the end of the day however it’s possible that one of us is essentially correct. The only way to establish that however should be through the experimental validation of a falsifiable theory. If you had such a theory then I’m sure you’d enjoy thinking about ways to effectively test it. That’s certainly been the case for me.

          3. Eric,
            Nothing in that wording implied EM fields don’t exist. EM fields are, of course, everywhere and EEG depends on the ones from the brain. That doesn’t mean they have any substantive causal role in the brain’s workings. Of course that could change on future evidence (or widescale replication of already claimed evidence).

            I try not to trust my intuitions and then hope for evidence. I’d rather start with the widely accepted scientific evidence and reason from there.

            We all have intuitions that color the way we see things, Mike. For example, yours seem to have influenced you to say that EM field consciousness would be in addition to the workings of the brain rather than a part of the workings of the brain. Surely I say things just as colored, given my own belief that consciousness without substrate would need to exist by means of magic. In any case, we’re not going to resolve this disagreement here. Only verified evidence should be sufficient for that. The prospect does give me hope that progress will finally begin to be made at that time, even given how crazy things seem today.

          5. Mike,

            “I think functionalism is really the insight that the way we think about everything else, hearts, lungs, cells, etc, works for consciousness as well.”

            How do you reconcile the fact that all of these systems you listed, including the brain itself, are objective systems, whereas mind is a subjective system? Is it possible for a single system to be both objective and subjective at the same time; and if so, where else in nature is that demonstrated?

          6. Lee,
            It depends on what you mean by “subjective”.

            If you mean a system taking in information from a particular location, and in particular modes it is capable of, and influenced by all the past information it has taken in, then yes, I think a system can be both subjective and objective. Brains obviously are the only system in biology that we know to have this. (Although see a few posts back for Ogi Ogas and Sai Gaddam’s perspective on this.) But we’re starting to build systems, like self driving cars and other autonomous robots, with rudimentary points of view.

            On the other hand, if by “subjective” you mean having some intrinsic nature that is in principle forever inaccessible objectively, no matter what advances in science and technology may come, then I don’t think anything, including organic brains, has that type of subjectivity.

          7. “It depends upon what you mean by subjective”?

            Deflection might be a cutesy tactic, Mike, but it is not a winning strategy for dialectic. Since when is the definition of “subjective” up for grabs? Subjective is the antithesis of objective, which means that subjectivity is not veridical whereas objectivity is.

            “If you mean a system taking in information from a particular location, and in particular modes it is capable of, and influenced by all the past information it has taken in, then yes, I think a system can be both subjective and objective.”

            Dude, that is the very description of an objective system that is veridical; how in the world you can assert that it’s also the description of a subjective system is beyond me. Your robots’ and self-driving cars’ algorithms have to be veridical, or else they would jump off a skyscraper because they think they can fly.

      2. As an illusionist, can you tell me why your knowledge/understanding that consciousness is an illusion is not an illusion itself? Once you sow the seeds of doubt about what you know (or think you know) where do you stop and say it isn’t an illusion? Why wouldn’t the external world be an illusion too? The laws of nature? Illusions too? All of these are understood and perceived through consciousness.

        1. First, every illusionist I’ve read agrees that functional consciousness exists. It’s only the aspects of it which lead to intuitions that there’s something more that I think are misleading.

          Second, consider how we figured out that the Earth being stationary with the universe revolving around it was an illusion. Or the apparent signs of design in nature. Or that time and space are absolute. If reality matched our initial perceptions, if it wasn’t filled with illusions, there’d be no need for science. The real question is why so many people think the rules are different for the mind when all the evidence implies it’s not.

          1. Unless we are arguing against naïve realism or Cartesian dualism, the illusionist observation seems trivial. It is also an epistemological quagmire unless it has some way of distinguishing between intuitions that are misleading and those that are not.

            Where does “functional” consciousness end and the “non-functional” kind start? Or is it all functional? What is the function of an illusion?

          2. It’s trivial and a quagmire? A trivial quagmire? My take is we shouldn’t trust any intuitions that can’t be tested or at least logically validated.

            Functional just means it has a causal role in the system, ideally an adaptive one. Perceptual capabilities that are useful (functional) for getting through day to day activities become misleading when we try to use them outside of their evolved role, such as trying to use introspection to learn about how the mind works.

          3. Almost everybody who is scientifically literate knows our consciousness is generated by the brain and that it is a representation of the world, not an expression of an immortal soul or a magical substance.

            If that is all illusionism is talking about, I think it is a trivial observation.

            In that sense, however, functional consciousness would be just as illusory. Even the simplest navigation in the world involves the illusory projection of what is happening in the brain onto the body and the external world. We don’t tell ourselves that the oncoming train, which our brain has projected to the external world, is all in our brain, even though that is exactly where it is (there may be something outside the brain too, but that is not the same as the train in our brain). When we stub our toe, we feel the pain in the toe, but that is a complete illusion because we know the pain is in the brain. Even a neuroscientist would not navigate through the world without using the illusion.

          4. James, I was too tired yesterday to pull this out, but thought you might find this tweet from Frankish informative.

          5. Whether we call it “presentation” or something else, my hand is what I feel to burn if I put it on a hot stove. Clearly that is an illusion in the sense that we (including myself) strongly suspect the pain is actually generated in the brain.

            But the same is largely true for every other experience and interaction with the external world. If it is all happening in the brain, why and how does it become projected outward? That is exactly what the question is about whether we call it “presentation” or “cluster of effects”.

            I think perhaps Hoffman’s interface theory might be the cleanest explanation, and it doesn’t require buying into full idealism. Consciousness is the interface for understanding and interacting with the external world. My phenomenal hand burns because the non-phenomenal hand is where I need to take action. My phenomenal train causes me to move off the tracks so I’m not run over by the non-phenomenal train.

  16. Thanks for your thought-provoking post, which inspired me to revisit these fundamental questions.
    And so, I read Thomas Nagel’s seminal essay myself.

    However, I do not share your criticisms; they miss the essential points of Nagel’s argumentation and are not crucial to its validity.

    (1) Being partially like something is just another form of being something. For example, one can quite reasonably ask: how does it feel to be in a vegetative state? The difference between any kind of rudimentary consciousness and no consciousness at all can be seen as a kind of quantum leap.

    (2) This criticism has no basis, because Nagel himself writes (as quoted by you): “I do not deny that conscious mental states and events cause behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis.”
    The latter sentence can be agreed with completely. Let’s take pain as an example. The meaning of the term “pain” is not exhausted by a causal role, as Joseph Levine has pointed out:
    “However, there is more to our concept of pain than its causal role, there is its qualitative character, how it feels; and what is left unexplained by the discovery of C-fiber firing is why pain should feel the way it does! For there seems to be nothing about C-fiber firing which makes it naturally ‘fit’ the phenomenal properties of pain, any more than it would fit some other set of phenomenal properties. Unlike its functional role, the identification of the qualitative side of pain with C-fiber firing … leaves the connection between it and what we identify it with completely mysterious. One might say, it makes the way pain feels into merely a brute fact.”

    So why it should be incompatible with “making any assertions about what might or might not be conscious” and whether it makes (3) moot is not apparent to me.

    (3) Whether consciousness is limited to organic living beings or not is not an issue for this discussion. Nagel remarks: “Perhaps anything complex enough to behave like a person would have experiences. But that, if true, is a fact which cannot be discovered merely by analyzing the concept of experience.”

    (4) The specific feature of being associated with a specific quality of experience cannot be reduced to a causal role or a specific behavior. Most mental events or states have both a functional-relational and a phenomenal-intrinsic aspect. Pain, for example, plays a definite causal role in reporting injury and controlling behavior. But it does not follow (and Nagel does not claim) that these are not representational or relational.

    I disagree that speaking of qualia refers to a version of consciousness that does not exist. Qualia are the undeniable features of our experience. Antonio Damasio characterized it this way: “The conscious mind and its constituent properties are real entities, not illusions, and they must be investigated as the personal, private, subjective experiences that they are.”

    1. Thanks Karl! Provoking thought is definitely the goal, so it’s gratifying to hear it did that for you. Of course, I disagree with your disagreements. 🙂

      (1) This seems like just another way of saying what I said in that criticism. I disagree that consciousness is a binary on or off thing. My view is based on an evolutionary perspective, where traits seem to develop gradually. Of course, we could always define what-its-like-ness in such a way that it is binary, but the question is whether what it includes or excludes matches our intuitions.

      (2) Note that when talking about epiphenomenalism, I said “at least to some extent”. That seems completely compatible with the sentence you quote. And I think we can’t ignore everything that is said in the previous paragraph.

      I don’t know the context of that Levine quote, but I disagree with the conclusion. In particular, looking just at c-fiber firing is a strawman. Pain is a determination made in the brain based on, among other things, signals from those c-fibers. There’s no good reason to suppose that it isn’t a fully functional process. (With the caveat that the functionality doesn’t always work right.) I did a post on pain a while back.

      The complex composition of pain

      (3) Nagel makes clear in the quoted sections that finding evidence for the presence or absence of consciousness is very difficult and may be impossible. If that’s true, then how does he know that bats are conscious? Or other people for that matter? If we can’t establish its presence or absence by evidence, then we can’t say anything about what is or isn’t conscious. At least other than ourselves according to this line of thinking.

      (4) Again, I just disagree. I’m open to changing my mind if presented with sufficient evidence, or a compelling line of logic, but not from just assertions.

      In terms of what Nagel claims about representational and relational states, again this sentence from the quote (emphasis added):
      “It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing.”

      With qualia, a lot depends on exactly what’s meant by that word. Damasio is a physicalist, so I doubt he means “private” in the same way someone like Nagel does. As I noted in the post, there’s a difference between private with current technology vs absolutely private in principle. In terms of qualia as intrinsic, private, and unanalyzable in principle, I do deny them. I think they’re a philosopher’s myth. And Damasio confuses the issue if he uses that word without clarification.

    2. Hi Karl,
      A quick amendment to my comment about that Levine quote. It’s actually from his Explanatory Gap paper which I dug up out of curiosity and skimmed: https://www.informationphilosopher.com/solutions/philosophers/levine/Explanatory_Gap.pdf

      The conclusion to the paper is striking.

      There is only one way in the end that I can see to escape this dilemma and remain a materialist. One must either deny, or dissolve, the intuition which lies at the foundation of the argument. This would involve, I believe, taking more of an eliminationist line with respect to qualia than many materialist philosophers are prepared to take. As I said earlier, this kind of intuition about our qualitative experience seems surprisingly resistant to philosophical attempts to eliminate it. As long as it remains, the mind/body problem will remain.

      Of course, a few years later an eliminativist line toward qualia is exactly where Dennett went.

      1. Overall, I still think that the notion of qualia is essential for describing experience. Your detailed refutations have given me a lot of stuff to dig into. I cannot go into all the points right away, so I will limit myself here to the first one. However, I hope that I will also find responses to the other issues.

        The question whether awareness or consciousness is an all-or-none phenomenon or rather a continuum is an issue of ongoing debate. Several empirical studies have been carried out to provide evidence for the graded or the dichotomous account. Claire Sergent and Stanislas Dehaene found some supporting evidence for an all-or-none bifurcation during the attentional blink, which is in agreement with GNWT, predicting, according to the authors, a sharp nonlinear transition between unconscious and conscious processing. They suggest that conscious access is characterized by nonlinear dynamic phenomena, which might ultimately be described mathematically using catastrophe theory (Sergent, C., & Dehaene, S., 2004, Is consciousness a gradual phenomenon?). Though this is only about access consciousness, it could be the same with phenomenal content.
        It appears that whether “consciousness” should be characterized as graded or dichotomous depends strongly on whether it is viewed from a first-person or third-person perspective. From the first-person perspective, one is either discretely fully aware or unaware of something. It is the characteristic of the content that is gradually changing.

        1. The GNWT dynamics and first person assessment aren’t really the same question as whether a particular species is conscious.

          Part of the problem with self-assessment is that our ability to assess our own consciousness is (more) compromised when we’re in a semi-conscious state. It’s kind of like a drunk person not being able to judge how compromised their judgment and coordination are. It is true we typically either have episodic memory of a period or don’t, possibly indicating whether the hippocampus was online.

          Subsequent work has shown the GNWT bifurcation dynamics are a bit more complex. I shared a paper last year on the “global playground”, indicating that there were intermediate levels of those dynamics prior to being fully global. Victor Lamme also pointed out that the dynamics are more complex than are often described.

          But in terms of species-level consciousness, Peter Carruthers pointed out that in GWT, it’s the collective response of the specialty systems that makes a piece of information conscious. The question is, which systems are necessary for us to make the consciousness attribution? As we get further from humans in the phylogenetic tree, there are fewer and fewer specialty systems in common. At what point can we no longer consider the organism conscious? His point is that there’s no fact of the matter. I once pointed out to him in a discussion that the same thing could be said for humans who are brain injured, impaired, immature, or senescent. He agreed.

          His overall point is that there’s no border line, no point when the lights were off then suddenly on. We can objectively assess the abilities of other species, but whether they’re conscious or not depends on what we require to be there to apply the label “conscious”.

  17. The new paper at the link below by Friston and others may be more mathematical than this group normally discusses, but I wonder if it is pointing at a mathematical take on issues similar to some raised in the comments here. If so, it is not surprising that it is difficult to put into words what is going on:

    https://arxiv.org/abs/2205.11543

  18. Coming late to the discussion, I don’t really have much to add that has not been said by you or others, but I do find interesting your and Michael’s divergent readings of Nagel’s paper. It reinforces my belief that, at least in writing that paper, Nagel was himself misled by an ambiguity of the English language. The question “what is it like to be a bat?” can be read either as stressing the “what”, thereby demanding a comparison, or as stressing the “like” (which I think was Michael’s reading), which simply asserts that there is an experience involved. This ambiguity vanishes if the question is asked in my other two languages (Czech and Russian).

    As I read that paper, Nagel started with the stress on “like” in mind but slipped into discussing the “what” in highlighting the fact that in the case of consciousness any actual comparison, while possible in principle, is in principle impossible in practice (yes, that one again! :-))

    Philosophical conversations that I’ve been involved in suggest that Michael’s reading is the one commonly accepted in philosophical circles, in Oxford at least. Thus my original objections, that I can only know what it is like being me and hence am in no position to know what it is like to be you (or him, or her, or a bat [or, as Jerry Fodor correctly noted, a rock]), got roundly dismissed as a simple misunderstanding. In any case, as Neil Rickert says in this discussion, if we are talking specifically about echolocation, it is quite possible that this is fully fungible with our sense of (perhaps monochrome) sight and with any other way of sensing one’s surroundings in 3D. I.e., it may feel no different from the experience cavers have in seeing a cave illuminated by their headlamps.

    I am biased, of course, but in my experience monoglot English philosophy does tend to have this general problem of sometimes failing to differentiate valid philosophical points from mere quirks of one’s language. Given the enduring influence of Wittgenstein, I find that particular blind spot rather surprising.

    1. Latecomers are always welcome!

      I definitely think language ambiguity is a big part of the issue here. It seems noteworthy that there were multiple interpretations of exactly what Nagel’s phrase means just in this thread. It’s also interesting that some people focus on the “what it’s like” version and others on the “something it is like” one.

      My big issue is that when someone uses these phrases, we don’t really know what they mean. When you were hearing people talk, you took it as the meaning that seems natural to you. And depending on which philosophers you were talking with, they may or may not have meant it that way.

      Yesterday, a philosopher on Twitter, when asked what “phenomenal consciousness” means by another philosopher, replied with the “something it is like” phrase, and when challenged on the vagueness, stated that he considered the phrase’s meaning clear and the claim implausible that further clarification was necessary. That’s my chief beef. People use the phrase like it’s precise terminology, and consider requests for clarification invalid.

      In Nagel’s case, it’s possible he was misled by the language but I’m not sure. It seems like he was genuinely trying to find a way to express a particular intuition, or set of intuitions, and the limitations of English may well have been an issue. To his credit he did elaborate on his meaning, which is where my takeaway is coming from. Although I suppose someone can say my takeaway, even from that elaboration, is a misinterpretation.

    1. I actually have no problem with the question, just with the “like something” phrase. I have no idea what it means, except possibly in reference to some aspect of our own experience, in which case “somewhat like us” seems more accurate.

        1. Nobody denies the deflated version of “phenomenal consciousness”, the one that’s about our impression of conscious experience. But that’s not what Block means by that phrase, nor is it what Nagel is referring to, as I covered in the post. Their version of what those terms refer to is more demanding. The deflated version exists, and is equivalent to the illusion of the more demanding version. Which terminology to use is a verbal dispute.

          But the demanding version is what we might call “ghost consciousness”, or maybe “mystery consciousness” to be more neutral. That’s the actual bone of contention, and I don’t think that version exists.
