Susan Blackmore’s Consciousness: A Very Short Introduction may have been the first book I read on consciousness many years ago. Recent conversations rekindled my interest in her views.
I’m pretty sure her discussion of consciousness as an illusion was the first time I had encountered that idea. Strong illusionists such as Keith Frankish and Daniel Dennett generally take the stance that phenomenal consciousness doesn’t exist. Blackmore’s illusionism seems like a weaker form, that consciousness exists but isn’t what it seems. And by “consciousness” she stipulates in one of her books that she usually means phenomenal consciousness.
Of course, the difference between a strong and weak illusionist can be seen as mostly definitional. Strong illusionists generally take “phenomenal consciousness” to refer to the metaphysically intrinsic, private, ineffable, and incorrigible concept discussed by Nagel, Block, Chalmers, and other non-physicalists, one that is ontologically separate from access (functional) consciousness. A weak illusionist sees this version as illusory, but is more willing to just consider the illusion itself a reconstructed version of “phenomenal”.
But Blackmore, in her discussions of illusion, often focuses on other issues. One is the Grand Illusion, our impression of how much visual information we take in. If you look around in whatever room or setting you’re currently in, your impression is likely to be that your visual field contains all the details in front of you.
However, there are good reasons to conclude that impression is an illusion. One is that the retina doesn’t really work like a camera. It has a central region, the fovea, which has high acuity (resolution) and high color discrimination. But as we move out from that small central region, the acuity drops dramatically and color discrimination mostly disappears.
In addition, your visual field has a blind spot in it. This is the spot where the axons from the ganglion cells in the retina feed into the optic nerve. There are no photoreceptors in this location, and so no vision. You can detect this blind spot for yourself by holding your left thumb out at arm’s length, closing your right eye, and looking at where your thumb is, then without moving your eye, slowly moving your thumb to the left. At some point, all or a portion of your thumb should disappear. (It may take a few tries. The Wikipedia article also has another way of seeing it.)
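For a rough sense of the geometry involved: the blind spot sits roughly 15° temporal to fixation and spans roughly 6° of visual angle (approximate textbook figures; exact values vary by source and by eye). A quick sketch of what that means at arm’s length, with the distance an assumed round number:

```python
import math

def size_at_distance(angle_deg: float, distance_cm: float) -> float:
    """Linear extent (cm) subtended by a visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

ARM_LENGTH_CM = 60.0          # rough arm's length (assumption)
BLIND_SPOT_OFFSET_DEG = 15.0  # blind spot center, roughly 15 deg temporal to fixation
BLIND_SPOT_WIDTH_DEG = 6.0    # blind spot spans roughly 6 deg of visual angle

# How far sideways the thumb has to travel, and how wide the blind region is there
offset_cm = ARM_LENGTH_CM * math.tan(math.radians(BLIND_SPOT_OFFSET_DEG))
width_cm = size_at_distance(BLIND_SPOT_WIDTH_DEG, ARM_LENGTH_CM)

print(f"Move the thumb ~{offset_cm:.0f} cm to the side of where you're looking;")
print(f"the blind region there is ~{width_cm:.1f} cm across -- wider than a thumb.")
```

That works out to a blind region of about 6 cm at a point about 16 cm off to the side, which is why a whole thumb can vanish into it.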
Why don’t we perceive the world with a small sharp colorful center and increasing colorless blurriness out to the sides, along with a hole in it all? One reason is that these are not limitations of a window we’re looking through, but constraints on perception itself, and we don’t perceive what we don’t perceive. The other is that our eyes are constantly moving, often in reflexive saccadic movements we’re not aware of, allowing us to take in any detail we want to focus on.
That last point is important, because when you looked at the room or setting you were in, you didn’t have time to scan the entire thing with your fovea. So any impression of taking in the whole thing is wrong. Often this is described in the popular press as the brain “filling in” the details. But there’s no evidence for that. We just don’t take in those details. Our impression is that we do, because the detail is always there when we check, by moving the focus of our eyes to a particular spot.
Other related issues Blackmore discusses are inattentional blindness and change blindness. Inattentional blindness refers to the fact that we’re generally blind to things we’re not attending to. The classic example is the ball throwing video, where (spoiler alert) participants are focused on counting the ball throws between white t-shirted players, so that they miss the gorilla that walks through the scene. And a good example of change blindness is the video I shared the other day.
Blackmore’s goal in discussing these issues is, I think, to make us realize how wrong we can be about what information we actually take in. And to soften us up for her big one: delusionism. She asks us to consider, are we conscious right now? Of course, the answer is generally going to be yes. But what about a few moments ago, when we weren’t thinking about our consciousness, when we weren’t introspecting?
Blackmore points out that the only way we know our consciousness is through introspection. So how do we know it’s there when we’re not introspecting? Attempting to ascertain whether it is, is like opening the refrigerator door to see if the light is on when the door is closed, or to use an example from William James, attempting to turn up the light to get a better look at the dark.
Maybe a unified model of self, attention, and perceptions is put together just when we introspect, but when we’re not, processes continue in the brain in parallel with no distinction between conscious and unconscious processing. Blackmore points out that this is consistent with the neurological evidence, which shows no inherent difference between conscious and unconscious processing.
If this is true, then the very idea of a stream of consciousness is misguided, and we are massively confused about the extent of consciousness in nature. In this view, consciousness appears to only exist in humans, and then only some of the time. It would make it meaningless to ponder what it’s like to be a bat, or anything else.
My take on this is that, as usual, it depends on how we define “consciousness”. As a functionalist, I think consciousness is functionality, but which functionality in particular counts as “conscious” doesn’t seem like a fact of the matter. Which is why you usually get discussions of hierarchies from me.
In particular, I think the perceptual one is worth reviewing here. In this hierarchy, consciousness can be the processing that results from:
1. Introspection (self-reflective awareness)
2. Deliberative attention
Blackmore’s delusionism equates consciousness with 1, which I’ve been tempted to do myself in the past. But it leads to a very sparse view of consciousness. If we hold out for self-reflective awareness as the standard, then that’s where we seem to end up.
On the other hand, if we equate consciousness with 2, then the stream of consciousness, as the sequence of things attended to, is much easier to establish, both in humans and animals. We just have to realize that this isn’t the full package we have while we’re contemplating our own mental life.
What do you think of Blackmore’s version of illusionism? Or delusionism? Or my take on it?
189 thoughts on “Susan Blackmore’s illusionism”
Would you elaborate on what you take as ‘deliberative’ in 2? Reason for asking is I programmed some simple learning software that could learn for itself what to pay attention to given the current sensory inputs, and I would not consider that to be conscious unless introspected.
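For concreteness, here’s a minimal, purely hypothetical sketch (not the actual software described above) of a learner that discovers for itself which input channel is worth attending to, using a simple epsilon-greedy value estimate:

```python
import random

class AttentionLearner:
    """Toy epsilon-greedy learner that picks which sensory channel to attend,
    based on how often attending to that channel has paid off.
    Hypothetical sketch only."""

    def __init__(self, n_channels: int, epsilon: float = 0.1):
        self.values = [0.0] * n_channels  # estimated payoff per channel
        self.counts = [0] * n_channels
        self.epsilon = epsilon

    def choose(self) -> int:
        if random.random() < self.epsilon:  # occasionally explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, channel: int, reward: float) -> None:
        self.counts[channel] += 1
        # incremental mean: shift the estimate toward the observed reward
        self.values[channel] += (reward - self.values[channel]) / self.counts[channel]

# Channel 2 is the only one whose signals are worth attending to.
random.seed(0)
learner = AttentionLearner(n_channels=4)
for _ in range(500):
    ch = learner.choose()
    learner.update(ch, reward=1.0 if ch == 2 else 0.0)

best = max(range(4), key=learner.values.__getitem__)
print("learned to attend channel", best)
```

Whether something like this counts as “deliberative” presumably depends on whether the selection is itself represented anywhere in the system, which seems to be the crux of the question.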
I was wondering if anyone would ask about that. It gets into the fact that both attention and metacognition are complex, with a variety of mechanisms that fall under those labels.
In the case of attention, I think we can have reflexive attention, habitual attention, and deliberative attention. (I don’t know of any authoritative names for these. These are labels I came up with.) We can be driving into work, cutting the grass, or doing laundry, while our mind is on something else, like a TV show we watched last night, or an argument we had with a loved one. We can still perform those tasks because we’re able to do them habitually, including figuring out what to focus on at that level. But if we’re attending to them, then what do we call the other issues we’re mulling over? It seems like attention at a different level, which I’m calling “deliberative”.
It’s worth noting that this is a bit distinct from top down vs bottom up attention, which refer to the causes of that attention rather than the level. But often deliberative attention is driven by top down dynamics. Which of course is where Graziano’s attention schema comes in. Graziano himself has noted that the AST is not introspection, although it’s an important source of information for it. The AST is probably much more prevalent in species than introspection.
All of which demonstrates that the boundary between 1 and 2 is a bit blurrier than my quick description implies.
Nice post. I’ll say that one element of human perception that you might want to work in is prediction (or predictive processing). People consider our visual experience to be unified partly because a lot of it is already being actively predicted. You did mention the part about taking for granted that if we look at a particular spot we will get the fine details. But I think it’s worth discussing the predictive part. If you fix your gaze, and then pick out something in the periphery that you are familiar with, your attention to that (currently fuzzy) thing is pre-activating the components which will activate fully when you move your gaze to that thing.
Also, I’ll just reiterate my position on consciousness compared to yours. You like to say there is no fact of the matter, but I suggest it is a fact that the lowest level, perception, has all of the key components of consciousness, and that the higher levels are simply recursions of the same theme. Deliberative attention is simply perception of a subset of perceptions, and introspection is perception of the deliberative perception of first order perceptions.
[haven’t put it that way before … feels good]
I find myself uneasy whenever hierarchies of thought are posited. It’s not that I object to the notion itself, but I’ve been burned more than once in discussions with philosophers, who like to take it as an invitation to indulge in infinite regress. I can’t remember which of the old-style AI greats coined the slogan “heterarchy, not hierarchy”, but I guess he’d run into the same problem.
Of course, a sort-of hierarchy will be inevitably present, evidenced by the degree of reliance on activity of other mental processes. The more-preprocessed the data being dealt with, the higher the hierarchical position. But I very much doubt whether this kind of hierarchy is well defined.
Thanks. Prediction is definitely part of it. And Blackmore discusses it, but only in brief. And there’s only so much I can work into a consumable blog post.
On consciousness=perception, I do understand that’s your take. The only thing I’d note is Blackmore’s points about inattentional and change blindness. If you watch the ball throwing video, do you perceive the gorilla? Or is the signal so suppressed that it never gets past the early sensory regions into the anterior temporal lobe where the concept of a gorilla might be triggered as a prediction? Regardless, you don’t seem in a position to make use of the information.
Although conceivably if the task had been to count tosses between black shirted players, you might still have not noticed the gorilla, but it might have been able to get further into the system, maybe leading to unconscious priming.
All of which is to say, I think attended perception is more enhanced and comprehensive, and unattended perceptions more fragmentary and less likely to be remembered, with the caveat that there isn’t necessarily a sharp boundary.
I’ll grant that introspection is perception of our perception, although I tend to doubt accounts where it’s completely its own thing. I think it’s more of an add-on, again enhancing the attended perceptions.
The first thing I notice in your reply is the use of “you”. If your eye is registering the pixels of a gorilla, are “you” registering the pixels of a gorilla? In considering the blindnesses, and when most people think of consciousness, you/they are referring to a specific subsystem: the autobiographical self, and this requires the introspection level. When focusing on a task, a particular subset of “attention” level perceivers [unitrackers] are primed, and so there probably is no attention-level perception of the gorilla, and therefore cannot be any introspection-level [semantic pointer] perception of the gorilla. So again, are “you” registering anything about the gorilla?
[btw, currently reading Jeff Hawkins’ “A Thousand Brains”, which has some really interesting new ideas about how cortical columns work. May suggest complementing, adding to, or reworking my current understanding.]
Yeah, I think we’ve discussed before that, while I think the possibility of sub-consciousnesses is interesting, I’m most interested in the consciousnesses that match up with the systems typing these messages. It’s similar to the point I often make to panpsychists. If everything is conscious, then consciousness itself ceases to be an interesting subject for me, and my attention is then focused on what sets the systems we can perceive as conscious apart from rocks, storms, and electrons.
I’ve listened to the various interviews Hawkins did with Ginger Campbell. His views have always struck me as pretty strange. I’ll be curious to hear if you think his theory is more plausible than it sounds.
Finished the Hawkins book, at last, and I find his ideas pretty plausible, probably because they are mostly compatible with mine. I think his main ideas are that all of the cortex derives from a basic unit which gets repeated and works largely the same everywhere, obviously with some variation. He thinks this unit is the column, which has about 150,000 neurons, whereas I think it is the minicolumn, w/ about 150 or so. The intriguing thing he adds is the role of grid cells and place cells throughout, which (I think he thinks) is the source of reference frames for everything. I need to learn more about the anatomy of these cells.
So which things of his do you find implausible?
It’s been a while since the last interview I listened to, but the impression I recall was that he sees every cortical column having its own full sensorium and motorium, essentially being its own little brain, and our overall perception of the world and decisions being a sort of vote between them. That seems at odds with most of cognitive neuroscience, which has the various cortical regions performing specialty tasks, with various networks of regions involved for each type and combination of perceptions and actions.
But it’s very possible I don’t understand his theory.
Ah. Yeah, I think the way he puts some things is a little off, but I translate it into my version and it’s not too far off. A little brain with sensorium and motorium is just a definable unit with input and output. That describes a unitracker (minicolumn) pretty well. The voting thing I just gloss over, as it’s essentially every unitracker voting for “me”, and the attention mechanism determines who wins depending on who is louder and who is more important at the moment. I don’t see this as at odds w/ current neuroscience. The specialties of various regions will just depend on where the inputs are coming from and where the outputs are going.
But again, the new stuff about reference frames is what I find interesting and potentially plausible.
Sounds like you’re taking from his ideas selectively, which is fine, it’s what I often do. I have seen the framing thing from other authors (Baars comes to mind), but maybe not in the detail Hawkins goes into. Ok, thanks for the description. If the book goes on sale at some point, maybe I’ll check it out.
I quite like the notion of a sparse stream of consciousness, provided it is not taken as a simple on/off disjunction. But here we run, as usual, into a purely terminological problem — as you rightly keep pointing out, the basic terms of the discourse have no agreed definitions. That being so, it is easy to slide between slightly different meanings without even realising one is doing so. I know I am guilty of that too when talking about consciousness as a to-whomever-it-may-concern information sharing system, as if that were subject to the on/off disjunction. Maybe it is, but any introspective analysis relies on memory, and just because I cannot remember e.g. being conscious of the surrounding traffic on a car ride along a very familiar route, does not mean I actually was not consciously aware of it at the time. The fact of the matter is that I am in no position to claim that there is or is not a fact of the matter involved. Very annoying. 🙂
Terminological issues do seem like the bane of most discussions in this area. I always try to be clear about my meaning, but even with those efforts, sometimes people take the wrong idea. And of course it can be hard to be sure we understand their meanings at times. (And not everyone reacts well to requests for clarification.)
I also agree that it’s very easy for us to slide between meanings without intending to. It’s part of the issue with the vague terminology. Often the concepts from folk psychology, or even scientific psychology, don’t have clean correlates in the brain. Although typically the issues with these concepts show up even in purely psychological discussions.
There are people who say we’re conscious of reflexive or habitual activity, but just don’t remember it. As you note, this is a very difficult thing to test. It’s complicated by the fact that everyone agrees that sometimes we are conscious of those acts. Someday we may be able to fully monitor a working brain in detail and reach a conclusion about it.
You mentioned above an unease with hierarchies. I should note that the hierarchies I talk about are purely epistemic crutches, not intended to be any kind of statement of ontology, just a convenience to understand something much more complicated. I shared a paper a while back that looked at it as a series of dimensions in configuration space. I have no objection to that. The hierarchies are really just of concepts in order (or reverse order) of how much they demand.
Sorry, I should have made myself clearer. I didn’t have your conceptual hierarchies in mind. Just noting the debating risk involved in talking about (to paraphrase) awareness amounting to perception of perception etc… Not wrong per se, but rather open to a misconstruction.
Ah, ok. Thanks for the clarification!
What comes to mind in such examinations is that humanity has no doubt contemplated consciousness for thousands of years. The ancient Greeks, Romans, Sumerians, Akkadians, surely tried to divine what “being” really means. Are we any closer? I doubt it.
Using a tool to introspect the tool where the knowledge gained provides no survival benefit… Knowing what consciousness “is” does not put food in my belly, nor reduce the work I have to do day in and day out. It seems like a curious pursuit whose goal may never be attained.
I think that the application of such knowledge to the development of AGI seems like a shallow justification. That is, no ML researcher will cease AI development based on some discovery regarding consciousness. Consciousness be damned, we’re gonna build AGI because we can and because we can make money doing it.
In a thousand years (like humanity will exist then) I suspect the concept of consciousness will remain just as contentious.
Identifying contemplation about consciousness prior to Descartes can be contentious. The Greeks did write a lot about the soul, which sometimes came close to our modern conceptions of consciousness, but sometimes ended up closer to the ghostly version we talk about today. It’s hard to say to what degree the Sumerians and Akkadians thought about it, because we have so little of what they wrote. Robert Bellah notes that there isn’t much evidence for “theoretic culture”, thinking about thinking, before the Axial Age.
Definitely there was never really any survival benefit for the mind to understand itself. It’s why introspection works well enough in our day to day life, but has pretty limited value as a source of information on the structure of the mind.
I do wonder how much a successful AGI would trigger our intuition of a fellow conscious being. My guess, unless it’s architected to be like us in a Westworld type scenario, is it won’t. There are just too many idiosyncratic things about biological minds, that we only have due to evolutionary history, that it probably won’t make sense to put in an AGI. I don’t think that should surprise us. Submarines don’t move through water like fish, nor planes through the air like birds.
Maybe you’re right about a thousand years from now. We might be uploaded entities in a virtual environment, with technology that fully understands and can reproduce the operations of the brain, yet still be arguing about the nature of consciousness. Although I tend to think at some point we’ll probably stop talking about it, much as everyone stopped talking about the élan vital of biological life.
At what age did Helen Keller become conscious? It’s safe, I’d think, to say she did. How deeply tied to this concept of self is vision? What of words? Without words, and the brain’s application of them to every experience, are we conscious at all?
Keller is an example that everyone who articulates a language-centric theory of consciousness ends up discussing. Keller herself described her pre-language experience as “being at sea in a dense fog”. Does that count as consciousness? I don’t know.
What’s interesting is that she was old enough to have picked up some language before the illness which took her sight and hearing, but probably not old enough to have developed an inner voice. Her subsequent capabilities might have been much more limited if she hadn’t had that period.
I personally don’t think language is necessary for consciousness. But there’s no doubt that it dramatically enhances the consciousness of humans. Someone who never acquires it probably exists in a much more primal state.
I wish philosophers would distinguish between actual illusions and so-called “cognitive illusions”, or as the latter used to be simply and aptly known, “mistakes”.
In an honest-to-goodness illusion like the Müller-Lyer illusion, the phenomenon does not go away when one deeply understands the truth. No matter how carefully I measure the lines and how much I learn about perception, one of those lines still looks longer. Contrast, for example, the belief that time itself (not just some large but particular class of processes, i.e. macroscopic processes) flows from past to future. Now that I fairly well grasp what’s actually going on, the naive belief no longer has any hold on me.
I’ll still use conventional ways of talking about time, of course. No need to distract people with TL;DR versions of stories of everyday life. Only in a conversation about the philosophy of time is more care actually needed.
Most of the “illusions” that philosophers have identified about consciousness are just plain vanilla mistakes. For example, I don’t *perceive* my peripheral vision as fuzzy or crisp. All I *perceive* is things like “something is moving over there!” I *theorize* about how fuzzy peripheral vision is, and nothing in my perception contradicts my theory that it’s fuzzy.
I’m not trying to draw a bright line between perception and theory. Maybe it’s a continuum, like the levels of outdoor illumination over a 24 hour period. Which can be like the difference between night and day.
I think the use of “illusion” is just to acknowledge that things do seem a certain way, but then to point out that they aren’t that way. But definitely not all of the issues are actually illusions. Some really are just bad theorizing, theorizing that large segments of the philosophy of mind have convinced each other is obvious.
That said, while I am aware of the lack of acuity in my peripheral vision, it’s not something I think about often, and when I’m not thinking about it, my impression remains of a detailed field of vision. I also stubbornly continue to perceive just as much color in the periphery as in the center. However, I do notice that I’m often wrong about those colors. And I certainly don’t perceive myself to not see everything in that field I’m not attending to, at least until almost running into something.
But when it comes to all the noise about qualia / phenomenal properties, yeah, I’m more sympathetic that “illusion” is the wrong word. Most people don’t even know what philosophers are talking about with those terms. On the other hand, the theoretical mistakes being made seem similar to the ones that most people have made throughout the history of this subject. We seem to have a powerful inborn intuition of dualism, or a dualist-like outlook. “Illusion” is a clumsy way to identify it. I’ve historically just described it as a problematic intuition, but I’m not sure how much bite that has.
I offer the following as corrections, and not just alternate ways to say exactly the same thing.
(1) I perceive the world as stably and richly colored, no matter where I’m looking.
(2) I perceive the world as stably detailed, no matter where I’m looking.
That’s a good way to put it. I’d add one additional point. We don’t perceive most of the changes in where we’re looking.
A semi-professional philosopher I happen to know, advocates the distinction between illusion and delusion. His use of the terms maps onto the distinction you are pointing out, but unfortunately “delusion” has a very negative vibe, liable to be taken personally (“are you saying I am being delusional??”) instead of clarifying the discourse.
Historically I think, “illusion” is something everyone sees (or most everyone), while “delusion” is something only a few people experience. Which makes it a much more aggressive and accusatory phrase. In that sense, Blackmore’s choice for what she’s talking about is a bit strange. I thought maybe she had adopted it to distinguish her views from Frankish’s illusionism, but her use of that phrase goes back to the 2005 edition of her book. Not sure why she chose it. I suspect because she really wants to challenge people about it.
As you know Mike I’m still a novice to philosophy of mind and consciousness. Your blog has been a fascinating exploration for me. I’m grateful. So, as to illusionism. Well, try as I might, I cannot overcome the conclusion that the idea that consciousness is an illusion (whether strong or weak) is manifest nonsense. If “illusion” means it does not exist, is not real, then that concept is pure nonsense. I certainly accept that our physical senses have certain anomalies, e.g., our visual blind spot. But that is hardly proof that consciousness does not exist. And I certainly accept that the science as well as the philosophy on the matter is in its infancy. We have reached, at least temporarily, an impasse in our understanding. However, mythological limitations in understanding are not proofs of nonexistence. That’s an obvious and indisputable fallacy. I can only surmise some intractable bias is the culprit to such nonsense or, as you say, I’m missing something.
What I meant to say was that “methodological” limitations in understanding are not proofs of nonexistence. I.e., if the scientific methods applied fall short then that is not a proof of non existence.
That’s a pretty common reaction Matti. I think it’s worth noting that no illusionist, strong or weak, denies the existence of functional consciousness, the type that enables us to be aware of the world and ourselves. It’s only when we get into the idea that there’s something intrinsic beyond that functionality that illusionists start denying things.
Definitely the science still has a long way to go. But remove the c-word from the discussion, and there’s a lot of progress to see in understanding cognition and perception, with vision having received the most attention. Things get murkier the further from the early sensory cortices we go, but progress is always happening.
However, it’s always in terms of functionality. For those holding out for the non-functional aspects, there’s been no progress. The question is whether there ever will be.
The distinction between “functional” consciousness and “non-functional (?)” consciousness seems to be a distinction without a difference. It’s all an illusion or none of it is.
That doesn’t seem obvious to me. Can you help me connect the dots? What is the reasoning that leads you to that conclusion?
Maybe I’m not getting the difference between “functional” consciousness and whatever its complementary opposite is.
All consciousness is the interface that an organism uses to engage with external reality. Phenomenal experience is the interface. That is its “function” if you want to use that term. The interface certainly is distinct from external reality so, in that sense, it is an illusion, just like the feeling of pain in the burnt hand (and the hand itself) exists not in the external-world hand but in a phenomenal space generated by the brain.
However, the word “illusion” loses much of its meaning if all phenomenal experience is illusory. “Illusion” makes sense only in a reality-testing process that eventually leads to new information, but this process itself would just be another part of the interface if we try to apply the term so broadly.
Thanks for elaborating.
I’m a functionalist, a “type-A materialist” in Chalmers’s classification. To me, the only thing about consciousness to be explained is the functionality. So I’m probably not a good person to explain what people are talking about when they assert there are non-functional aspects. I’d just note that a lot of philosophers (Nagel, Block, Chalmers, Goff, etc.) take it as obvious that there are. The intuition that drives this stance seems similar to the one that makes most of us innate dualists. The question is where this intuition comes from.
Your point about illusions seems similar to Anil Seth’s hallucination one. He describes consciousness as a hallucination that usually correlates with reality. Others describe it as a simulation. My only caution with these descriptions is they could be taken as implying there’s a presentation of some sort happening somewhere between the sensory and reactive processing. I don’t think there’s any evidence for that.
My way of describing it is predictive models, in the sense of a galaxy of conclusions and reactive dispositions. But those models are abstract. They leave out a lot of details. The details retained are optimized for our evolutionary affordances. We’re all familiar with visual illusions, cases where our prediction circuitry reaches the wrong conclusion.
The point of illusionism, at least as I understand it, is we also have introspective models of our internal states. Like the external ones, these models are abstract, optimized for certain purposes. And as long as we use them for those purposes, they are effective. But they didn’t evolve to give us information on how the mind works. When we try to use them for that, we reach the wrong conclusions, just like in the case of sensory illusions.
As I noted to Paul, “illusion” could be seen as an awkward label for this. But I’m not sure what a better one might be.
“predictive models, in the sense of a galaxy of conclusions and reactive dispositions. But those models are abstract. They leave out a lot of details.”.
The model is the interface is phenomenal consciousness, not separate, not abstract.
“a presentation of some sort”
When I smash my hand my hand hurts. Maybe you’ve had a similar experience. If my hand hurting isn’t “presentation” then pick another word.
Defined that way, I don’t know that any illusionist would contest that version of “phenomenal consciousness”. They would just point out that it’s not the definition used by non-physicalists.
The point about the presentation is that there is no central place, no theater where it all comes together and then is reacted to. (Unless we want to say the whole brain is where that happens.)
When you smash your hand, the nerves in your hand send signals to the brain, which causes an affective assessment, which continues as signals from c-fibers then start to come in. The assessment cascades to other systems in the brain. The perception of the assessment motivates immediate and longer term reasoning.
The strong intuition that there’s something more, that the painfulness of pain has been lost in this description, is what illusionists would say is the illusion. Gilbert Ryle would say it’s a category mistake, treating the whole as separate from the sum of the components. That we’re still talking about it today, seven decades after he pointed that out, speaks to how powerful the intuition is.
You are missing something important in your account of the hand pain.
Why/how is the pain felt in the hand when everything you talk about (signals to the brain, signals from c-fibers, other systems in the brain) is in the brain? The “theater” is the entire body located in phenomenal spacetime with the pain “presenting” in the hand. It isn’t some internal TV screen that a homunculus is viewing.
What about the well known phenomenon of phantom pains in amputated limbs?
Doesn’t that make my point? Yes, the pain is generated by the brain but it is “projected” to a phenomenal body. That is exactly what consciousness does to model the world and the body in the world. In this case, and in many others, the “projection” is at variance with the actual world.
Now I am confused. So when you say the “theater” is the entire body, you don’t mean the entire body, but merely the body image? Where is that, if not in the brain?
By entire body I mean the phenomenal body – the body we experience. But the “theater” if we want to stick with that term includes the experience of the external world too. Yes, this is all generated in the brain. It is our consciousness and it is what allows us to act in the world. It is the model of the world and of ourselves in the world.
I have no wish to stick with the “theatre” notion. It is a deeply flawed analogy. The whole experience/experiencer dichotomy is for the birds. If that’s your point, we are not in disagreement. But I don’t see how it helps to use “model” in place of “theatre” — it’s just as suggestive of the same dichotomy. Unless, that is, one explicitly uses “model” to mean any means of making predictions, as I think Mike does (I hope he’ll correct me if I am wrong :-)).
The problem with the word “model” is that it is somewhat undefined as far as implementation. A “model” could be purely mathematical/logical for example. But what is the nature of the “model” for organisms? It is consciousness. Consciousness isn’t the useless add-on to a model but it is the model.
By saying it is the “model” I am trying to get away from the notion that our brain somehow generates a “model” of the world and an additional illusory consciousness that apparently is useless.
OK, we do disagree, then. I don’t see consciousness as useless, merely as not an agent (in the sense of having some executive powers), which is not the same thing, but “epiphenomenal” seems to be used (incorrectly, IMHO) to cover both options.
To be clear I am saying it is the opposite of useless. The epiphenomenalists and illusionists to some degree are the ones saying it is useless, that all the “real” stuff is happening under the covers and we are largely ignorant of it.
I certainly agree with the view that “all the “real” stuff is happening under the covers and we are largely ignorant of it”. But that does not entail consciousness being useless. It is not consciousness itself that’s an illusion, but its status as a decision-making agent.
Interesting discussion guys.
I’d just note that illusionists do not say that consciousness is useless, not even the strong ones. They actually see it as vital. It’s just that we can’t take everything it tells us without question, particularly what it seems to say about itself.
I guess everything is relative. The pain in the hand from smashing it might be an illusion, but it is a useful illusion. The same is true of the rest of the illusion: spacetime, the world, memories, ideas, concepts, and theories. I don’t know what the illusion seems to say about itself that is different from what it says about the rest of the world. The illusionist argument just seems to be an insincere argument against anti-materialism, especially when it picks and chooses the sort of consciousness it sees as useful and vital.
Um… What’s “insincere” about taking the view that it is the status of consciousness as the decision maker that’s an illusion? In taking that view I am not trying to refute anti-materialists, but to make sense of experimental results and of my own introspection, guided by such results.
What experimental results are you talking about? And the illusionist argument is that your introspection is wrong anyway, so why would you try to make sense of it?
The reason I want to make sense of my introspection is simple enough. Right or wrong, it provides data that needs accounting for, just like the solidity of objects had to be accounted for once it transpired that solid matter was in fact mostly empty space. Dennett, for example, though often labelled an “illusionist”, explicitly accepts that what subjects of experiments report about their own mental processes is valid information and as such has to be taken into account alongside scientific measurements; hence his advocacy of heterophenomenology.
What experimental results? Most directly Libet’s and a number of subsequent ones. But there are plenty of others pointing less directly in the same direction: from Jung’s word associations, through Kahneman’s framing, to various instances of priming and of change blindness. They all suggest, one way or another, that our apparently perfect knowledge of our own consciousness is at best very incomplete and at worst quite wrong.
As it happens, I used to feel (as it would seem you do) that claims of consciousness being other than it appeared to us were, at best, far fetched. But they prompted me to pay closer attention to my own decision making and I was surprised to realise that in the final analysis I could not attribute *any* of it to my consciousness. The more I looked at it, the more it appeared to be merely a record of decisions made “elsewhere”. So when Libet caused all that fuss with his experiments, I was already primed to see that in fact they confirmed my self-observations. And the more I thought and read about it, the more I got persuaded that (a) I am not my consciousness and (b) that consciousness has little or no executive power — it is not an agent; I am.
In your book that would seem to make me an “illusionist”, yet I don’t care what anti-materialists believe; let them believe what they will (though *some* positive arguments in that direction would be of interest). Nor am I making any metaphysical assumptions. I am simply and very sincerely trying to make sense to myself of how minds (mine included) work.
I’m curious why you think they’re being insincere.
The problem is they start with a materialist view to demonstrate that any view other than the materialist view is misled. If you start with an idealist view, you can do the exact opposite. In other words, if you don’t start with the materialist assumption, it can’t prove anything. So rather than addressing the underlying metaphysics, the illusionist essentially (though not in these words) is arguing the idealist view is misled because it isn’t materialist.
I think somebody termed the illusionist view self-refuting. If consciousness is an illusion, then there can’t be any guarantee that the illusionist argument is on any kind of firm epistemological ground, unless the argument is that the view is somehow outside the limitations of consciousness itself.
It’s simply a backhanded way of arguing against anti-materialism.
So your argument is that their argument is so terrible, no one could really sincerely believe it? I wonder how many philosophers you’ve read; I’ve often thought that, yet typically had the impression that they were fully sincere. And I think you should consider that maybe you don’t understand their view.
I say they should address the metaphysics directly rather than arguing that the opposite metaphysical view is a misunderstanding. Idealists argue that matter is an illusion and that materialists are misunderstanding what they are seeing too. Anybody can play the game: “I disagree with your view; you are mistaken because you’ve started from a different assumption than I do.”
If you prefer “backhanded” or “indirect” to “insincere”, maybe those work better for you, even though the three are synonyms. The illusionists are primarily making a materialist argument, but reaching it indirectly by trying to explain how an idealist comes to a wrong conclusion (because they started with the wrong metaphysical assumption about reality, or none at all). If you don’t assume reality is material, then you can’t argue consciousness is illusory.
I think you hit the nail squarely on the head with that insight—what metaphysical assumption about reality do you start with?
I would probably take an approach similar to Nagarjuna’s which means I try to avoid speculating about “reality”.
Pragmatically I tend towards the scientific which brings with it usually a bias towards looking for material and physical explanations. However, I think the relationship between what we call “physical” and what we call “mental” is complicated. In the end, knowledge of reality or anything has no meaning unless at some point mind becomes involved. Since it is required for knowledge, mind can’t be left out of the world description or consigned to a second class status.
To be clear, I am anti-metaphysical, so I say a pox on both their houses.
I didn’t intend for my description to be functionally complete, so I didn’t get into things like the body schema. I was just aiming to note that it is a functional process, and illusionists are focused on the notion that there’s something more there.
Your point about the internal TV screen is what I was trying to get at with “theater” and “presentation”. So I think we’re on the same page. Certainly we can define those words to mean more defensible ideas, but unless we’re relentless in reminding others what we mean, that seems at risk of inviting confusion.
In response to Jim’s last comment: I see the mental block that everyone has about our experience, which makes it confusing. The notion of illusion is a rebuttal to both substance and property dualism, where there is someone viewing a presentation, the so-called Cartesian Theatre. Also, everyone sees the brain and mind as a single system, a system that is capable of being both objective (veridical) and subjective (non-veridical) at the same time. That, my internet friends, is what is referred to as a paradox.
When it comes to the mind with its experience of consciousness, Hoffman’s Interface Theory is a great analogy, but like all analogies, it also falls short. In my opinion, what I think everyone is missing is that mind is a separate and distinct system, a system that has its own unique properties (quantum), a system that emerges from the classical brain, a system that operates on the substrate of that brain and a system that uses that substrate for its own purposes.
It’s not a quantum leap to arrive at that conclusion, because the very idea of emergence is a fundamental feature of increasing complexity everywhere in the universe. Also in my opinion, the only individuals who would have a legitimate personal reason to reject my assessment would be dualists or philosophical zombies who don’t know what it means to understand…
I’m not sure Hoffman’s Interface Theory precludes the idea that the interface itself is a separate system that emerges from the physical brain. If it does, then I agree with you that it falls short.
Let me offer you my opinion on the apparent intractable bias that you’ve identified here. As I see it, illusionism, and especially functionalism, are two ways of countering the observations of John Searle. This is to say that these interests are hell-bent upon ensuring that people believe it would be possible for something like Searle’s hypothetical Chinese room to “understand” by means of being equipped with the proper lookup tables. The illusionist contributes by sowing seeds of doubt regarding the entire concept of “consciousness”, while the functionalist implements the tautology that whatever functions like it’s conscious thus is “conscious”.
I realize that you’re more familiar with the ideas of Searle than I am, though I don’t think his demonstrations of his opposition’s failures went far enough. Beyond just “understanding”, I wish that he had gotten into “pain”. The following is how I like to add to his position:
In science it’s understood that when your thumb gets whacked, nerves send signals to your brain that are processed through certain neural firing, which ultimately results in you feeling whacked-thumb pain. Illusionists and functionalists would like us to believe that the thumb pain arises when certain information is properly processed into other information. I object, because in a natural world thumb pain should not exist by means of processed information alone, but rather by processed information that animates associated thumb-pain physics. (That physics might be certain neuron-produced electromagnetic radiation, but regardless, some such physics should be required in a non-supernatural world.)
What their position suggests is that if we had sheets of paper inscribed with markings correlated with the information that’s sent to your brain given a good thumb whack, and if a vast scanning and printing computer were to process that information into a new set of paper inscribed with information correlated with your brain’s response, then functionalists and illusionists believe that something here would indeed feel what you do when your thumb gets whacked!
I obviously object. But if that second set of inscribed paper, correlated with your brain’s response, were then fed into the right scanning computer armed with the physics of phenomenal experience, then I’d say that this machine could conceivably create something that feels what you do when your thumb gets whacked. And yes, I suspect that scientists will empirically discover this physics to exist in the form of certain electromagnetic fields that the brain is known to produce through neural firing. Conversely, I think you’ll find that all non-physics-based solutions are inherently unfalsifiable, because it’s not possible to empirically test “non-physics”. Some such answer could still be true, however, just as a god could still be responsible, though I like to suggest answers associated with this world.
Phil Eric, I think you’re on to something. The intractable bias that I suspected as leading one to illusionism could very well be the strong desire to see brain processes as merely computational.
Happy to be of service, Matti. When they start exploring academic conceptions of consciousness, many tend to align with charming people who say all sorts of obscure things to imply that they’re fabulously clever. Thus today there seems to be an incentive in the field not to speak plainly, and in turn this tends to serve as cover for the popularity of all sorts of ridiculous ideas. Fortunately you and I seem less swayed by such cleverness and charm. Wherever possible we keep things simple enough for effective reduction, the advice suggested long ago by William of Ockham.
“She asks us to consider, are we conscious right now? Of course, the answer is generally going to be yes. But what about a few moments ago, when we weren’t thinking about our consciousness, when we weren’t introspecting?”
I’m a fan of Blackmore in general, but I think this consciousness argument reduces the concept to a semantic game. Actually, I think the introspecting when we ask ourselves if we are conscious is simply the recall of the memory of being conscious a few seconds ago. It doesn’t mean we weren’t conscious a few seconds ago. If we reduce consciousness to the recall of being conscious, then what are we going to call all of our other aware mental states? It might be that there is no unity to those awake states, but that would be a different argument. There clearly is a distinction between general awareness and wakefulness on the one hand, and unawareness like deep sleep or coma on the other.
I think it is a semantic game to at least some extent, as I noted in the finish to the post. It’s all in what we feel needs to be there to warrant the label “consciousness”.
Blackmore seems to argue that the unity only exists while we’re introspecting. I’m not sure I agree with that. It seems like the attentional dynamics provide a good deal of unity. It seems hard to deliberate without a sense of the world and ourselves in the mix. But she’s probably right that it’s not at the same level as when we’re actively self-reflecting.
Your point about introspection being about the past is interesting. It reminds me of Daniel Dennett’s proposition that whether we’re conscious of something or not is an attribution we make only after the fact, after content has managed to have enough causal effects throughout the cortex.
It is a semantic game, Jim; and I’m not sure what the pay-off is for academics like Blackmore and Frankish other than building a fan base to sell books. It’s like you so succinctly pointed out: consciousness is a “state of being”, and that state is awareness and wakefulness; it is either metaphorically online or it is offline. What consciousness is not is what occurs during that wakeful period, which is the function of the mentation process. The mentation process is the function; it is not the state of being aware or awake. Wakefulness and the function of wakefulness are two distinctly separate things.
This is a concept that even a three- or four-year-old child could grasp; I’m not sure how something as fundamental as a “state of wakefulness”, and the antithesis of what occurs during that “state of wakefulness”, flies so far over everybody’s head. But that’s the Achilles heel of technology and access to information, I guess; it makes us dumber, not smarter.
But then again, Mike will coyly refute my comments by asserting that “consciousness is in the eye of the beholder”. It’s all a semantic game that everyone seems perfectly content to play.
I think I’m with you James. This consciousness argument reduces to a semantic game. To me the illusionists “illusion” and Anil Seth’s “hallucination” make these Anglophone philosophers sound vaguely like French philosophers with their insufferable propensity for the ironic; “Man is born free but he is everywhere in chains” (Rousseau), “The poor man tormented with hunger feeds those who plead his case” (Camus), “Man is condemned to be Free.” (Sartre). I fully agree James. My consciousness is an interface or translation for me to engage with reality. Illusions and hallucinations are not real. That’s how those concepts function in English! The phrases are at best clever irony. Or, as I said, it’s manifest nonsense.
I agree that the weak/strong illusionism dichotomy is just semantics. The only usefulness to this categorization might be if weak illusionism were making a weaker claim, for example, if it specifically targeted substance dualism or the Cartesian theater. But typically, both types of illusionists will deny phenomenal consciousness altogether, so I don’t see the point in playing these semantic games.
Also, one thing I don’t find helpful is using talk of simulation and mental models, or of the brain ‘constructing’ our conscious models, as I’ve heard you say in the past. When most people think of the mental model of a table that is constructed in their brains, they think of the mental object which has color and shape etc… But for the illusionist, there is no such thing! According to the illusionists, we aren’t actually acquainted with a mental object which has (virtual) color and shape.
Rather, talk about the mental ‘model’ of the table just describes certain brain processes (which obviously don’t have the color/shape of the table). Thus, I feel it’s misleading to continue to speak of your belief that we have a virtual mental model in our head, or that our brain simulates reality, because it implies that you think that we have actual access to such a model. That’s how the majority of people would interpret it, I’m sure. Or if you continue to speak of it in that way, then it might be wise to add a big disclaimer.
When it comes to philosophy, saying something is just semantics is often dismissing the entire debate. Although I agree it helps to understand when that’s the primary contention. Often I find that people are intensely resistant to this simple realization.
I think weak illusionists are using a different definition of “phenomenal consciousness” from strong illusionists and non-physicalists, something that might be subjectively intrinsic, practically ineffable and private, but doesn’t have these attributes in any absolutist manner, and is certainly not infallible, but is functional. Of course, this isn’t Ned Block’s version, which he makes clear is not a functional concept, so using it can be confusing, but I think when a scientist or other physicalist talks about phenomenal consciousness, that’s usually what they’re referring to. I know most of the references from me in this blog’s archives were meant in that manner.
I expressed my own reservations above about consciousness as simulation. I do think the word has value in certain contexts, such as describing deliberation as sensory-action scenario simulations, but not as a blanket description of consciousness. I’m not sure why you think I use it that way. (I did do a series of posts recently on the simulation argument based on Chalmers’ book, but the word “simulation” wasn’t used that way.)
When I use the word “model”, I’m using it as an information processing concept, a constellation of predictions. Often I qualify it with “predictive”. This isn’t referring to something consciously constructed. It’s the underpinnings of functional experience, the work that goes on in the hidden layers below the levels accessible by introspection. In philosophy, the word “representation” is often used, although I think that has its own issues. Every term in this definitional bog does.
That said, I think my use of “model” is clear in the contexts in which it’s used.
Alex, let me be clear about something. I accept weak emergence. That means I have no problem availing myself of language that refers to non-fundamental and even folk concepts. Otherwise I’d be limited to only talking in terms of fundamental physics. If you see that as somehow illegitimate, well then I fear you’re going to be disappointed.
About “just semantics”:
I mostly agree, although I would add the caveat that saying a debate is “just semantics” really means that the disagreement in question is not truth-tracking, meaning it doesn’t track a difference in the state of affairs in the world. Rather, the disagreement has dissolved into what the appropriate syntactic labels should be to tag certain states of affairs (but no one is disagreeing over what the actual truth/state of affairs is). This doesn’t mean that the debate is pointless, though. On the contrary, picking the right ‘syntactic’ labels serves an important signaling function (think of the negative reputations associated with certain syntactic labels, for instance). Naturally, the implicit premise is that, assuming we are solely concerned with the truth, debating ‘just semantics’ is not helpful to the pursuit of knowledge.
As for whether this applies to the weak illusionists, I think it mostly does, especially in recent times, when in light of the literature advances it’s become clearer that the weak illusionists have to give up more and more substance to the label of phenomenal consciousness. I agree with Frankish that “diet qualia” are basically zero qualia, and so the entire disagreement between SI and WI is “just semantics”, so to speak. However, not all weak illusionists might be on board with diet qualia, and some might really believe that the classic intrinsic/private qualia can be rescued under physicalism. I think you and I would agree that this is doomed to fail though.
About the modeling analogy:
I think the issue here is precisely that phenomenal consciousness isn’t weakly emergent from our physical substrates. Physicalism is eliminative, not reductionist. It is totally fine to continue to use folk concepts when describing weakly emergent physical phenomena (like the heat of a hot iron rod) precisely because those macroscopic phenomena are fully accounted for in the physicalist reductionist program. But illusionism can’t account for phenomenal consciousness (hence the illusion part), so by definition it must eliminate the content of folk psychological terms (which refer to facets of phenomenal consciousness). Some like Churchland have fully embraced this. Others have attempted to redefine folk psychological terms to no longer be about their original meaning. I think both approaches are totally fine (but those in the redefining camp should be sure to add a big disclaimer to make things clear).
However, I don’t think it’s accurate to say that no redefinition disclaimer is needed because physicalism is just weak emergence. That doesn’t seem true: phenomenal consciousness is not weakly emergent from physical behavior, and so our descriptions of phenomenal experiences will not be reducible to descriptions of physical stuff.
Now it might be argued that folk psychological terms were never meant to refer to intrinsic/private experiences in the first place, but I don’t see how that can be. Most of us will admit that our starting points are based on our acquaintance with our experiences. Also, as I previously mentioned, if you ask most people what their mental imagery of a table consists of, they will talk about the mental object with color/shape which they think they are acquainted with as a result of their experience. Similarly, when ordinary people refer to pain, they are referring to the feeling of pain (the sensation they are immediately acquainted with) and not the brain state.
On picking the right syntactic labels, I agree it can make a big difference. For example, consider the “phenomenal” label. It implies that we’re talking about something apparent. However, given all the theoretical assumptions Block and others heap on the concept, I think a better term might be “ghost consciousness”. Of course, they would likely argue that the ghost is what’s apparent. (The term “ghost” might be objected to. If so, then maybe “non-physical”?)
If illusionism is taken to be denying apparent consciousness, then it does look ridiculous. How can you deny what seems to be? Isn’t the illusion itself what’s apparent? But if it’s taken to be denying ghost consciousness, that the apparent ghostliness of consciousness isn’t actual ghostliness, then it looks like a more coherent proposition.
Strong illusionism is the view that the terms “phenomenal” and “qualia” are too polluted with ghostly connotations, that we need to just eliminate them and develop a better vocabulary. Weak illusionism (aka weak realism) seems to feel these terms can be reconstructed in a more defensible manner, such as equating phenomenal consciousness with apparent consciousness. In that sense, if a weak illusionist speaks of “qualia”, they can be referring to what Frankish calls “zero qualia”. (I think “diet qualia” is a strawman he designed to be immediately knocked down.)
This definitional morass is why I usually prefer to call myself a “functionalist”. (Not that people aren’t just as willing to misconstrue that label.)
Your point about phenomenal consciousness not being emergent from physics is, I think, conflating apparent consciousness and ghost consciousness. I agree that ghost consciousness isn’t emergent from physics (in my case, because I think it’s a mistaken notion). But I think apparent consciousness definitely is. Even if we refer to that appearance as an illusion, the illusion itself is real and has causal effects, most of which are adaptive. So I see no contradiction in availing myself of the vocabulary that results from it.
As a side note, I would say I’m on the SI side when it comes to defining phenomenality (as I think I earlier noted). There are already so few terms in the ‘synonym circle’ and changing the definition of phenomenality in this way is like changing the definition of ‘soul’ to mean “certain brain processes happening in hippocampus”. If we did that then we’d have no way of understanding what medieval theologians meant. It also undermines the whole impetus behind illusionism in the first place (trying to explain the appearance of a phenomena which many think is real).
Moving on, I’m not sure that I fully grasp what you mean by the “appearance of consciousness”. Up until now I’ve just been taking you to mean this as synonymous with “we have a false belief that we are conscious”, which I will also assume is the case here. But I actually think the word phenomenality as denoting appearance is very appropriate because when Block and others talk of the appearance of consciousness, they are talking about their mental models. Meaning, we have a visual mental construction of a table in our minds, but this table is only ‘apparent’ and not real, because it doesn’t exist in the real world, only in our minds. So, I don’t see an obvious need to distinguish between ghost consciousness and apparent consciousness; ‘ghost consciousness’ has phenomenality as appearance built into the definition.
Now it seems to me that illusionism is just the denial that we are really imagining some ‘apparent’ mental object. I view it as the thesis (please correct me if you disagree) that in reality, we have no such virtual experiences, and what we call experiences are just brain processes.
Anyways, we can define it (for the purposes of this conversation) however you want. We can say that “ghost consciousness” refers to the fact that we are experiencing/acquainted with an apparent mental object in our head (apparent in the sense that it’s not the real object), and we can say that “apparent consciousness” just means that we have a false belief that we are conscious. But this now comes at the cost of once again changing the definition of our folk psychological terms. Most folk psychological terms refer to the mental objects/feelings/sensations in our virtual model, they obviously don’t refer to mistaken beliefs.
So, it seems like you can either believe that consciousness is weakly emergent from physics or (in the exclusive sense) that folk psychological terms refer to something real. It seems you can’t have both, either you redefine phenomenality to be compatible with current physics and risk changing its meaning from folk psychological convention, or vice versa.
I should clarify that when I say “they obviously don’t refer to mistaken beliefs”, I mean that they are not envisioned as such. The original meaning of the words (e.g. pain) is meant to capture the contents of our mental models, and I would say that none of this content is weakly emergent from physics. Hence the need to change the ordinary meaning of folk psychological terms (or just drop their use entirely, as the Churchlands advocate).
Perhaps you can elaborate on what you mean by the “appearance of consciousness”? Like I said, I’ve been taking this so far to mean that we have the mistaken belief (i.e. faulty neural representations) of there being some phenomenal component to our brain. Where phenomenal can be defined as the appearance of the mental objects that populate our mental model, or the way they seem to us (as coming with intrinsic properties).
But I don’t really feel satisfied with this interpretation, because it doesn’t seem to follow that folk psychological terms are explainable as being about the “appearance of consciousness” as you claim, nor does it readily mesh with your concession that illusionism denying the seeming of consciousness would be ‘ridiculous’. By ‘seeming’ are you just referring to our beliefs (neural functions)? Or something more?
Sorry for the delayed response. I’ve been stuck in meetings.
I think we have to accept that language evolves. When we use words like "star", "planet", or "disease", we don't mean the same thing medieval people meant. Although we probably do for "ghost". We just have to be careful when assessing their historical positions to take that into account. (I don't know if you've ever followed anything involving legal precedent or constitutional law, where shifting historical meanings become very important.)
On whether we should change the definition of “phenomenal” in particular, I’m neutral. (I might change my mind later.) But I do think we have to acknowledge that there are multiple views out there. For now, I’m just going to be sure to clarify what I mean if I use it, and be careful not to assume what others mean by it, unless it’s clear from the context.
By apparent consciousness, I mean functionally apparent, in the manner in which it causes reactions in us, to form judgments about things, make decisions, issue verbal reports, etc. Many of those judgments, most I think on a day to day basis, will be accurate ones. But when those functional appearances cause us to judge that we have ghost consciousness, then that would be the illusion, although I usually just call it a set of misleading intuitions.
You may not see the need for a distinction between appearance and ghost, but that’s the distinction I think illusionism makes. It’s not like illusionists say we’re all stumbling around blind, deaf, and senseless.
So, if you adopt the stance that phenomenal consciousness = apparent consciousness and only apparent consciousness, then that position seems equivalent to Eric Schwitzgebel's innocent conception, and compatible with weak illusionism. If you say no, it must also include fundamental intrinsicality, ineffability, privacy, and infallibility, then I think you're requiring ghost consciousness, the version the non-physicalists and strong illusionists have in mind.
On whether to reconstruct a concept or eliminate it, that’s always going to be a judgment call with no one size fits all answer. I did a series of posts recently on Chalmers’ book about virtual reality, which included a chapter on strategies for reconciling the manifest image with the scientific one, which includes elimination, but also identification, reconstruction, or autonomy. (Of course, being Chalmers, he exempts consciousness from consideration, but I see no good reason for it.)
My take depends on whether the concept remains useful in day to day interactions. If it does, then I think it makes sense to either figure out how to identify it with the scientific understanding, or reconstruct it to fit with that understanding. I’m not a fan of autonomy, except as a temporary status.
Hope that gets to everything you asked. Let me know if I missed anything.
Thanks for this. One thing that I think sometimes gets lost in translation is that when Chalmers et al. speak of consciousness being intrinsic, private, and all the rest, they are not saying that consciousness IS those things. In other words, intrinsicality, privacy, and all the other terms in the synonym circle are meant to be descriptors or adjectives that help tag or locate phenomenal content, but such descriptions are not actually telling us what phenomenal content is. To learn what phenomenal experiences are, you just have to experience them. The idea of describing them as intrinsic and private is just to help you locate your own experience.
This is why we need to be careful when we say things like phenomenal consciousness doesn’t exist. Because non-physicalists will usually take that as meaning “the content that is meant to be captured by the term intrinsic, doesn’t exist” and not “the content that is originally meant to be captured by the term intrinsic, is not in fact intrinsic (but it still exists)”. Hence, the charge that illusionists are saying we don’t really feel pain etc…
Also, even saying stuff like “I feel the mental content you are describing can be fully accounted for in functional/physical descriptions” is ambiguous. What does that mean? I can imagine two scenarios:
1. The non-physicalist is saying mental content is “x, y, z” and functional descriptions are “x, y”, and so functional descriptions don’t fully account for mental content. And the functionalist agrees that functional descriptions are “x, y” but thinks mental content is also only “x, y”.
2. The non-physicalist is saying mental content is “x, y, z” and thinks functional descriptions are “x, y”, and so concludes that functional descriptions don’t fully account for mental content. But the functionalist thinks that functional descriptions are “x, y, z” and so thinks mental descriptions [x,y,z] can be fully accounted for in functional terms.
The difficulty in this debate is that it seems like there is no way to even verify or falsify whether 1 or 2 is true. To me, if you're saying 1 you sound like a philosophical zombie or a radical skeptic, but if you're saying 2 you sound like a panpsychist who disagrees over what functional terms are meant to capture. Either way, it's clear that something big was lost in translation on both sides; the maddening thing is trying to discern what exactly that is.
Make no mistake, the illusionist approach is a "sleight of hand" maneuver designed to ridicule idealism and dualism; it is nothing more than an ad hominem slipped in the back door. The ad hominem tactic is used to deflect from the "fact" that our current model of materialism is a joke. To be clear, just because physicalism is a joke does not mean that idealism is any less of a joke. (Sorry Marcus & Alex). "Genuine" intellectuals should be spending their mental capital finding solutions to the contradictions and paradoxes built into the current models of both materialism and idealism, not "bashing the other guys". So, are illusionists like Frankish disingenuous? Absolutely!
Mike, I would still like you or any other functionalist to explain how a single system (the brain) can be both objective and subjective at the same time. Your explanation should follow the guidelines of what objective means in contrast to subjective. For the record: subjective is the antithesis of objective, which means that subjectivity is not veridical whereas objectivity is veridical.
In an earlier post you stated: “If you mean a system taking in information from a particular location, and in particular modes it is capable of, and influenced by all the past information it has taken in, then yes, I think a system can be both subjective and objective.”
I commented: "Dude, that is the very description of an objective system that is veridical; how in the world you can assert that it's also the description of a subjective system is beyond me. Your robots' and self-driving cars' algorithms have to be veridical or else they would jump off a skyscraper because they think they can fly."
Addressing Mike Arnautov’s concern: “I am not my consciousness and (b) that consciousness has little or no executive power — it is not an agent; I am.”
The “I” of which you speak who is the executive making the decisions is not your consciousness, consciousness is the experience of that agent. The illusionist sees that “agent” as a ghost in the machine (the machine being the physical brain). And indeed, there is a ghost in the machine so to speak, but that ghost is not some kind of spirit, that ghost is just another emergent physical system; and that system has to be quantum in order to avoid contradictions. The very notion that a single system like the brain can be both objective and subjective at the same time is the quintessential paradox; and that paradox needs to be addressed, not conveniently ignored…….
Last time I responded to this question, you accused me of evasion. It doesn’t make me optimistic that a thoughtful discussion will happen here, but I’ll give it a chance.
When you say “veridical”, do you mean the viewpoint itself? By saying subjectivity is non-veridical, do you mean to say it’s private, not subject to verification by other viewpoints? If so, then I’d say that might be true for some systems today, but could change as science and technology advances. If you’re saying that it will never change, that it’s fundamentally and forever private, then I don’t think any system has that type of subjectivity.
The type of subjectivity I described in the answer you quoted is a functional conception of it. I do think that type of subjectivity exists.
Or by “veridical”, are you referring to the accuracy of the information for the system itself? If so, then any system can hold inaccurate information. Although I’d agree that a single viewpoint in isolation has a higher chance of having inaccuracies.
It is really difficult to have a meaningful conversation with you because you seem to lack a definitional compass. I don't know if that is a tactic you employ in dialectic debates to maintain a sense of superiority or whether you are "genuinely" that definitionally bankrupt. Veridical has a very straightforward and simple definition; I suggest you look that definition up and try to understand what it means. For all I know, this is the internet and you might actually be a "bot".
“The type of subjectivity I described in the answer you quoted is a functional conception of it. I do think that type of subjectivity exists.”
The type of subjectivity you described in your original answer does not exist in any form of information processing because information processing operates on the substrate of an algorithm. All algorithms, regardless of their complexity are veridical for the simple fact that they follow the instructions built into the model; in other words, algorithms are restrained by and follow the rules. Therefore, the idea that mind is computational is false…….
An artificially intelligent self-driving automobile would never decide for "itself" to drive off a cliff into a ravine unless the algorithm itself was fucked up or the algorithm was built in such a way as to allow it to happen; either way, all AI devices are veridical because they are simply following the rules built into that algorithm. Algorithms are objective, whereas the system we refer to as mind is subjective, is it not? Isn't that the grounding premise of illusionism? Or is it only part of what the mind does that is subjective, while the other "functions" are objective?
The brain is a 100%, unadulterated objective system. It manages all of the involuntary functions of the body, it instantiates the mind during wakefulness and it carries out the executive orders of that mind without question. If the command and control center which is the mind tells that brain to make those two little legs and feet run up a flight of stairs to the twentieth floor of a building and jump out the window, that is exactly what will happen. The brain is a veridical, objective system, the mind is not.
You keep demanding evidence Mike; logic itself is the evidence, and logic dictates that a single system cannot be both objective and subjective at the same time because that is a contradiction which reduces to absurdity. Another glaring example of a contradiction is our current model of gravity: according to our current model, mass is the cause of gravity, and the mass itself is subject to its own cause. One cannot have a self-caused cause because that is a contradiction which reduces to absurdity.
If you ask me ambiguous questions, I'm usually going to have clarifying questions. Even if your view is sharply delineated, I'll often have questions. If that outrages or offends you, this is probably not the best forum for you.
In any case, the insults are getting really old.
My comments about you are observations, not judgements. Dialectic arguments should never be taken personally; we can agree to disagree and it shouldn’t be a problem. But at least be willing to take a stand and categorically state that you believe contradictions that reduce to absurdity are acceptable in your view of the world.
Furthermore; as far as I’m concerned, accusing me of insulting you is nothing more than a back door ad hominem, a tactic used in dialectic because one cannot defend one’s own metaphysical position; delineated points which I present that you willfully refuse to address. But that’s just my own personal opinion.
This is your blog Mike, so you are entitled to the last word because I’m out of here; call me boorish if you must because I can take only so much nonsense…….
Thanks for being patient with me thus far. I think I’ll make one more comment and then I should take a break from this entire conversation. One thing that I’d like to ask which I haven’t thus far, is how would you verify or falsify your functionalist claims?
For example, let’s entertain a silly thought experiment and suppose that you encountered a behaviorist in some alternative universe who insisted there was nothing more to consciousness than actual behavior. Meaning, she insists that once you’ve described an organisms’ behavior, then you’ve fully described its experiences and mental content. Anything more, like its dispositions to behavior, or its brain states, are just as superfluous as classic qualia. In this bizarre alternative universe, the behaviorist laments how the functionalist illusion has grabbed hold of most of her colleagues, and how she has devoted her life’s work to explaining how people might fall for such a mistaken belief.
I think it might be worth taking a step back and asking whether there exists any test you could employ to determine which of you is right. It seems like there is none. You can't rely on reports of pain, since that by definition constitutes behavior (the behaviorist thinks that whenever people report they are in pain, or act like it in other ways, then they are in pain). What could you appeal to in order to convince the behaviorist that you are right? Indeed, it seems like the only thing you could do would be to note that a complete behavioral description would leave out the brain states that instantiate pain, but of course this is just circular logic (it assumes that there are brain states which instantiate pain).
You might appeal to the fact that it's common knowledge that pain is distinct from the behavior/reports of pain. Or that it is conceivable that you might be in pain even if you report that you are not, or vice versa. But the behaviorist would simply insist that this is the illusion of functionalism; you think it's conceivable, but it's actually not. After all, it also seems conceivable and common knowledge that we might experience a qualitative sensation of pain in a different functional state (qualia inversion), but (as you well know) this is just an illusion. Similarly, the behaviorist says, it's just an illusion that we think actual pain is distinct from the reporting, or behavior, of being in pain.
Is there any test that you could conduct to determine which of you was right? And if the answer is no, then why do you think consciousness is functional, as opposed to behavioral, or even nonexistent (maybe we can imagine a consciousness-nihilist making an analogous argument in some other alternative universe!)? Of course, the non-physicalist has an easy answer: there is a test (our acquaintance with our experience), and this test reveals that both the functionalist and behaviorist are wrong (although the functionalist is closer to the truth). Indeed, the very fact that you picked functionalism over behaviorism or nihilism seems to suggest that you relied to some degree on this test to begin with, don't you think so?
Thanks for your time.
A friendly amendment Alex. The rhetorical reason to bring in an alternate universe here would be to say how things exist in it ontologically. But because you didn’t say how things are in that universe, there’s no reason for it to not be this one. Parsimony. And I don’t think there’s anything silly about what you’ve asked. I’ll try to diplomatically weigh in on this once Mike provides his answer.
I appreciate that much of the description is to point us at our own version of the content. And concern about the seeming denial of that content, even in a functional sense, is why I’ve long been leery of strong illusionism. If you understand that when SIs say “phenomenal”, they’re referring to the complete package described by Block, then their statements make sense. But as you noted, it’s very hard to deny phenomenal consciousness in the ghost sense without seeming to deny it in the appearance sense. So although the difference between SI and WI is definitional, definitions matter in a communication strategy.
On the “x,y,z” points, I think the functionalist has to acknowledge our burden. We can’t simply dismiss z. It has to be accounted for. But that accounting may entail explaining why we think there is z rather than explaining z itself (SI), or updating our understanding of what z is (WI). Of course, we should also be prepared to revise x, y, or anything else if the evidence shifts.
On the behaviorist question, it’s interesting because I do think there’s an important point worth considering here. How do we know any system other than ourselves is conscious? We’re ultimately dependent on their behavior, including the behavior of self report, to make an assessment. Once we’ve established that a certain stimulus or brain activity is correlated with reports of conscious states, we can then use that information in an assessment, but only because we’ve established that.
But my point to the behaviorist would be to ask whether they can provide a causal account of all the behavior of a subject without internal states. For example, stimulus A may elicit behavior A on a particular occasion, but on another occasion, stimulus A may generate behavior B. Assuming all other conditions are equivalent, that implies a difference in internal state, such as mood, memory, pain, etc.
She might talk about dispositions toward certain behaviors changing, but that’s just admitting that there are internal states that make a difference. Once we’re there, then we need to account for the causal relationships between those states, and we’re effectively in functional waters.
So functionalism is causally necessary, understandable in terms of structure, relations, and effects. The only question is whether it’s a complete account, and how we might establish that it isn’t.
Thanks again for the discussion Alex.
Say Mike I hope you don’t mind a question from a “back-bench” observer. And you may have already provided an answer elsewhere. But could you describe your concept of functionalism in relation to (the most recent) view of Hilary Putnam—which he calls “liberal functionalism.” That would help me greatly.
I have to admit I don't know a whole lot about Putnam's latest views. I'm basing my response on what he mentions at the bottom of this blog post: http://putnamphil.blogspot.com/2014/10/what-wiki-doesnt-know-about-me-in-1976.html
I don’t really see anything objectionable in those couple of paragraphs. His first point, that functionality may extend out into the environment, I take as what’s often described as the extended cognition hypothesis. As long as we don’t get too carried away with it (the brain remains the essential component), I think this is probably right.
I also don’t have a problem with the second, about not insisting only on the vocabulary of computer science. Obviously we need to involve psychology and neurobiology. Again, as long as we don’t get carried away.
I’m not sure I really understand what he’s saying by not objecting to “intentional idioms”. Intentional concepts seem unavoidable in light of his first point, so it makes sense.
There may be devils in the details if I read him at length though. And he does talk about qualia earlier in the post, but I’m not sure in what sense he’s using that word.
Was there anything in particular you were curious about?
Mike, thanks. Hilary Putnam was (many years ago) a founding father, so to speak, of the functional approach to consciousness which I find, at this point in my understanding, unacceptable. Yet I have respect for some of his other work and for the fact that he has boldly changed his mind about other views he’s held. So I’ve kept an open mind about him. Recently I’ve struggled with his new thinking, liberal functionalism. At present I’m not a convert, just struggling to understand. Thanks for your feedback.
Thanks Matti. Putnam, along with Wilfrid Sellars, Jerry Fodor, David Lewis, and others, developed functionalism. I haven't really read these guys at length, mainly because while I agree with the general idea, a lot of the specifics they speculate on (like mentalese) haven't aged well. (Nothing ages well in detail if you give it enough time, but these guys had the misfortune to be writing just prior to major advances in brain research.)
I do know Putnam later turned against functionalism and computationalism, but it looks like late in life he gravitated back toward it, albeit not with the intensity he probably had in the 1960s.
It seems like a lot of the neo-dualist movement developed as a reaction to functionalist views. Functionalism became the idea that philosophers like Nagel, Block, and Chalmers felt the need to react against and explicitly clarify that they were talking about something else. And of course, the eliminativists and illusionists are in turn reacting to the neo-dualist ideas.
My 2 cents, briefly. Isn't "SelfAwarePatterns" the definition of consciousness? I.e. the ability to worry about the future and reflect on the past. What things can do this? Humans. Maybe some great apes? But I doubt dogs, cats, and so on, much less trees, rocks … .
All the jargon and semantics are barriers to that which matters. Yes?
Consider the Texas shooter. Was he conscious of what he was doing? Did he think about the past or future? Where does he (and other similar humans) fall on the spectrum of “self aware”?
Is consciousness an illusion? Are dreams? No, they are real. But have no physical form. Now, Free Will? That may be an illusion.
When I created the username “SelfAwarePatterns” I would have said yes. By the time I created this blog a couple of years later, I had concluded that there’s no real fact of the matter. I was lured away from that position for a few years, but eventually came back to it. The word “consciousness” has a wide range of meanings: introspection, attention, perception, and sentience, just to name a few.
Just in terms of “self awareness”, the “self” can refer to a bodily-self or a mental self. Awareness of body-self seems very ancient. Awareness of mental-self, as you noted, is much rarer.
No idea on the Texas shooter. From what I’ve read though, most criminals have issues in the prefrontal cortex, which is the region that seems to coordinate thinking about the future and past, as well as emotional feelings.
My takes on your questions: Ghostly consciousness is an illusion. Self awareness and awareness overall are real. Contra-causal free will doesn’t exist, but the perception of it is useful for society.
If we want to consider a way to falsify functionalism then we should need a good definition for it. At the end of this comment (https://selfawarepatterns.com/2022/05/19/what-does-it-mean-to-be-like-something/#comment-156831) I’d say that Mike essentially gives us such a definition. “[Functionalism] could be falsified by discovering that only a specific substrate can implement it.” Furthermore he’s implied that from this perspective consciousness must exist the way that hearts, lungs, and cells work, though not the way that gravity or light bulbs work. From this view consciousness must exist generically rather than as anything specific (like certain elements of gravitational or electromagnetic fields). It’s the generic component which seems to render functionalism no more falsifiable than Christianity. Observe that the only way to effectively disprove either would be to have good evidence for the validity of a contrary and falsifiable theory.
I see that Matti and Mike discuss the founders of functionalism a bit above. I have a hunch about why this movement emerged and became so popular. It seems to be a shortcut past an apparently quite hard problem.
First there was Alan Turing's imitation game, where we try to decide whether we're speaking with a person or a computer. The presumption from this must then have been that if code were written well enough to continually fool people into believing they were speaking with a human, then the associated computer must effectively be conscious. Not only does a hard problem get bypassed here, but we also get to imagine human consciousness being uploaded to an advanced computer to thus truly exist in a virtual world.
Though “consciousness as code” became conventional wisdom, John Searle’s 1980 Chinese room was at least a fly in the ointment. Nevertheless the man clearly failed. Apparently the shortcut of consciousness as code has been too seductive.
If your brain does more than convert certain code into other code to create your consciousness however, then what else might it need to do? Beyond the non-conscious computation of brain based neural firing, there’s also a known electromagnetic field created by such firing. Certain portions of this field might exist as the medium through which we experience our existence, or a substrate to potentially test. Furthermore the best neural correlate for consciousness we have just happens to be synchronous firing. McFadden’s cemi postulates that it must take a certain kind of synchrony to get above the noise and enter the electromagnetic zone of consciousness.
Imagine how monumental it would be for science and humanity if his theory were experimentally validated in enough ways. In that case we could say that we were living with and actively debating the likes of a Darwin or Einstein, though before their greatness was even grasped. Wow!
Just to note, I also said that evidence for any form of interactionist dualism would also falsify functionalism. I’d also add that any indication that consciousness is something fundamental, like maybe a new type of fundamental interaction, might also do it. Basically prove dualism, idealism, panpsychism, or new special physics, and you’re there.
You cite my examples of what would falsify it, then you say it’s unfalsifiable. I don’t see the logical steps you use to get from one to the other.
I’m not sure why you think I would exclude gravity or light bulbs. I wonder if you forgot about this post.
Imagining having evidence for your favorite theory isn’t nearly as persuasive as actual widely reproducible evidence.
There’s a subtlety regarding falsifiability that I probably didn’t go into as well as I should have. Christianity is unfalsifiable because its truth depends upon otherworldly dynamics. This is to say not just causal stuff of this world, but rather magic that can’t be causally explained even in principle in our world alone. Christianity can always be true regardless of our evidence given that magical component. You and I are not Christians however because the position seems to conflict with many things that we have evidence for. So it’s technically unfalsifiable though you and I still consider it wrong given our evidence. Let’s call this “weak falsifiability”.
Though not nearly as spooky as Christianity, I essentially make the same case for functionalism, or that evidence for something else could suggest that it’s untrue even if true. You can’t demonstrate what would falsify it directly because the concept here would ultimately be true by means of substrateless existence, or essentially magic. In this situation there’s some kind of generic information that would need to be properly converted into other such information to thus create a phenomenal element of reality. In a causal world however machine information should not exist generically, but rather in terms of what it causally animates, or a substrate. Otherwise there should be no such information. Just as you and I believe that the rest of reality always exists by means of substrate, consciousness also should.
The thing that I think irks you about my position is that ancient to modern theistic notions demand there to be a consciousness substrate as well. They refer to it as a soul. Agreed, but as I’ve just explained that doesn’t mean that consciousness must exist with no substrate. In any case you seem to be calling my position “manifest” rather than “scientific” on this basis.
I said that your functionalism conception of consciousness would be wrong if it worked like gravity and light bulbs because each product (gravity and light) seem substrate dependent. Conversely my current conception of your position is that there must be no consciousness substrate in order for functionalism to be true. That’s the point of contention as far as I can tell since I consider it “not of this world”.
These two things are not the same.
1. Functioning in alternate substrates
2. Functioning without any substrate at all
Functionalism implies 1. It’s worth noting that in biology, the same functionality is often accomplished with varying mechanisms in different species. (Consider all the different implementations of eyes, the varieties of blood types, or how different invertebrate nervous systems are from vertebrate ones.)
2 is not implied by functionalism or any other physicalist theory.
Agreed Mike, I was wrong to say that your functionalism postulates no substrate at all. To me it’s still problematic however because I find it causally ridiculous to think that a Chinese room (Searle), or China brain (Block), or the current United States as a whole (Schwitzgebel), or certain inscribed paper that’s properly converted into other inscribed paper (me), could phenomenally experience its existence without an underlying common type of physics which suggests that phenomenal experience. That’s the manner of substrate that I was referring to. A common variety of EM field would clearly be appropriate for example in a causal world, though not the mere existence of mass/energy in some capacity, as is found in all of these non-brain dynamics.
Ultimately I don’t believe that machine information can exist generically. Thus for example genetic information may be functional informationally in a cell that it was set up to work in, or even for a biologist who looks at it closely enough, though otherwise the associated complexities should generally just remain “stuff smeared on my shoe” and so forth. Similarly a DVD requires something able to play it to exist as the sort of information that we know can potentially be unlocked. In all cases machine information should not exist without physics based instantiation mechanisms. In a natural world you’ll definitely have some variety of matter/energy for substrate in all cases, though I should have mentioned that this shouldn’t be sufficient in itself. I was referring to a causal explanation for phenomenal experience to exist, or a certain kind of brain physics (such as an electromagnetic field).
Notice how various specific functions in biology that are mechanistically accomplished in very different ways are still understood in a biological and physics based capacity to do the same sorts of thing. Maybe human blood needs to be very different from frog blood, even if the essential purpose of each is to distribute oxygen and whatnot. That's what we'd expect given their different ways of life.
If functionalists truly endorse the position of consciousness through any manner of substrate that properly converts one generic set of information to another, then I’d expect them to be the people who educate us in general about the diverse sorts of things that their position suggests. Instead it seems to be people in opposition (except for Schwitzgebel) who tend to illustrate what functionalism effectively means.
I think the issue here is when you say “phenomenal experience”, you mean the ghost. And you’re looking for an explanation for that ghost. You don’t see a ghost in the Chinese room, China brain, or US, therefore those scenarios seem ridiculous to you. I think you like EM field theories because they seem to provide a naturalistic version of that ghost.
My answer is that I don’t think the ghost exists, not even a naturalistic version. Functionalism doesn’t need to account for it. It only needs to account for why we think there is a ghost, a much more manageable burden. Along those lines, there’s no reason in principle a Chinese room, China brain, or the US as a whole can’t have a model of itself that implies a ghost that isn’t there.
In terms of machine information, consider that you probably typed your reply on your phone, I looked at it on my iPad this morning, then eventually pulled it up again on my laptop where I’m typing this response. You use multi-realizable information all the time. I haven’t watched a DVD in years, yet continue to watch movies and shows via streaming. I rarely buy physical books anymore, yet continue reading books. Multi-realizable content.
The only reason to resist the notion is thinking that the ghost can’t possibly be instantiated by alternate means. But there is no ghost, not even a naturalistic one. There are just the physical systems and what they do. The rest is illusion. Prove otherwise and you’ll have falsified functionalism.
Apparently you and I are quite resolute in our convictions here. Let’s say that you are correct about consciousness existing as code that’s converted into other code without animating any dedicated phenomenal experience physics as substrate. Furthermore, let’s say that this was demonstrated to me in countless ways. Maybe Apple gets me a phone where Siri seems quite sentient. Maybe a version of me gets uploaded to my phone to live in a wonderful virtual world. In any case let’s say that you were right about this while I was wrong. Would I then recant my past thinking?
If I decided this consciousness must be supernatural then I don’t think that I would recant, though otherwise yes. So I guess the big question here is, would I consider such consciousness to exist naturally or supernaturally?
If a clearly magical god came here to validate the truth of Christianity, then I’m pretty sure that I’d ditch my metaphysics of naturalism in that regard. If certain code converted to other code creates sentient existence however, then I’d probably presume that this was a natural function that’s well beyond my own potential to grasp. So yes, I suppose that I would recant being so strongly in error on this matter.
What about you? If McFadden’s theory were validated in countless ways, and so consciousness became scientifically understood to exist under certain parameters of electromagnetic fields, what do you think your reaction would be?
I think I’ve confirmed innumerable times that I follow the evidence (widely reproduced and accepted by the relevant experts). I’m wondering what you think is being accomplished by constantly asking me about it. I’m already thoroughly on record for it.
Actually I didn’t mean to ask if you’d accept the evidence. I’ve presumed that. I wanted to know how you think your perspective would change. Would it not change much? Would it change radically? And why would or wouldn’t this be a big deal?
I’m on record for stating that McFadden would then come to be considered one of the most important scientists the world has ever known. Observe that in one fell swoop dualists, panpsychists, idealists, illusionists, and yes functionalists, would be disinvited from science. It seems to me that this would be the most monumental restructuring event in academia’s history (yes, surpassing each restructuring from the work of Newton, Darwin, Einstein, or quantum mechanics). And indeed, afterwards I’d expect far greater soft science progress in general, that is, should McFadden’s theory become overwhelmingly validated and accepted. But what do you think?
I don’t think McFadden’s CEMI theory would falsify functionalism. It would just introduce a new substrate for the functionality. McFadden himself includes discussions of global workspace dynamics, a functionalist theory, to complete the picture. Some of the more identity based notions of EM fields, like Pockett’s, might be much more disruptive.
If we’re going to fantasize about major upheavals, then people like Goff, Kastrup, or Penrose turning out to be right would be far more revolutionary.
But until there’s evidence, it seems a lot more likely that introspection is just unreliable.
Gosh Mike, earlier in this conversation you seemed quite proud to note that it was indeed possible for a theory such as McFadden’s to falsify functionalism (and even in the weak sense where a falsifiable theory comes in to effectively invalidate an otherwise unfalsifiable theory). Now you seem to be saying that functionalism would survive the general validation of McFadden’s theory? Actually I suppose that you’re right about that, or at least in name. Ideologies tend to take on a morphing life of their own in order to survive new political circumstances.
In any case I wasn’t talking about what clever people under various competing positions might do to hold on to their popularity under the experimentally verified success of McFadden. I was talking about the functionalism definition that you recently provided such that it mandates no specific kind of consciousness substrate. Presumably this would be because this sort of functionalism mandates that consciousness exist by means of certain generic information that’s properly processed into other such information. This position seems diametrically opposed to McFadden’s specific substrate based proposal. And yes dualists should survive, though it seems to me that establishing the brain based substrate of consciousness should help disinvite them from their current influence in science today. I have no worries about the survival of Chalmers himself, though the success of McFadden should certainly be an unfavorable turn of events for him.
You’re entirely right about how the stock of the “global” people could go up markedly given CEMI success. I think it was back in 2001 that McFadden offered them a piece of his action, since his theory can reasonably be interpreted that way. Though he was clearly snubbed, that surely wouldn’t be the case if his CEMI were continually verified.
In any case, are you implying that McFadden’s validation wouldn’t have the significance that I’m suggesting? If so then to understand I may need a bit more of your reasoning.
Granted, I did imply if you proved an electromagnetic ghost it would falsify functionalism, but that’s really only true if the EM field is absolutely the only way to do it, not just the way the brain does it. To whatever extent that’s part of CEMI (you’d know better than me), and that part itself was validated, then it would falsify functionalism. So I guess I should have specified that a functional ghost wouldn’t falsify functionalism.
Anyway, if you want to talk about consciousness or something interesting, I’m here. But I think I’ve had as much pointless discussion of hypothetical scenarios as I care to.
Actually yes, I was presuming that scientists would come to understand that consciousness could only exist through the proper parameters of electromagnetic radiation. That seems consistent with his theory. So they should then figure out elements of those parameters and even be able to detect where and in some sense how it exists, both for humans and for various other monitored conscious beings. Thus the often presumed privacy of consciousness would be lost in at least this sense. Furthermore we might then reproduce such EM fields with one of our machines to thus create some sort of phenomenal experiencer, though this wouldn’t be functional consciousness in the sense that it wouldn’t effectively do anything other than experience what we cause it to. But of course standard functionalists, rather than non-substrate functionalists like yourself, might tend to say that it would thus be functional in the sense that we would cause it to feel what it does, or “a function”. Your position would effectively be falsified while their position could morph on eternally.
Sorry that this sort of speculation has seemed pointless to you. I’ve found it to be quite the opposite. Thanks for engaging!
I must be missing something here. Surely (beware the “surely operator”! :-)), presence/absence of phenomenal consciousness is in principle unverifiable — hence the whole philosophical zombie kerfuffle. That being so, no theory of it can be falsifiable. Any apparent falsification can be countered with an assertion that phenomenal consciousness is actually present/absent (delete as appropriate) in the supposedly falsifying case. Hence Dennett’s intentional stance — for my money, the only rational approach to the matter.
If phenomenality is epiphenomenal* in the philosophical sense of having no causal effects, then definitely its presence or absence is untestable, and we can’t have any insight into which systems have or don’t have it. It seems like a metaphysical add-on we can choose to either believe in or not. (Things are less bleak if it’s only epiphenomenal in the biological spandrel sense.)
I’m with you on the intentional stance. Basically, talking about putative mental content that isn’t part of the causal chain seems unproductive.
* That’s a fun phrase. 🙂
I think the problem is a deeper one and applies to non-epiphenomenal theories of consciousness too. Say you have worked out such a theory to your satisfaction. How is it to be verified? I can see no way of doing so which would not be open to a challenge by somebody who simply disagrees with posits of your theory. In the absence of an agreed definition of phenomenal consciousness, you would surely (!) run smack into the Duhem-Quine Thesis: all observation is theory laden. And defining phenomenal consciousness in a non-circular manner is, as I see it, a hopeless task. Hence a request for such a definition is often met with an equivalent of “it’s obvious, innit?” or even “if you have to ask, you’ll never know”.
I totally agree about the definitional part. A big part of the problem with phenomenal consciousness is getting a concrete definition. Usually the only answer is some variation of Nagel’s “like something” phrase, often followed by an assertion that no further explanation should be needed. Unless we then use Nagel’s subsequent elaboration as our definition, the key parts of which appear to be epiphenomenal, it’s not clear what we’re even talking about.
I’m tempted to say if we could agree on some definition and if there were causal effects, that would allow us to get our foot in the epistemic door. The problem is once we’ve done that, we seem to be talking about something functional.
Mike Arnautov, even though your pessimism does seem quite well founded today, I suspect that there is a way for scientists to effectively progress anyway. While abstract discussions of a problem can sometimes suggest fundamental intractability, once we fill in various specifics, effective assessments might then seem possible. That’s how I think things will go with consciousness.
Firstly, yes, an effective consciousness definition should be very important. But if humans are often conscious when they seem so, as well as various other clearly sentient creatures, then it seems to me that Schwitzgebel’s innocent/wonderful conception (an idea which even Frankish has admitted is true) should suffice. It uses simple positive and negative examples to demonstrate an apparently non-problematic idea. Would it be possible for something non-conscious to “understand” his definition? No, not phenomenally so (even if functionally so in some sense). But this shouldn’t matter if humans often experience their existence phenomenally, since that’s who the definition is meant for. So yes, one would need to be conscious to even possibly grasp this definition, though I see no problematic circle here. http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf
Secondly, to create something that phenomenally experiences its existence, does the human brain just accept certain input information and algorithmically process it into the right output information? That’s an idea that I consider magical. Instead let’s imagine that such output information is able to animate a specific kind of brain mechanism that causally creates what’s phenomenally experienced. So just as the four fundamental forces exist through causal substrates that scientists can piece together roughly though will never fully grasp, let’s imagine that consciousness exists through such a substrate as well, or something that evolution effectively harnessed. As I mentioned to Alex below, just as your computer processes information to animate your computer screen, let’s imagine that your brain processes information that animates certain not yet scientifically grasped brain physics to create phenomenal experiences.
From this position it seems to me that there is only one plausible solution. It should be that the electromagnetic field set up by certain synchronous neural firing is able to get above the general EM noise to itself exist as what you experience from moment to moment. Beyond the radiation associated with neuron firings, what other brain mechanism could harbor the perfect informational fidelity of a component of the EM field directly created by that firing? This field would be a substrate for consciousness. Regardless of how sensible this answer currently seems to me however, how might it actually be tested?
I need to do a full post on this matter some day. There are all sorts of studies which have been done that McFadden has used secondhand to demonstrate consistency with his CEMI theory. The simplest is to just observe that apparently the only reliable neural correlate for consciousness found so far is synchronous firing. This is presumably because it takes the right synchrony to get into the right electromagnetic zone of consciousness.
Consider my own proposal for testing his theory. A person who is about to have brain surgery would be paid to permit scientists to leave a set of (hopefully benign) transmitters in their head. Then once recovered the person would be paid hourly to let the scientists fire charges through those transmitters, each of which would be about the strength of a standard neuron’s firing. Here the person would be instructed to report if anything seems phenomenally strange as various synchronous brain-like firing occurs. The point would not be to make the person feel/think/do anything specific, but rather to potentially tamper with the theorized field that constitutes their current vision, mood, and so on. If scientists could thus demonstrate that certain electromagnetic fields that they create in the head have various reported phenomenal effects, and so with hundreds of subjects could map out parameters of EM radiation that seem to correlate with “vision”, “taste”, “anger”, “itchiness”, and so on, then scientists in general should validate McFadden’s theory. I’m sure that many ordinary people would still disagree with this position, and that theists would tell us that this is merely one aspect of how God creates our souls, but the point is that science should nevertheless be able to advance in this regard. Does this seem reasonable to you?
(I’m also pleased that Mike Smith implied that any such consciousness substrate should or would have been an epiphenomenal spandrel before evolution effectively made it adaptive.)
Just for the record, Frankish’s actual response to Schwitzgebel is on p. 16-17 at this url: https://keithfrankish.github.io/articles/Frankish_Not%20disillusioned_eprint.pdf
Eric, on your spandrel theory, you know my position. Stop trolling.
The position of Frankish seems consistent with what I’ve said — he even seems to endorse it. What I don’t understand is why he doesn’t consider it “substantive”, or why he’d presume that someone like me wouldn’t want to let consciousness exist as “possession of introspectable properties that dispose us to judge that the states possess phenomenal properties in that substantive sense.” It’s not the sort of verbiage that I’d use of course, though I don’t see why I’d inherently object.
If the position of Frankish is that everyone should accept Schwitzgebel’s definition, as he somewhat implied, then fine. That’s my position too. In that case relevant scientists could potentially do so and it seems to me that illusionism could thus be dissolved as an oddly successful movement. He hasn’t said anything of the sort though, has he? The dissolution of this movement as a success, and largely given Schwitzgebel’s definition, shouldn’t work out well for him personally. Or at least that’s my current suspicion of what’s going on here.
I think Frankish’s main point is that Schwitzgebel’s innocent conception is fully compatible with illusionism. So he hasn’t really defined phenomenal consciousness in a manner that’s more robust than simply being an illusion for the more demanding version with all the problematic notions.
I actually reread the paper this week. The other issue I see with his definition is that it seems conceptually fragile. We go through the examples, then he asks us to hold it without making any additional theoretical commitments. The problem is that only works until we start to apply the least amount of analytical pressure to it, until we make any effort to understand how it fits in the overall ontological and causal framework.
Then we find ourselves having to grapple with the fact that phenomenal properties seem to have the problematic attributes Schwitzgebel rejects. So then, which is it? Do we take the appearance as reflecting reality and go the realist route, with all the implications? Or say that the appearance is misleading and go the illusionist route?
As Frankish notes, we could simply redefine a deflated version of “phenomenal” to be compatible with the illusion of the stronger version. That’s basically the move I’ve been making for a while, and one I suspect many scientists who discuss phenomenality make. But it really makes phenomenality an empty concept. Frankish notes that few phenomenal realists will want to go that route, a fact that seems increasingly evident the more phenomenal realists I talk with. At least taking it to be an illusion allows us to acknowledge that it does appear to have the problematic notions, but couple that acknowledgment with an observation of the problems with introspection.
It seems to me Mike S that it would have been epistemically disastrous for Schwitzgebel to have defined consciousness with any robustness at all. This is given the pathetic state of academia in this regard today. He’d have no epistemic basis from which to do so. Fortunately he instead provided a solid foundation from which to potentially build. I’m happy that you’ve reviewed his paper again, though perhaps without quite appreciating the necessity for an entirely innocent/wonderful consciousness conception. Such a thing must necessarily leave everything on the table (including otherworldly souls and all the rest), so that academia might finally have a solid place to begin sorting out this mess. Consider how often he mentioned various suspicions that may seem quite sensible to each of us, but then mandated that they not be written into any consciousness definition itself. Exactly! Otherwise his definition would have become ontologically loaded. Let’s let scientists do what they’re supposed to be doing here, and even let philosophers do what they’ve long neglected to do here.
Observe that with one hand you and Frankish seem to be asking for Schwitzgebel to provide more than an innocent conception of consciousness (under the heading of “robust”), and then with the other seem to castigate all who do! Instead I’d say that we should be promoting Schwitzgebel’s amazing definition in general. Thus scientists might use empirical demonstrations of how things are to potentially reduce his consciousness conception back to more specific notions some day.
On this you might poke back at me however. Don’t I believe that in a natural world consciousness must exist by means of some sort of causal substrate associated with brain function, whether EM fields or something else? Yes, though I’m not going to write such a stipulation into a consciousness definition itself. I’ll let empirical science figure out what’s what on that. It’s another strategy for you to consider. Any problem with always defining consciousness innocently for now, and then letting scientists potentially reduce that definition into further specifics some day? (This reminds me of how science has been approaching ideas like gravity.)
One thing that I’d like to know is Frankish’s position on Searle’s Chinese room. Is he essentially a repeat of Dennett?
Schwitzgebel was responding to a target paper’s challenge to produce a substantive definition of phenomenal consciousness without non-physical implications while also being distinct from illusionism. As Frankish noted, he may have come up with something interesting, but the effort wasn’t substantive. Schwitzgebel of course disagreed. While I’m a fan of Eric Schwitzgebel, I have to agree with Frankish.
To your point about not polluting scientific investigation with all this theorizing, as I noted somewhere else in this thread, all observation is theory laden. It has to be. We need theory to even take a shot at interpreting observations. And it’s not like Schwitzgebel’s example based definition is entirely theory free. His very assertion that all those examples are part of some coherent category not based on functionality is itself a theory. Or, well, it’s more the promise of a theory.
So the goal isn’t to conduct theory free observation. That isn’t possible. We should all come to those observations with our theories, but, crucially, be prepared to modify or discard them if the observations clash. That’s science.
Who’s castigating? If someone comes up with a definition of consciousness, it’s totally fair to scrutinize and criticize it. You’re perfectly welcome to contest the criticisms I’ve directed at Nagel, Block, and Chalmers’s conception of consciousness, as others already are doing. You know my stance: any claim to be the one true definition is meaningless.
I can’t recall Frankish ever addressing the Chinese Room, and a quick search didn’t turn up anything. I’d expect him to be close to Dennett, but he does have his own views so it’s hard to say.
Okay Mike S, going through that target paper does fill in some blanks for me. Schwitzgebel’s innocent conception of consciousness isn’t substantive in the sense that Frankish meant, which is to say both consistent with illusionism and “robust” in the sense of telling us how consciousness works. It’s merely consistent with illusionism but doesn’t go further. Given the primitive state of science in this regard today however, demanding both conditions seems like an impossible challenge. Schwitzgebel did something far better than what Frankish asked for, I think, though I will credit him with an assist. Schwitzgebel gave the world an effective consciousness definition that everyone should be able to accept. This should be a huge piece of the puzzle!
You seem to be presuming that it’s theory laden, though from what I can tell only in the weak sense that you have evidence of your own phenomenal experience. Hopefully you agree that Descartes settled that issue long ago. You also seem to presume that his definition challenges functionalism. How could it? Functionalism is either true by definition, or in the case of your substrateless version, it’s unfalsifiable in itself. While Schwitzgebel’s definition does not and cannot challenge functionalism, I’m able to do so given these observations.
In any case the question is this. Will you support Schwitzgebel’s innocent consciousness definition in the name of progress here, or will you instead tell me that you cannot do so because it’s problematic, as well as illustrate these supposed problems?
What would it mean to support the innocent phenomenal consciousness definition? What does it provide that we didn’t have before? How can we use it? What work does it do? Is the goal just to be able to use the word “phenomenal” without the problematic connotations? If so, what work does that word do without those connotations?
That’s a reasonable list of questions Mike S. I’ll go through them one at a time.
“What would it mean to support the innocent phenomenal consciousness definition?”
This would be a definition for the term that we should all be able to agree upon. Mike Arnautov for one has observed that we lack such a thing today and so studying “consciousness” should be problematic in this regard. I think he’s right. Fortunately, Schwitzgebel to the rescue.
“What does it provide that we didn’t have before?”
This would provide a generally accepted idea that we should all agree upon and thus not talk past each other in this regard quite as much.
“How can we use it?”
As a founding definition regarding the consciousness idea.
“What work does it do?”
This would give us a specific idea to study rather than a hierarchy of ideas that people may or may not mean in a given situation. Of course different ideas could still be considered, though hopefully soon using terms other than a generally accepted “consciousness” idea.
“Is the goal just to be able to use the word “phenomenal” without the problematic connotations?”
No, the goal would be for us to have an accepted understanding of what scientists in the English language mean when they say things like “phenomenal”, “sentience”, “qualia”, “subjective”, “consciousness”, and so on. It’s an extreme refinement of the Nagel heuristic that you seem to consider quite troubling.
“This would give us a specific idea to study rather than a hierarchy of ideas that people may or may not mean in a given situation.”
So, what specifically would that idea be? You talk a lot about having a definition we can all agree on. But what specifically is that definition? Do we all just have to link to the paper anytime someone asks what we’re talking about? That doesn’t seem like progress. And even if we can establish what specifically we’re talking about, why is that definition preferable to any other commonly offered?
At least Nagel and Block elaborated on what they meant. And it does match many of our intuitions. I don’t have a problem with their version of phenomenal consciousness as a definition of the phrase. They bite the necessary bullets to put it forward. I just don’t think it’s reality, and laid out why in the “like something” post. The problem is people use their terms without buying into the package, and don’t define what they mean instead. The ambiguity hides confusion and disagreement. I can’t see how making things even more ambiguous is helping.
Schwitzgebel’s definition is the specific rather than ambiguous one that you hopefully gained when you read his paper. Yes people might achieve such an understanding from the paper directly, or perhaps in the same manner that they learn the meaning for terms that are commonly understood in general, which is to say through the example based usage of those who grasp the term.
The reason I consider this definition preferable to all others that I know of, is because the associated idea is both relevant and should be accepted to exist by all sensible people who grasp it. In any case this is what I always have and always will mean when I use the term.
Would it really be bad if scientists in general had a specific illusionist sanctioned and generally accepted conception of “consciousness” by means of Schwitzgebel’s positive and negative examples? Couldn’t it still be demonstrated that theorists such as McFadden will end up as failures, and therefore by default scientists would continue to presume that such consciousness should tend to arise when the right generic information is properly processed into other such information, whether brain based or conventional computer? And if a theory that’s falsifiable in itself becomes experimentally verified, thus lending this definition certain specific parameters, such as being EM field based, then it seems to me that you’d accept those understandings just as you accept other science based understandings.
When people use the “consciousness” term, wouldn’t it be helpful if there were a common understanding of what was meant that we all consider to exist? Frankish grasps the meaning of and accepts the existence of this consciousness. Must you nevertheless object to the general adoption of this specific rather than ambiguous understanding?
“The problem is people use their terms without buying into the package, and don’t define what they mean instead.”
Right, and so the general adoption of Schwitzgebel’s definition would seem to fix this particular problem.
On the innocent definition, you seem to find it gives you everything you need, but I find it too innocent to be useful. I think we’re at the point of just having to agree to disagree.
Would it be beneficial if we all agreed on our definitions? Definitely. But that ship sailed long ago. The reality is we have to communicate with everyone in the language environment we’re in. I don’t think we have any real choice but to clarify what we’re talking about as well as we can in that environment (at least if we really want to be clear). Language evolves. Attempts to control it don’t often succeed, and even when they do, they usually look like the IAU’s definition of “planet”.
Too innocent to be useful Mike S? It seems to me that by addressing an idea that we can all agree exists, whether physicalist, theist, dualist, panpsychist, idealist, functionalist, illusionist, and so on, this would be a very useful step forward. This would provide a solid position from which a very troubled field might productively build. Conversely, mandating that various “robust” components be added to such a definition before science is able to verify the existence of such components does not seem like a useful step to me. This seems like a recipe for failure.
You can assert that the general adoption of Schwitzgebel’s innocent conception of consciousness would not be useful if you like, though this does leave me to wonder about your motivations. Furthermore you should wonder about what influences you here as well, since some of this should be happening behind the scenes. I refer to this as “quasi conscious” rather than “unconscious”, since I consider the unconscious term to be used for far too many related ideas that we should try not to conflate.
In any case I consider the IAU “planet” definition to be an effective association. Here scientists were attempting to standardize certain terms based upon sensible parameters so that they might be able to do their jobs more effectively. The public and media then came on to blast them, given that folk conceptions of Pluto as a planet were put in jeopardy. Furthermore, observe that astronomy is many orders more advanced than phenomenal exploration. Frankish could be a savior by permitting this sorry field to finally adopt an effective consciousness definition, though he seems to decline in favor of the interests of his illusionism movement.
We all have to be on guard against nonconscious motivations. You do too, Eric. Have you considered your own motivations for this reverence for a list of examples, and your manifest hostility toward anyone who doesn’t agree?
I’ve noted before that I have no emotion about how we categorize Pluto. But the fact is that the definition remains controversial, with a lot of astronomers and planetary geologists still refusing to use it. Their reasons aren’t the childish reactions you’re thinking of.
OGH Mike has already given a reply, with which I fully concur. In short, I would refer you to Quine’s strong version of the Duhem-Quine thesis (most explicitly in “Two Dogmas of Empiricism”): all observation is inescapably theory laden and a theory-free view is simply not feasible. We prove it by this very discussion, deriving opposite convictions from the very same facts (or lack thereof).
To be honest, I think that all these arguments are actually irrelevant. I suspect the problem will be dissolved by IT engineers (software and hardware) eventually creating systems which will in practice be generally thought conscious, whatever philosophical sceptics may object. Did you watch commentary on AlphaGo’s games against human experts? Commentators kept slipping into intentional language (“it thinks that…”, “it plans to…”, “it intends to…”, “it wants…”) and then correcting themselves. The intentional stance in action!
You also say “Secondly, to create something that phenomenally experiences its existence, does the human brain just accept certain input information and algorithmically process it into the right output information? That’s an idea that I consider magical.” Well, I don’t consider it magical. So in the absence of other evidence, our respective considerations cancel each other. 🙂
I would argue in return that in so far as we do have *any* conscious experiences, in order to be of any use they must have distinct characteristics, a.k.a. “phenomenal properties”. The difficulty arises only when one makes the false move of thinking of these characteristics as belonging to *something*, some representations created by the brain, which present themselves to oneself. The problem is that this view separates the experience from the experiencer, thus smuggling inherent and unmotivated dualism into the picture. We do not experience phenomenal properties. Phenomenal properties *are* our experience. (Richard Rorty has a good go at demolishing this philosophical mistake in his “Philosophy and the Mirror of Nature”.)
To my mind, this whole misconception masks a much more interesting problem: is consciousness actually necessary for Brentano-style intentionality, or is its importance for us as we are (and I do not consider it in any way epiphenomenal!) merely a contingent quirk of Terrestrial evolution?
Sounds good Mike A, and I am pleased that you’re around here supporting Mike S. I often feel guilty about countering some of his most cherished beliefs. He’s a very talented person who provides us with a wonderful service!
On your thoughts about the futility of argumentation, I agree. Furthermore I’ll add this — we’re all self-interested products of our circumstances. Our investments make it very difficult for us to admit when we’re wrong. Some of us will be invested in ideas that are more accurate, others less. Demonstrations of this inequality are where egos can get bruised. So it goes. Thus my advice to all who wish to effectively participate in discussions like these is to emulate the strategies of Mahatma Gandhi. For anyone who hasn’t seen the film in a while, watch it again from the context of a blogger. It’s just plain good strategy.
In any case I seem to be positioning each of you in a tough spot. You can continue to back the in-itself-unfalsifiable notion of consciousness as generic information converted into other such information, though this leaves you open to demonstrations of various ridiculous implications. Furthermore, there is not only the potential for McFadden’s cemi to be verified experimentally more and more, but also for demonstrations that such a consciousness-substrate approach makes very good general sense. It seems to me that one good way to counter all this would be to help make sure that Schwitzgebel’s innocent conception of consciousness never becomes widely adopted.
Glad to be of service! 🙂
As for being in a tight spot, I think the boot is on the other foot. When I say that (a) the presence/absence of phenomenal consciousness is not objectively verifiable and (b) therefore (c) *any* theory of phenomenal consciousness (epiphenomenal or not) is not falsifiable — you obviously disagree. Could you enlighten me as to which bit (a, b or c) you disagree with?
As for your favourite McFadden theory, that’s just a subset of a notion that has been floating around AI circles for ages. There are many more things happening in the brain than neuron firing. There is the whole complex endocrine global signalling as well as non-targeted (epiphenomenal?) electric fields. And furthermore, there are strong indications that the old McCulloch-Pitts neuron model is grossly over-simplified anyway. To what extent all these “extras” are necessary ingredients of the phenomenon of consciousness is unclear. They can also be simulated, of course, but that would be much more computationally demanding. Hence so far the strategy is to see how far pure neural networks can be pushed — which really is remarkably far on current showing.
But my problem with McFadden’s theorising is that he is trying to place “free will” in brain-generated electric fields. Do you actually believe in “free will”? I used to, until I realised (long ago) that I could not answer a question posed by Marvin Minsky: how do you tell the difference between free will and the ability to make choices?
In any case, electric fields are governed by Maxwell’s equations and on the deeper level are accurately described as quantum-probabilistic “micro-banging” (to use Ladyman’s term) of virtual photons. Where’s free will in that?
On your presumption that I consider phenomenal experience objectively verifiable, no, I’m not going to support that. As Alex has been arguing, science itself should be considered a subjective activity. Consciousness seems to exist for me as the medium through which I experience my existence. I presume it’s the same for you. From here we may be considered inherently subjective entities, even if there is some kind of objective world “out there” that creates us. That doesn’t mean we’re unable to do effective subjective science, however. This often seems to be done pretty well in other fields, though far less so in the mental and behavioral varieties, and certainly regarding “consciousness”. This is one of many things that need fixing.
On the complex and still often mysterious function of the brain, definitely. The point of science is to reduce such dynamics back to graspable ideas that seem effective. This brings me back to my initial question to you above that was not addressed. If scientists were able to wire transmitters into the brains of people such that the associated EM field would tamper with their otherwise normal phenomenal experiences for report, and even learn EM parameters from which to directly create standard experiences for a person to have such as frustration, itchiness, red, and whatnot, would this largely validate McFadden’s theory to relevant scientists in general?
It’s good to hear that you’ve done some reading on McFadden’s cemi, and yes, I cringed when I noticed him using the term “free will” for something that I don’t think he should have. For that I’d instead have used the term “purpose”. And in truth he’s not talking about libertarian free will, but rather a compatibilist form that many of us (like me) accept. He’s actually a full determinist. I find this heartening since he mainly works as the founder of the new field of quantum biology. Then there’s his wonderful new book on Occam’s razor. I can overlook small missteps to appreciate a true big-picture type of scientist.
Please re-read my post. There was no such presumption. There was a question which you haven’t answered. Needless to say, if you do not wish to answer it, that’s up to you.
If McFadden says “free will” but does not actually mean free will, then he is being more philosophically naive than one would expect from a serious researcher looking into these matters. And I remain completely unclear as to why and how the brain’s electric field is supposed to constitute some virtual machine or whatever. It can, of course, be used as (perhaps part of) a global signalling system on the lines of GWS theories, but that is not what you appear to have in mind.
Okay Mike A, I figured that you were just saying we can’t demonstrate the existence of phenomenal consciousness objectively, which I agree with since I consider us inherently subjective. So if you’re saying that the presence/absence of phenomenal experience is not even subjectively verifiable through the methods by which scientists study our world in general, then yes I do disagree with you there. Here I guess I dispute both (a) and (c), (though I’m not entirely sure what to make of (b) since it seems to simply reference a “therefore” between the other two). Fortunately I’ve presented a way to test McFadden’s theory that currently seems conceivable to me. I’ll go through this proposed test and then perhaps you’ll be able to explain why his theory, and perhaps all phenomenal experience theories, are inherently unfalsifiable.
The theory begins by proposing that the brain functions as an entirely non-conscious computer, not entirely unlike the non-conscious computers that we build. Each accepts input information to algorithmically process for potential output function. Under cemi however there should be certain parameters of synchronous neuron firing that create an electromagnetic field which itself exists as a phenomenal experiencer of existence. Therefore here consciousness should exist as a physics-based output of the brain, somewhat like screen images exist as computer monitor output. So your hearing for example would exist under certain parameters of an EM field that’s created by associated synchronous neuron firing. Furthermore theoretically your thought itself should exist as some component of such a field as well. And theoretically when you phenomenally will your hand to open for example, the EM field’s ephaptic coupling should induce the right neurons to fire that cause your hand to essentially do what you’ve decided. I consider this a falsifiable theory because it holds that an electromagnetic field exists as the substrate for phenomenal existence, which is to say a causal dynamic that could thus be experimented upon to check its validity.
The test I propose is to put a massive chain of neuron-strength transmitters in a volunteer’s head and then see whether synchronous transmitter firing typical of brain function affects the person’s otherwise standard phenomenal experiences for report. This would presumably be because waves of a given variety tend to affect other waves of that variety. So with enough testing I’d expect scientists to be able to set up exogenous EM fields in the heads of subjects that alter their otherwise standard phenomenal dynamics for oral report. Here I think scientists should learn the parameters of such fields which constitute vision and so on, given the types of radiation that would tend to do such tampering.
Let’s say that scientists were to set up apparently appropriate EM field disturbances in the brains of test subjects for many years quite extensively, and yet never verifiably alter anyone’s phenomenal experience for report. From here your challenge is to explain why it would not be logical to thus conclude that McFadden’s theory is wrong.
Has McFadden been naïve about how his use of the term “free will” would be interpreted as libertarian rather than compatibilist? Yes, clearly so!
The (b) was inserted because purely in terms of formal logic one can agree with (a) but disagree that (c) follows from it, with no prejudice to agreeing or otherwise with (c).
I actually meant objective falsification. A subjective one is of no use unless underpinned by the objective kind — that’s the whole point of philosophical zombies.
But never mind… Let’s assume I am a firm MF believer, faced with your putative falsification. (Not gonna happen for too many reasons, but we are talking about falsification in principle, so OK…) Am I stumped? No way! Quite clearly, brains have a defensive mechanism to guard against unwanted interference by external electric fields — e.g. to ensure that when two people quite literally put their heads together, no conscious disturbances result. Just how this mechanism works… perhaps by switching conscious experiences to different frequencies or whatever, is a very interesting question to be researched at some length!
As a separate point, I repeat: I can see no advantages flowing from off-loading all of phenomenal consciousness into electric field, which is in any case completely determined by the brain’s activity. What problem is solved thereby?
I think I see what you mean about the (b) in your formal logic Mike A. In any case I’m in general disagreement with that logic. Consider this logic.
A magical situation: If something occurs in a given world without worldly cause for it to occur, then no worldly reason can be true for it to occur. Furthermore here a true reason for it to occur would inherently be unfalsifiable in a worldly sense.
A non-magical situation: If something occurs in a given world that does have worldly cause for it to occur, then there must be true worldly reason for it to occur. Furthermore here a true reason for it to occur would inherently be falsifiable in a worldly sense.
This formal logic seems to either put you in the magical and unfalsifiable consciousness camp (with Chalmers for example), or in the non-magical and falsifiable consciousness camp (with McFadden for example). If you disagree then I’d like an explanation.
I suspect that the reason you’ve been arguing that consciousness is unfalsifiable, is largely because you’ve never heard of such a theory that actually was falsifiable. And I must admit, I hadn’t either… that is until McFadden’s.
The reason the experiment I propose demands that we put transmitters inside the head to even potentially affect EM field consciousness is that neurons produce extremely low energy EM fields, and the skull serves as an effective Faraday cage for such fields. Certain synchrony should amplify the right sort enough not only to get into a consciousness zone, but also to affect the neurons which move our muscles in accordance with what we decide.
“As a separate point, I repeat: I can see no advantages flowing from off-loading all of phenomenal consciousness into electric field, which is in any case completely determined by the brain’s activity. What problem is solved thereby?”
Can you think of anything which is known to exist, though does so without any substrate from which to exist? The reason that I don’t think anyone will ever validly do so, is because substrate equates with existence in a natural world. Here a void in substrate mandates a void in existence. So if consciousness both exists and is worldly, then there must be a worldly consciousness substrate of some kind. Furthermore if brains create consciousness then observe that its EM field would seem to be the only medium appropriate. Thus naturalists in general ought to presume that consciousness exists by means of some form of neuron produced EM field. Furthermore we ought to be trying to verify whether or not this is true experimentally. The main reason that very few of us have this concern however, seems to be the dominant belief that consciousness exists without substrate, and even though this belief conflicts with naturalism as I’ve demonstrated.
We seem to have a different idea as to how a theory is falsified. In my book a theory is falsified if it cannot be augmented in a way which preserves its core assumptions in order to square it with experimental evidence. I gave you an explicit example of how an MF supporter could deal with your hypothetical scenario without abandoning the theory’s central claim. There are others, and I kind-of alluded to one by mentioning p-zombies. The experimental results you posit could also be explained away by asserting that such external fields disrupt the brain’s electric field, causing phenomenal consciousness to shut down — subjects become p-zombies for the duration of the experiment. Since phenomenal consciousness cannot be objectively verified, its presence/absence can be arbitrarily asserted or denied to suit inconvenient evidence.
And I most certainly disagree with your very partial syllogism. The non-magical explanation is that our consciousness (including experience of phenomenal qualities) is a natural part of the biochemical activity of the brain — quite possibly including the resulting electromagnetic radiation for some non-local signalling.
Hence I am quite baffled by your
“The main reason that very few of us have this concern however, seems to be the dominant belief that consciousness exists without substrate, and even though this belief conflicts with naturalism as I’ve demonstrated.”
Without substrate? What’s wrong with brain biochemistry as the substrate?
I think I’m understanding your position better now. One small piece is that I seem to grasp what you mean by “an MF supporter”. This would of course be “a McFadden supporter”. Damn my slow brain! Hopefully my meaning will also become reasonably clear to you.
You seem to be saying for example that an MF supporter would be able to counter conflicting evidence by claiming that the brain should have defenses which would prevent a science-produced EM field in the brain from affecting a brain-produced EM field that exists as consciousness itself. That explanation however would create a substantial Occam-defying next level of complexity to also explain. No one has less respect for these after-the-fact supposed explanations (such as the notorious “epicycle”) than McFadden. If certain synchronous neuron firing in the brain creates a complex EM field with parameters from which to exist as a phenomenal experiencer, then proper synchronous transmitter firing in a person’s brain ought to affect that consciousness field for oral report. If such oral report were widely demonstrated then continued evidence should be virtually incontestable. Thus science should finally be able to move on. Initial failure however should be far less decisive. For a while it would be reasonable to propose that these exogenous fields hadn’t attained the right parameters yet. Longer and longer continued failure however should cause scientists to doubt McFadden’s theory more and more.
Contrast this with my conception of what you believe. (And certainly correct me if I’m wrong!) The position is essentially that consciousness exists in the form of functioning brain software alone. Thus any computer that properly converts the right information into other information, will thus be conscious. My thumb pain thought experiment by means of inscribed paper provides a simple hypothetical example of something that would thus need to phenomenally experience its existence. This general proposal should be unfalsifiable in itself because one could legitimately explain failure as nothing more than the wrong information conversion. While McFadden proposes a physics based closed idea which might thus be explored scientifically, the hypothetical information to information proposal harbors no such constraints to potentially test.
On your p-zombie way of challenging the falsifiability of McFadden’s theory, we naturalists necessarily presume that such beings cannot exist in a causal world. Thus any scientist who would propose that these exogenous EM fields in a subject’s brain would transform a phenomenal experiencer into a p-zombie, should also be injecting an unnatural idea into the situation. Of course not all scientists are naturalists, though I do consider this problematic. Given their diverging metaphysical outlooks it seems to me that “natural +” forms of science should not be mixed with “natural” science.
In my last comment I erred when I said “without substrate” regarding the position of consciousness as software function alone. As Mike S has also mentioned to me, there will always be substrate of some kind. The question is whether or not potentially causal substrate exists. The words that we read can exist by means of all sorts of substrates. The causal condition for their existence seems to be whether or not the proper light information might exist as words to potentially read. Or perhaps in the case of Braille the proper form of touchable bumps must exist, regardless of the substrate that makes up those bumps. Similarly our video media may be provided to us in a host of formats, though the causal substrate will concern any machine which is able to create the associated light- and sound-based transmissions. So in all cases it’s not just the existence of substrate at all, such as brains themselves, but appropriate causal substrate that’s thus able to create a phenomenal experiencer.
As I see it, “What’s wrong with brain biochemistry as the substrate”, is that here we may be proposing a computer to create an experiencer, though without addressing a physics based substrate from which to do so causally. An EM field might constitute such a substrate, though the simple existence of a brain that processes information should not. As in the case of words or video, in a natural world we always need causal substrates for existence. Could you at least remain agnostic about which phenomenal experience causal substrate the brain’s processed information animates in order to create a phenomenal experiencer? Or must you believe that phenomenal experience inherently exists when certain information is correctly processed into other information, though without the need for there to be a causal substrate of some kind for that information to animate, whether EM field or something else?
That’s not how the Razor works. Its correct use is to reject unnecessary hypotheses. But if an experiment disagrees with a theory and there are no better alternatives, then an elaboration of the theory becomes necessary. Think of the discovery of neutrinos. An embarrassing violation of energy/mass conservation did not falsify Einsteinian dynamics — the neutrino was posited as the answer instead, without anybody complaining about the Razor. There are other examples from the history of science, dark matter being the most recent.
I am unclear why you think that a naturalist must reject the possibility of philosophical zombies. It is only the anti-physicalist argument *from* p-zombies that has to be rejected as logically flawed. For example, if consciousness is indeed an epiphenomenon (which I don’t think it is, but I cannot prove that it is not), then it may well be possible to disrupt it without disrupting behaviour — just as the epiphenomenal noise of an engine can be suppressed by “anti-noise” (as in that classic SF story of Clarke’s). A p-zombie would result.
Your “inscribed paper” argument is just a version of Searle’s Chinese Room argument and you say “I obviously object” to the suggestion that consciousness might be generated by such mere computation. Doesn’t look obvious to me! But let’s agree, for the argument’s sake… What makes you think that electrical fields generated by the brain are any less computable?
As for being open-minded, I try very hard to separate what I think (or like to think) might be the case, from what I know to be the case. So yes, MF’s theory might be right, but it doesn’t seem to be well motivated.
I’m quite aware that the Razor permits greater complexity when there are no better alternatives. It could be however that I understand McFadden’s theory better than you do right now, and that this understanding demonstrates to me how ridiculous it would be for a brain produced EM field that exists as consciousness, to not be affected by a technological EM field that’s produced in the right place with the right parameters. Given extremely well established physics, there simply should be no causal alternative for such fields to remain unaffected. If neurophysicists (or whomever) were satisfied that an extremely wide assortment of low energy brain produced EM fields should be affected by fields technologically produced inside the brains of extensively tested volunteers, though without any verified oral report, then McFadden’s cemi should essentially be removed from consideration.
I should add that I have no idea if such an experiment is even remotely feasible in a practical capacity. McFadden himself has never proposed such testing. I emailed him my proposal in April of last year. Perhaps he doesn’t consider such testing remotely feasible since he didn’t respond. Or perhaps he didn’t respond because this would have helped give a random nobody cheap legal rights over a hard earned area of his domain? In any case we’re clearly discussing the conceptual testing of a falsifiable theory, as in neutrinos, dark matter, and all the rest. Conversely consciousness by means of generic information processing alone should be unfalsifiable.
I do not so much reject the possibility of p-zombies from the premise of naturalism (as I do for gods and such), but rather consider p-zombies utterly ridiculous from the premise that we live in a natural world. An Occam consistent story would be that consciousness evolved because it helps an organism function in ways that non-conscious organisms cannot. Thus as I’m thinking about various potential responses to you right now, it makes sense that the decisions I personally make consciously go on to help causally animate associated typing. You believe this as well of course. But now consider a far less causally simple account — consciousness instead came to exist in us without function, and not even as a spandrel of something else that does have function. If causal, though organisms like us are able to function essentially the same without it, then why would it continue to be passed on? This makes no sense to me. Thus I don’t believe that any modern naturalist should be open to the idea that p-zombies might exist. (In any case if my proposed cemi test were to fail, imagine how silly people would consider the excuse that this was probably because exogenous EM fields also turn subjects into p-zombies!)
Regarding the computability of EM fields, this might be helpful since I do not dispute it. In fact I consider consciousness to exist as one variety of computer in itself. The computers that we build are powered by means of electricity (generally). Brains are powered by complex electrochemical dynamics. So what is the power source which causes a brain created phenomenal computer to function as it does? A desire to feel good I think. This idea is sometimes referred to as “utility”, and it founds the reasonably hard behavioral science of economics.
Yes I did model my thumb pain thought experiment after Searle’s Chinese room. When approached step by step I consider his version relatively long and clumsy. Even then it’s based upon the condition that a Turing machine could be built that can speak a natural language just as well as you or me. He and I doubt that such a machine is possible. So I used my Occam Razor to build something more to the point. Could certain inscribed paper that’s properly processed into other inscribed paper, thus create something that feels what you do when your thumb gets whacked? HA!
The essential point is to illustrate that in a natural world, the computations that all computers do should exclusively be implemented by means of causally appropriate output mechanisms. Thus the brain should be doing more than simply processing information in order to create consciousness. It should also be animating some sort of consciousness producing mechanism, and analogous to how a conventional computer animates the function of a computer screen. This may be a hard pill to swallow for anyone who has become heavily invested in notions of consciousness uploads or existing phenomenally in a body-less virtual world. But why not continue having this sort of sci-fi fun by imagining a copy of brain information that’s sent to a machine that’s armed with a physics based consciousness output mechanism?
My apologies to you as well for the long delay in responding.
1. I don’t think p-zombies can be dismissed so easily. While I agree that the anti-physicalist argument based on zombies is nonsense (it assumes what it claims to prove), I am unaware of any reason to dismiss the possibility of p-zombies as such — sentient (intelligent, intentional…) beings without phenomenal consciousness. In fact, I gave you one possible scenario in which they might exist. Yes, it would mean phenomenal consciousness being an epiphenomenon, which I think is unlikely. However, I do not see a good argument showing that it definitely cannot be one. It is important to separate what we believe from what we can assert as a fact.
2. On reflection, I am surprised that you should try invoking the Razor against my suggestion that in MF’s theory brains might need to have a defence against externally imposed electrical fields. Perhaps you do not appreciate how easily any electrical field structure (particularly one of the unprecedented dynamic complexity you envisage) could be disrupted by external electromagnetic influences. A robust defence mechanism of some sort would be a necessary ingredient of any theory of the sort you are talking about.
3. If you are happy with the notion that electromagnetic fields are in principle computable, then I fail to see why your “paper and pencil” argument does (or does not) work against any such “electrical field computer” just as much as it does (or does not) work against brain’s biochemistry. In short, you are still not telling me why electrical fields should be able to do something that biochemistry in principle cannot do.
No worries about any delayed responses to me Mike A. I am happy that you’ve continued however since I have been enjoying our discussion. I sometimes start but then halt a response until I find the right frame of mind. That was certainly the case this time! Your questions and concerns involve things that my last comment does address at least somewhat. Hopefully however some elaboration will help clarify my position.
On your first point about the possibility that p-zombies can or do exist, I grant that if consciousness is epiphenomenal in general then McFadden’s theory would be unfalsifiable in that specific regard. Actually, if our bodies can effectively function the same without a phenomenal instructor experiencing its existence to thus decide various things, then this should negate the falsifiability of any otherwise falsifiable consciousness proposal. I suppose this is why you mentioned earlier that consciousness proposals are inherently unfalsifiable. Of course anyone who also believes in evolution would have to wonder why such an irrelevant trait would be passed on when it’s not even a spandrel of something that does matter (as in the case of my functionally irrelevant shadow versus my functional though light-blocking body).
In the end however you do seem to consider consciousness to have evolved because it helps various organisms survive better. Though you’ve merely said that you consider epiphenomenal consciousness “unlikely”, while I’ve said that I consider it “ridiculous”, I’ll save further argument on the matter for those who truly fail to grasp how unlikely it would be (to say the least I think!) for phenomenal experience to exist in us without function. Thus I’m restricting my argument to you and others who at minimum hold your position on the matter.
(I’m not entirely sure if I should mention or ignore your implication that something could lack phenomenal consciousness and yet also be “sentient”. I personally define these terms equivalently. If you do not then I suppose we may need to reconcile these terms.)
On your second concern, about the distortability of consciousness as a neuron-produced electromagnetic field with the right parameters, I actually consider the brain to have an inherently robust defense against such distortion. According to McFadden, the high conductivity of the cerebrospinal fluid, located mainly around the outside of the brain, creates an effective Faraday cage against exogenous field tampering. Apparently even the EM fields created by the alternating current of a standard high-voltage power line produce only 40 microvolt-per-meter (µV/m) fields inside the head, whereas typical brain fields are 1,000 to 1,000,000 times that magnitude. I mainly took this information from McFadden’s “Prediction 6” on page 13 of the following paper: https://philpapers.org/archive/MCFSFA.pdf
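As a quick sanity check on those figures, the cited ratio implies endogenous brain fields somewhere between roughly 0.04 and 40 volts per meter. The arithmetic below just multiplies out the numbers quoted above; the 40 µV/m exogenous figure and the 1,000 to 1,000,000 ratio are taken directly from McFadden’s paper, and nothing else is assumed:

```python
# Back-of-the-envelope check of the field magnitudes cited above.
# The 40 µV/m exogenous figure and the 1,000-to-1,000,000 ratio come from
# McFadden's "Prediction 6"; no other data is assumed.

external_field = 40e-6  # V/m inside the head, from a high-voltage power line

ratio_low, ratio_high = 1_000, 1_000_000
brain_field_low = external_field * ratio_low    # ≈ 0.04 V/m
brain_field_high = external_field * ratio_high  # ≈ 40 V/m

print(f"Exogenous field inside head: {external_field:g} V/m")
print(f"Implied endogenous range:    {brain_field_low:g} to {brain_field_high:g} V/m")
```

If those figures are right, the endogenous fields exceed ordinary environmental leakage by three to six orders of magnitude, which is the substance of the Faraday cage claim.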
Here I feel the need to complain about McFadden’s work. Eight ways of testing his theory are presented in this 2002 paper, many of which have reasonable experimental verification. Then in his far more recent 2020 paper, I think the number was increased to 13 predictions. But never has he proposed anything with anywhere near the decisiveness of the test that I propose (namely, to apply technological EM fields of the right parameters to the right part of the brain of a fully conscious person, to see if this distorts that person’s consciousness for oral report). If such a test were successful, it should shake human understanding to its core. I suspect this would alter the course of science more radically than any discovery yet. But why must I argue the case here exclusively, rather than the highly qualified and distinguished McFadden himself? If I’m deluded about the possibility of such testing, then where is my error? Or otherwise, how could McFadden be this obtuse?
In any case, if neurophysicists (or whoever) could say that no one should believe consciousness exists by means of brain-produced EM fields, on the grounds that such consciousness ought to be altered by various exogenous EM fields though it obviously isn’t, then surely some of these scientists would have said so. Apparently the high conductivity of the cerebral fluid protects these fields from such tampering, and so they’re unable to make that argument.
On your third observation, regarding how I could hold my position and yet also believe in the computability of EM fields, this could be important since my answer may help you differentiate my argument from more standard arguments. The central theme is that all computers accept input information, process it algorithmically into other information, and this new information then goes on to animate the function of associated output mechanisms. The brain operates the heart this way, for example, as does my computer its screen. Without associated output mechanisms, these computers obviously do not pump blood or create video images. Can you think of anything that computers do without operating associated output mechanisms?
So what I’m proposing is that the right EM field effectively serves as the brain’s consciousness output mechanism. Without this there should be no consciousness, regardless of the computations that the brain does. Similarly, no computer is able to animate a heart or computer screen that it isn’t causally set up to animate.
Conversely, the prevailing implicit-to-explicit belief today seems to be that consciousness can exist by means of information processing alone, with no output mechanism required. Thus, unlike any other known computer function, all the brain should need to do to create “thumb pain”, for example, is accept certain information and convert it into the right second set of information. Here “thumb pain” should effectively exist for something to experience if certain encoded sheets of paper were processed into the right second set of encoded sheets of paper. But as I see it, this skips the crucial final step in the process: a thumb pain animation mechanism. So for actual thumb pain to exist, the second set of encoded sheets of paper should need to be fed into a machine that uses that information to produce the right EM fields (or whatever) that exist as the experiencer of such pain itself. Here the standard position faces an unfalsifiable “programming problem of consciousness”, whereas from my position there is a physics-based, and so generally falsifiable, problem to potentially solve.
On the potential for EM fields to be simulated, yes I see no reason why it shouldn’t be possible to simulate them. Here you may ask, if the brain does create consciousness by means of certain EM fields, then if one of our computers could simulate such fields well enough to effectively become those fields, couldn’t one of our computers thus create consciousness by means of such a simulation alone? Yes I’ll agree. If the simulation of a given EM field actually becomes that EM field in a physics based capacity, then sure. I won’t be holding my breath on the matter however until, for example, I can be hydrated by means of perfectly simulated water. 😉
So in short, I do actually believe that the biochemistry of the brain creates consciousness, probably by means of associated EM fields, though through a consciousness output mechanism of some sort in any case. Thus for one of our computers to do so as well, it should need to animate such fields, somewhat as our computers also animate computer screens. Here you’d purchase a device with the right physics and plug in your compatible computer to create something conscious. Still, that doesn’t quite get to the function of an evolved conscious creature. Here the brain should exist as one form of computer, driven by means of electrochemical dynamics, while consciousness should exist as a brain-produced second computer, driven to feel as good as it can each moment. Theoretically the brain uses this second phenomenal computer to do what it otherwise cannot do algorithmically: function by means of value-based purpose rather than algorithm alone.
1. There is no mystery in McFadden not contemplating the sort of test you are talking about. As anything other than a purely thought experiment, it is entirely unrealistic.
a) Electrical fields being what they are, there is zero chance of establishing the necessary pre-conditions of reproducibility, given all the contingent non-conscious, electrical-field-generating activity that goes on in the brain all the time.
b) And even if (a) weren’t true, no ethical committee would dream of sanctioning such an experiment, because its safety by definition cannot be established by in vitro, in silico or in vivo testing.
2. You are too hasty in dismissing the possibility of consciousness being a contingent evolutionary quirk, rather than a necessary pre-requisite for “intelligence”. In doing so, you are generalising from a single data point — us. Not a sound methodological approach.
3. However you slice it, a Turing machine is a Turing machine: whether it is instantiated as a single machine, a number of parallel ones, or a cascade of machines “implementing” other machines, nothing at all is gained in what a Turing machine can do; the arrangement can only affect the speed with which it does it. This is a fundamental theorem of computer science. So your double machine cannot do anything that a single one cannot do.
4. If consciousness is in any way an “output mechanism”, then it is ipso facto epiphenomenal. Perhaps you don’t realise this, but your computer’s operation is not actually dependent on the presence of its output mechanisms (e.g. a screen or speakers) for producing the kind of output that would drive them, if present. So that’s at best a bad analogy.
It seems to me that the test I propose should be considered something doable rather than just a thought experiment. To all who appreciate the advancement of empirical science (and I figure you to be such a person in general), such an experiment should actually be wonderful news, regardless of its specific results. It could be that it would appropriately make McFadden’s proposal seem extremely implausible. Or the theory might conclusively be validated, inciting an amazing paradigm shift. Or at worst, technical difficulties might render such testing inconclusive. Even this, however, should still advance the cause of empiricism by opening people’s eyes to the concept of falsifiable rather than unfalsifiable consciousness proposals. I’ll now go into some of the details associated with your current concerns. Furthermore, I’d love more criticism if you can think of any. You might also want to consider the criticism that James Cross recently gave over at his site. https://broadspeculations.com/2022/07/10/what-wind-tunnels-can-tell-us-about-consciousness/#comment-15312
Regarding your 1a) point: the reason that the EM radiation of general neuron firing should not disrupt EM consciousness is that standard neuron firing isn’t synchronized, so such “noise” shouldn’t tend to exist at a high enough energy level to alter the synchronized EM field components that are theorized as consciousness. Of course there’s always the chance that certain random firing will fail to be canceled out by other such firing and will also correlate in the proper phase. This should be quite rare, however. (Note that when I say “random” here I mean it in terms of EM fields rather than general brain-based function, since in that sense neurons of course fire for causal reasons rather than randomly.)
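The cancellation claim here is just the familiar physics of incoherent versus coherent superposition: N equal contributions with random phases sum to a magnitude on the order of √N, while N in-phase contributions sum to N. Here is a toy numerical sketch of that point; it is my own illustration, not anything from McFadden’s papers, and it makes no claim about actual neural field strengths:

```python
# Toy model of incoherent vs coherent summation: each neuron's field
# contribution is represented as a unit phasor. Random phases mostly
# cancel (magnitude ~ sqrt(N)); identical phases add fully (magnitude N).
import math
import random

random.seed(0)  # make the sketch reproducible

def summed_amplitude(n, synchronized):
    phases = [0.0 if synchronized else random.uniform(0.0, 2.0 * math.pi)
              for _ in range(n)]
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.hypot(re, im)

n = 10_000
coherent = summed_amplitude(n, synchronized=True)     # exactly n
incoherent = summed_amplitude(n, synchronized=False)  # on the order of sqrt(n)

print(f"in-phase sum:     {coherent:.0f}")
print(f"random-phase sum: {incoherent:.0f}  (sqrt(n) is {math.sqrt(n):.0f})")
```

On this picture a synchronized component can stand far above the background even when the synchronously firing neurons are a small minority, which is why unsynchronized “noise” should rarely reach the energy level of the synchronized field.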
As established earlier, outside fields should tend not to affect such consciousness, given the high conductivity of the cerebral fluid. Furthermore, I’ve now argued that inside fields should also tend not to provide much distortion, given that the right random synchronous firing should be rare. If McFadden’s theory is true, however, then one thing that should alter consciousness is some sort of dedicated technology that’s specifically set up to affect the associated EM fields. This is exactly what I propose in order to test his theory.
“b) And even if (a) weren’t true, no ethical committee would dream of sanctioning such an experiment, because its safety by definition cannot be established by in vitro, in silico or in vivo testing.”
Actually, an in vivo test would not be ruled out by definition, let alone in practice. I propose that we test living subjects. So far I’ve only discussed testing humans this way, though you raise a good point: it would certainly make sense to test other animals first. This should not only help answer questions about organism safety, but if the theory is true it might help scientists dial in effective EM parameters that could then be tried on a human for oral report. Researchers should be able to tell a lot about whether or not they’re tampering with an animal’s subjective experience, given that doing so should cause it to act differently than expected, as compared with control subjects.
In any case, I’ve recently made what may be a significant improvement to my proposed experiment. Originally I figured that we’d need to put a chain of thousands of neuron-strength transmitters in a person’s head, firing synchronously, for that person to potentially report any phenomenally queer dynamics. Thus for qualifying candidates already scheduled for brain surgery, an offer could be made for compensation if they’d permit a hopefully benign apparatus to be closed up in their heads. Then once recovered, they’d be offered compensation to participate in such clinical testing. Though this might sound dangerous, it could be that specialists would build implants small enough to be reasonably benign. And perhaps this would only be offered to candidates who have little reason to expect long-term recovery anyway. Furthermore, it now occurs to me that the large brain of an elephant might work far better than the small brain of a mouse, given so much more cranial space.
My new thought, however, is that we could build an external machine to do the associated firing and then transmit its combined EM field product directly into the brain of a test subject by means of a far more compact skull-implanted transmitter. This should be far less invasive. Testing should also be far less expensive, since a single advanced machine could be used to test subjects in general. According to McFadden it’s not individual neural firing that matters here, but rather the right combined EM field that would exist as consciousness. Thus I presume that piping in EM fields of similar parameters should tend to distort such consciousness, given that waves of a certain kind tend to be affected by other waves of that kind.
“2. You are too hasty in dismissing the possibility of consciousness being a contingent evolutionary quirk, rather than a necessary pre-requisite for ‘intelligence’. In doing so, you are generalising from a single data point — us. Not a sound methodological approach.”
I don’t know that I’ve exactly dismissed the possibility of consciousness as a contingent evolutionary quirk. I certainly believe that it must have begun that way, and epiphenomenally so. But if our consciousness also evolved to become what it is today (as the vast majority of naturalists presume), then this supposed quirk must also have had adaptive survival uses. Similarly, our hands aren’t generally considered “a quirk”, even if they originally emerged serendipitously without much function. Regardless, McFadden proposes a means by which a phenomenal dynamic exists in the human specifically. Could there be various non-EM-field-based intelligent creatures or machines in the universe, even if EM fields reside as the basis for human intelligence? His theory does not address that question. Furthermore, I’m simply proposing an empirical way to test exactly what his theory does address.
“3. However you slice it, a Turing machine is a Turing machine: whether it is instantiated as a single machine, a number of parallel ones, or a cascade of machines ‘implementing’ other machines, nothing at all is gained in what a Turing machine can do; the arrangement can only affect the speed with which it does it. This is a fundamental theorem of computer science. So your double machine cannot do anything that a single one cannot do.”
I do not dispute the generic series/parallel nature of Turing machine processing. What I dispute is the notion that consciousness can exist as information processing alone, which is to say be causally created without instantiation mechanisms to enact that information. I don’t believe that anything is able to “weakly emerge” that way — instantiation mechanisms should exist for each and every element of reality that exists in a causal world. I can’t think of a single exception to this rule so I’m not about to tag consciousness as the first such exception. Can you think of anything else that’s understood to exist without associated instantiation mechanisms from which to exist? Even entropy is known to increase through associated causal mechanisms.
“4. If consciousness is in any way an ‘output mechanism’, then it is ipso facto epiphenomenal. Perhaps you don’t realise this, but your computer’s operation is not actually dependent on the presence of its output mechanisms (e.g. a screen or speakers) for producing the kind of output that would drive them, if present. So that’s at best a bad analogy.”
It’s not that I consider consciousness to be an output mechanism. It’s that I consider the brain to harbor an output mechanism which creates consciousness. Neuron-produced EM fields are of course what I suspect this mechanism to be. So just as your computer could do all the information processing to light up your computer screen the way it does, and yet not do so if it’s not connected to that mechanism, your brain could do all the information processing to create the phenomenal experience that you currently have, and yet not create it without an EM field to exist as that mechanism. It shouldn’t matter at all what kind of computer generates these electromagnetic fields. Furthermore, theoretically psychology then weakly emerges as a phenomenal situation becomes evaluated by the experiencer. Once we make decisions (EM field based), theoretically the associated EM fields affect our brains to cause them to enact those decisions through associated muscle operation and such. I agree that if the brain were to create consciousness, though consciousness had no feedback effect upon the brain itself (a feedback known as EM field ephaptic coupling), then consciousness would thus be epiphenomenal. You know my thoughts on epiphenomenal existence that somehow also ends up evolving: this makes no sense to me.
In any case McFadden is certainly on my side regarding the need for his theory to effectively be tested. Therefore if he were to see any flaws with my plan, why would he not mention them so that improvements might be made? Why would he not open things up for general consideration given that the effective testing of his theory is in his interest most of all, let alone the interests of science in general? It’s a question that I don’t yet have a good answer for. I suspect however that it hasn’t occurred to him how revolutionary it would be if we could add a technological EM field inside someone’s head that would instantly alter their consciousness for report on the basis of the EM field parameters that we create. It’s an idea that I think he should assess.
@Mike Arnautov & SelfAwarePatterns:
One thing I would say is what I have already said, which is that phenomenal consciousness classically refers to what we traditionally conceive of as “consciousness”, meaning the virtual model of the world that is constructed by your brain, with which we are all acquainted. Another way to put this is that phenomenal consciousness is what “is left over” after one has given a complete functional/physical definition of consciousness. We think that we are acquainted with mental objects in our virtual model of the world (e.g. my visualization of a table), but a functional/physical description would leave out descriptions of such virtual mental objects.
As for why it can be so tricky to “define” phenomenal consciousness, that’s because we typically require our definitions to be high dimensional, whereas phenomenal consciousness is a low dimensional concept. Meaning that we normally define things in terms of our experiences, and not the other way around. Thus, it’s not a surprise that there are so few terms in the “synonym circle”, because phenomenal consciousness is supposed to pick out the low dimensional component of our experiences (that which we define most everything else in terms of). In any case, it’s pretty apparent that we have experiences, and that a purely functional/physical description doesn’t appear to describe anything in our mental model of the world (e.g. how mental objects look or feel).
That’s the whole point of the illusionist program after all, there wouldn’t be a need to invoke an illusion if most people didn’t think they were acquainted with some non-physical thing (our virtual model). According to the illusionist program, we don’t have mental models in the sense that we are acquainted with or seeing/feeling/experiencing some mental object in our model. If you think that it is at least apparent that you do, then you already understand what phenomenal consciousness is.
About the charge that we can’t verify or falsify phenomenal consciousness. I would say that this depends on what you mean by verifying or falsifying! Ultimately, every theory is verified or falsified depending on whether it conforms to our experiences. All scientific experiments, in the end, must be converted to the medium of sense data and experiences in order to pass the test of verification/falsification. Once we realize this, then it becomes patently obvious that phenomenal consciousness is in fact verifiable and falsifiable. We think we have immediate experiences of the mental objects in our virtual model of the world, and this itself serves as strong confirmation that phenomenal consciousness exists. So why don’t physicalists accept this line of evidence? Well, they must argue that such introspective evidence is unreliable, because really the only reliable form of evidence is scientific evidence. But as we have just seen, “scientific evidence” is really just a subtype of experiential evidence, since all evidence is ultimately experiential.
Hence, it’s a bit strange for the physicalist to charge that the phenomenal realists can’t verify or falsify their theory when the physicalist refuses to acknowledge a category of evidence as being veridical. It would be as if the phenomenal realists went around lamenting how the functionalists can’t verify or falsify their theory because they can’t provide any evidence, where evidence only counts as (all evidence – scientific evidence). It’s merely a trivial consequence of the physicalist claim that the evidence for phenomenal realism is inherently unreliable.
My ultimate worry, as I have explained elsewhere, is that while illusionism and physicalism are certainly coherent theories, they are potentially self-defeating. We need our low dimensional intrinsic concepts (phenomenal experience) to justify our beliefs in our high dimensional concepts (physical structures/scientific theories), but once we strip away our belief in the existence of the low dimensional stuff, we also lose the very foundation we needed to believe in the higher dimensional entities. Notice that maintaining that we can still believe in the existence of functional consciousness is no good, since functional consciousness is a high dimensional concept. Additionally, the ability to explain the appearance of phenomenality (which illusionism can readily do) only goes so far as to establish coherency with our experiences, but doesn’t actually avoid the self-defeater (for reasons I mentioned elsewhere).
It seems like your description of phenomenal consciousness invokes a lot of functional concepts, such as a virtual model of the world. But as you noted, the real issue seems to be the residue, the leftover stuff once the functional accounting is in. The thing is, once we’ve accounted for things functionally, we have to remember that they’ve been accounted for and not reconsider them again in terms of phenomenality. The question is, once we’ve done that, what’s left?
Certainly we all have an intense intuition that there is something left. The functional description of the discrimination of red and the resulting dispositions doesn’t seem to cover the redness of red or the painfulness of pain. The question is what the redness of red, or the painfulness of pain, or things like that, really amount to once the functional aspects have been accounted for. Are we talking about something like the distinction between water and H2O? Which of course would be a category mistake. Or is there something really left over?
Is our inability to describe phenomenal properties because they’re intrinsically ineffable? Or because our judgment that they’re there is a mistake, an effect of the limitations in our introspective abilities?
It’s true that science is dependent on experiential data. Although there’s no reason that experience has to be phenomenal. It can be functional. And science has always had to be careful about which experiences to pay attention to and how to interpret them. All observation is theory laden. And all theory is contingent. If you think about it too long, it can seem like a foundation of sand. But again, I think falling back on which understandings are more accurate at predicting future experiences, and which are less, helps us to cut through these quandaries.
I’m not sure I understand the high dimensional vs low dimensional concept you discuss. But I think I understand you to be saying that phenomenal concepts are more primitive. Except that primitive concepts are a functional notion as well, one that can be explained with cognitive impenetrability. I think I’ve mentioned before that a software bit is as primitive as it gets in software, but we know a bit is usually implemented with a transistor, which can be further reduced, and is usually designed by other software using bits.
There are two problems with describing phenomenal consciousness as “whatever is left”. Firstly, as a definition, it is empty. Consider the analogous description of life as whatever is left when you eliminate everything non-living. It is accurate as far as it goes, but it tells us absolutely nothing about what life *is*. (Which, incidentally, is a problem almost as hard!)
Secondly (and this is a problem for you, rather than for me), as Mike S has already pointed out, it allows for the “whatever is left” to be nothing at all. You appear to reject that possibility out of hand. Me, I am more open-minded. However, statistics being one of my numerous hats, I accept the default “null hypothesis” of there being no extras until some evidence (not intuitions, not opinions, not hand-waving) shows otherwise.
“There are two problems with describing phenomenal consciousness as ‘whatever is left’. Firstly, as a definition, it is empty. Consider the analogous description of life as whatever is left when you eliminate everything non-living. It is accurate as far as it goes, but it tells us absolutely nothing about what life *is*. (Which, incidentally, is a problem almost as hard!)”
That’s because your analogy invokes no restriction of modal scope, since normally we rely on context to achieve the required scope. For example, defining the human skin as “everything that is non-skin” makes little sense of course, but defining it as “whatever is left after you have taken out all the innards of a human being” makes more sense. The context of the activity is supposed to provide the relevant modal quantification.
In any case, if you find the definition unhelpful, please feel free to disregard it! As I already said, phenomenal consciousness is supposed to pick out our acquaintance with the mental objects in our virtual model of the world. Where “mental object” is not an independent ontological entity, but just that thing which matches the description of what you are visualizing.
It is of course conceivable that we are not acquainted with any mental objects. That, when I am visualizing a table in my mind, while I think and feel it has color and depth, there is in fact no such thing. But the ontological reality of phenomenal consciousness is separate from the descriptions of phenomenal consciousness. The descriptions of phenomenal consciousness (what it’s meant to refer to) will not change even if it turns out that “there’s nothing left” in our functional/physical descriptions. We should not mix the meaning of a term with questions of existence.
“Are we talking about something like the distinction between water and H2O? Which of course would be a category mistake. Or is there something really left over?”
Hey Mike, I think it’s important to make a distinction between ontological reality and the descriptions of phenomenal consciousness (as I mentioned to Mike Arnautov). The illusionists, like Frankish, are saying that functionalism can fully account for the ontological reality, even if it leaves out the descriptions of phenomenal consciousness. A separate issue, which you appear to be bringing up, is whether our descriptions of phenomenal consciousness pick out any meaningful difference from our functional and physical descriptions. Whether, in other words, talk of phenomenal consciousness is just committing a category error.
We can’t even begin to tackle the question of illusionism until we address the proper referent of phenomenal consciousness. As I’ve already mentioned, I think phenomenal consciousness is meant to tag that intrinsic aspect of our experience, and so it’s pretty clear that any error about phenomenal consciousness would be an ontological one, and not a category mistake.
“It’s true that science is dependent on experiential data. Although there’s no reason that experience has to be phenomenal. It can be functional.”
The issue is that a functional experience is a high-dimensional or non-primitive concept. That means that what you call experience (functional consciousness) is the explanans and not the explanandum. Meaning we can further explain functional consciousness in terms of some other more primitive thing (if you deny this, then consider how you first came to know that your consciousness was functional). What is that other more primitive thing? It can’t be experience, since we just acknowledged that under your definition it is the high dimensional concept. Once you deny phenomenal consciousness you then seemingly knock away the ladder that buttresses your entire worldview.
“Except that primitive concepts are a functional notion as well, one that can be explained with cognitive impenetrability. I think I’ve mentioned before that a software bit is as primitive as it gets in software, but we know a bit is usually implemented with a transistor, which can be further reduced, and is usually designed by other software using bits.”
This merely demonstrates that our primitive referents might have complex ontological structure. But the issue is not that phenomenal consciousness can’t have some complex ontological structure (as it would under panpsychism for example), but rather that it can’t be functional (because they are mutually exclusive concepts). We have to distinguish between ontology and epistemology; our knowledge of functional consciousness is non-primitive, and so it has to be explained in terms of something else.
The challenge for you is coming up with some experiential aspect that is more primitive than functional consciousness, but which is non-phenomenal.
So, I’m taking “high dimensional” to be reducible and “primitive” to be irreducible. Let me know if that’s a misinterpretation. And I totally agree we have to distinguish between ontology and epistemology.
Generally, the idea is that phenomenal properties are intrinsic, making them both irreducible and non-relational. This is understood to be an ontological irreducibility. Which fits with the view that phenomenal properties cannot be reduced to the physical.
But from a functional perspective, intentional representational perceptual properties are cognitively impenetrable, subjectively irreducible, that is epistemically irreducible, from the perspective of inside the system. Like the software that can’t reduce the bits it runs on, we can’t reduce perceptual properties from inside the system. But like many things epistemically irreducible, we can succeed in reducing them by taking into account other perspectives. Which connects with your point about functional perception being high dimensional.
Just as software using bits can be used to design transistors, which go on to implement bits, we can use the perceptual properties available to us subjectively within the system, to understand data collected from outside the system on how those properties can be reduced to more primitive concepts (computation, neuroscience, biology, chemistry, physics, etc). Think of it as like a highly elaborate mirror that allows us to look at things we can’t see otherwise (such as our face), to get around the blind spots of the single internal perspective.
This is sort of an elaboration on what I said above, but I’m hoping it clarifies why I don’t consider the challenge you describe as an actual challenge. Not that I expect us to agree on this today. 🙂
You say: “It is of course conceivable that we are not acquainted with any mental objects. That, when I am visualizing a table in my mind, while I think and feel it has color and depth, there is in fact no such thing.”
If you think that as a functionalist I must aver this to be the case, you are quite mistaken. I am in no way tempted by the implied duality of “us knowing” and “mental objects to be known”. On the contrary, in my view this supposed duality is a fundamental mistake that has been clouding the mind/body issue for a long time. There is no metaphysical ontology of “the mental” separate from our experience. Experience is one with the experiencer.
Sorry for being slow in replying; I recently had a health scare with a DVT/blood clot. But everything is well now, and I'm back to being a healthy young man in his late 20s.
@ Mike Arnautov:
“If you think that as a functionalist I must aver this to be the case, you are
quite mistaken. I am in no way tempted by the implied duality of “us knowing”
and “mental objects to be known”. On the contrary, in my view this supposed
duality is a fundamental mistake that has been clouding the mind/body issue
for a long time. There is no metaphysical ontology of “the mental” separate
from our experience. Experience is one with the experiencer.”
When I speak of acquaintance with a mental object, I don’t mean some metaphysical object. I only intend to capture the content of what our mental experiences seem to be. For example, it probably seems to you that when you visualize a table in your mind’s eye, that you are experiencing some mental object which is describable as having depth, color etc… But a functional/physical description of your cognitive system (i.e. brain) would completely leave out such descriptions of your mental content. So that means you aren’t really having an experience of an object having depth, color etc… And what we call experience (assuming we still wish to use the word, as physicalists do) are really just brain processes (e.g. descriptions of synaptic activations and neurotransmitter propagation), and purely describable as such.
Notice this phenomenal realist picture does not commit you to the notion that there’s some metaphysical ontology “out there”. Rather, it simply commits us to the belief that certain brain states have a special property, and this property is the property of instantiating experiential states, where such states refer not to the (traditional) physical processes of the brain, but to the genuine experience of being acquainted with mental objects in our virtual model of the world. Such objects of our phenomenal virtual model can exist as the non-physical properties of our mind. No substance dualism required.
Or did you mean to disagree with my assertion that descriptions of mental objects are incompatible with physicalism/functionalism?
@ SelfAwarePatterns (the other Mike 🙂):
The analogy with the software bit just shows that it is possible to have an epistemic irreducible component of our cognitive system which is ontologically reducible. I of course agree with this, and I also agree that our phenomenal consciousness could be such a thing. I am not committed to the notion that phenomenal consciousness is ontologically irreducible, only that it is not ontologically reducible to the physical.
We should also make a distinction between:
1. Illusionism being able to explain the existence of our epistemically intrinsic (but not ontologically intrinsic) phenomenal properties
2. Illusionism explaining our beliefs in the existence of our epistemically intrinsic phenomenal properties
I agree that illusionism can do 2, but I disagree that it can do 1. Illusionism can’t say that phenomenal properties exist from some subjective inner perspective, but that really, they are ontologically reducible to the physical. If we actually are experiencing (subjectively) phenomenal properties, then our knowledge by acquaintance with such properties immediately tells us that they cannot be functional/physical (because our descriptions of them are non-physical, see my point to Mike Arnautov above). So, illusionism just has to deny that we have any epistemic access to such phenomenal properties, meaning we have no intrinsic epistemic access to some subjective phenomenal property (which may or may not be ontologically intrinsic). We just have mistaken beliefs about them.
I guess this all comes back down to whether you accept knowledge by acquaintance. I would say that if you don’t, then it seems like we inevitably have to accept some form of coherentist theory of knowledge, since we don’t have privileged epistemic access to a primitive that we use to form the foundation of our knowledge of high dimensional concepts. Unless you think there exists some form of functional primitive analogue?
So, I agree that my worldview does not count “as an actual challenge” provided that you accept that illusionism is just asserting 2. It also seems to me that you must buy into some coherentist picture, as I mentioned. I was getting this sense earlier when you talked of everything being hopelessly theory laden. By contrast, the phenomenal realist will deny that our phenomenal intrinsic experiences are theory laden, or at least, if they are theory laden, they are intrinsically theory laden (meaning we just come into the world with these assumptions and can never revoke them).
Thanks for the conversation guys.
Wow. Glad you’re okay.
On your 1, I’d say that when I discussed epistemic vs ontological irreducibility, I was talking in terms of functional perceptions, not phenomenal properties. Which I guess is another way of just affirming 2, but also noting that illusionism has access to the functionality which phenomenal realists typically take to cohere with the phenomenality.
I can’t say I accept knowledge by direct acquaintance, at least not in the sense that there can be knowledge without causal mechanisms. And of course, the knowledge will only ever be as good as those mechanisms, and no mechanism is perfect, so no knowledge is infallible.
I’m not sure what you mean by a coherentist picture, but I do think predictive coding theories have a lot going for them, in which perception is a predictive model fine-tuned with error-correction signaling from sensory information. So I think there are always models involved, and of course a model is a theory. So definitely in my view, all observation is inescapably theory laden.
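The error-correction idea can be caricatured in a few lines (a deliberately crude sketch of the general scheme, not any particular predictive coding model): the "percept" is a running prediction, nudged toward the sensory stream by the prediction error.

```python
# Toy predictive-update loop: the prediction is the model's "percept",
# and each sensory sample only influences it via the error signal.
def perceive(samples, learning_rate=0.2):
    prediction = 0.0  # the prior, before any sensory input
    for s in samples:
        error = s - prediction               # error-correction signal
        prediction += learning_rate * error  # fine-tune the model
    return prediction

print(perceive([10.0] * 50))  # converges close to the true signal, 10.0
```

Even in this trivial form, what the system "perceives" is never the raw input but the state of its model, which is one way of putting the theory-ladenness point.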
Thanks Alex. Take care of yourself.
Glad to hear that the scare was unfounded. Even so, such things can be pretty upsetting. My sympathies.
Re your response to me… I am beginning to wonder whether philosophically speaking I have an unfair advantage. I have a condition only recently acknowledged and labelled by science: aphantasia. So when you say
“For example, it probably seems to you that when you visualise a table in your mind’s eye, that you are experiencing some mental object which is describable as having depth, colour etc… But a functional/physical description of your cognitive system (i.e. brain) would completely leave out such descriptions of your mental content.”
I just smile. You see, when I “visualise” a table, I do not actually see it in any way — there are no associated phenomenal qualities, just my conceptual recollection of them. It is only recall that is so affected: seeing the same table, I (generally :-)) recognise it as such. In fact, it took me decades to work out that when people were talking about the mind’s eye, as you do, they were not being purely metaphorical.
Do you think that makes me a philosophical zombie? 🙂
Wow that is very interesting! I would say that you’re definitely not a philosophical zombie. 🙂
I used the example of the mind’s eye because it is easier (for most people) to conceptualize the imaginary table as being separate from the real table, whereas most people might not instinctively categorize their perception of a table as distinct from the real table (even if they do understand that there is a difference). But the same logic also applies to the descriptions of the sense datum of a table (or any other object).
I think it’s very possible that our formative difference in visualization capability led to our later difference in philosophical outlooks. I would say that I’m a heavy visualizer, and it’s not an exaggeration to say that the majority of my conscious life is spent encapsulated in my “virtual” model of the world. If I’m thinking heavily on a subject for example, I’ll usually just pace back and forth in my room (probably while staring at the carpet, although I’m not sure honestly) while visualizing the subject matter in my head. Occasionally, I’ll be brought back to reality when I’m about to bump into a wall or something (meaning that all of a sudden, my virtual vision collapses and I’m looking at the wall in my room), so my unconscious brain will take over at times to prevent me from doing something stupid.
Since I spend so much of my time in this “dreamworld”, I have a hard time taking illusionism seriously. To me it would basically amount to a denial of 90% of my conscious life!
Very interesting conversation Mike!
This is why I dislike the term “illusionism”. It may be well understood in academic debates, but in non-academic philosophical conversation (like this one) it is liable to confuse. As far as I am concerned, nobody is denying the reality of your inner experience. I’ll try to explain by means of two analogies, reflecting (as I see it) two main branches of “illusionism”.
1. A stick partially immersed in (e.g.) a glass of water can appear broken due to refraction. That appearance is real. It can be captured by a camera. The stick is real too. When we point at it and say “stick”, we are referring to a real object. It’s just that the quite real way it appears to us does not match the reality of the stick itself.
2. A rainbow too is real and can be captured by a camera. But this time there simply is no object behind the quite real appearance (something children tend to disbelieve — I did! 🙂 ). So pointing at it and saying “rainbow” does not amount to a reference to an object — there simply is no referent.
As I see it, when some illusionists say that (phenomenal) consciousness is quite real, it’s just not what you think it is, they treat phenomenal consciousness like the stick in the first analogy above. There are others who simply deny that phenomenal consciousness refers to some underlying ontology. In doing so, they treat it like the rainbow in the second analogy. Neither camp denies the reality of your actual experience.
I think you are right that it is likely our very different inner experiences that lead us to diametrically opposite conclusions. Let me hazard a guess… For you, seeing something with your mind’s eye is so much like seeing something in the real world that it is very difficult to avoid concluding that, just as objects in the real world have quite real (sensory) properties such as colours, objects in your “virtual” world must also have such properties, which are in some elusive sense “real”! These purely phenomenal qualities (a.k.a. qualia) then become a philosophical puzzle, since they appear to lack any kind of substrate.
OTOH I have no temptation to equate imagining e.g. a red rose with actually seeing one, because I can only imagine it conceptually, not visually. Hence I don’t have a problem with substrate-less phenomenal properties.
What I think is actually the case is that when I see a red rose, the parts of my brain responsible for processing visual information get activated in a particular way, and that *is* my sensory experience. If I were able to voluntarily activate them in the same manner, I would “see” a red rose in a purely phenomenal, non-sensory sense. I cannot do that, but you can. To my mind this neatly accounts for both our different experiences, without creating a mystery.
You might object that my explanation relies on distinguishing sensory properties from phenomenal ones (a distinction often elided in philosophical debates). But I consider them to be the same, with just that one difference: their cause.
Thanks for the lengthy reply. I should clarify that when I refer to my experiences, I just mean their phenomenal qualities. So, I interpret both weak and strong illusionism (your 1 & 2) as mainly denying that such qualities exist.
If I understand you correctly, you are saying that neither position actually denies the reality of such phenomenal qualities. I gather that you view the first position (weak illusionism) as merely asserting that we are mistaken about some of the attributes of these qualities (like their ineffability and private natures?), whilst the second stance (strong illusionism) is tantamount to the denial of substance dualism. That’s how I read your description of the second camp’s denial that “phenomenal consciousness refers to some underlying ontology”.
However, I don’t interpret the illusionist camp as saying either of the above (although I will add some caveats for W illusionism later). In fact, I don’t perceive phenomenal qualities as having to be based on some underlying ontology in the sense that they have to be substrate dependent, and I am sympathetic to the stance that our phenomenal qualities can be public and effable to a degree. We might imagine some future where we have the technological capacity to perceive each other’s phenomenal qualities by “linking” up our brain systems, for example.
If that’s all the illusionists are saying, then there would be very little disagreement of substance between them and the phenomenal realists. Instead, I think the strong illusionists, like Keith Frankish, are making the much stronger claim that we don’t have any phenomenal experiences, full stop. The distinction with weak illusionism is more complicated and somewhat confused in the literature. At times there is an equivocation going on, where weak illusionists use the term ‘phenomenality’ to refer to physical descriptors, in which case they are just saying the same thing as the strong illusionists but using different language. At other times there appears to be a more substantive difference.
I don’t want to get too much into the delineation between S and W illusionism though. What I will say is that there comes a point where we are simply not speaking about the same thing. To use the rainbow analogy, there’s a big difference between asserting that:
1. There is no object in the sky which emits the colorful light that reaches your retina.
2. There is no object in the sky, nor is there even any colorful light being captured by your retina.
The first assertion is that a rainbow is an illusion, whereas the second is simply the denial that a rainbow exists. An important point that needs to be stressed is that one can’t first declare that an illusion exists without finding some common descriptor (e.g. the light being perceived in the rainbow example) between the realist and the illusionist. Once we tag this common descriptor, we can then point out the mistaken ontological attributions that we think the realist is adding to the description (by ascribing its cause to some mistaken ontological entity, for example).
But there is no such common descriptor between the phenomenal realist and the illusionist in regard to phenomenal consciousness, because the phenomenal realist denies that there exists any such descriptors for phenomenal consciousness, like intrinsicality, privateness, qualitativeness etc…
I think this is basically a settled issue among the illusionists and realists, with people like Frankish conceding that they think that phenomenal properties are not real in any sense. And not just that they have no substrate and that we are mistaken about a great deal of many of their qualities, but rather that we aren’t experiencing any (qualitative) feelings, emotions, or visualizations of objects in our mental model of the world. If we think we are, then we are simply mistaken.
Illusionists and physicalists continue to use such descriptions, but at all times they are referring to purely physical descriptors of our brains (e.g. an emotion = a particular hormonal and/or electrochemical brain state). This is a very important point, because it means that they’re not actually talking about the same thing as the phenomenal realist, nor arguably as ordinary people using folk psychology.
For example, a medieval person obviously doesn’t mean the term “angry” to refer to “a particular hormonal and/or electrochemical brain state”. So, such a person is clearly not talking about the same thing as the physicalist. Again, talking about the same thing would require that there exist some common descriptor between the physicalist and the medieval person. Such a common descriptor is supposed to be the phenomenal experience, but because the illusionist has denied that there are any qualia, there can be no common referent.
In some cases, the illusionists might even be forced to abandon folk psychological vocabulary, as for example with pain. It turns out that there might not be a biological kind that is “pain”, and for this reason, one of the most popular philosophical theories of pain is actually pain eliminativism. The phenomenal realist of course can assert that pain is still real in the phenomenal sense; when we talk about ‘pain’ we mean to refer to some phenomenal kind that underlies all the different physical phenomena, but the illusionist has no such recourse.
I agree that there is an essential difference between “There is no object in the sky which emits the colourful light that reaches your retina” and “There is no object in the sky nor is there even any colourful light being captured by your retina”. As you rightly say, the first asserts rainbow to be an illusion, while the second denies it altogether. Hence the first is the illusionist stance, whereas the second is an eliminativist one. You seem to view strong illusionism as a form of eliminativism, which is, I reckon, a mistake — though a fairly common one. E.g. Dennett is often classed as an eliminativist, despite saying over and over: consciousness exists, it’s just not what you think it is.
But beware the curse of analogies! 🙂 They are analogies, not models. Thus in the case of a rainbow you can point to a medium (light, or in your terminology “common descriptor”) common to both readings, which makes a rainbow still something “out there”, which is being perceived. But from a functionalist standpoint, there is no such separation involved in experiencing phenomenal qualities and hence no light-like medium delivering it to us. There is no phenomenal quality to be perceived simply because that quality and our perception of it are the same thing. So in my understanding when illusionists deny phenomenal qualities, they do not deny your experience — they deny the existence of those qualities as anything other than your experience itself. That’s how I understand Frankish’s denial that they are real in any sense.
I note with interest your example of “a medieval person obviously doesn’t mean the term “angry” to refer to “a particular hormonal and/or electrochemical brain state”. Doesn’t seem at all obvious to me! May I suggest Richard Rorty’s “Philosophy and the Mirror of Nature”? In Chapter II “Persons without minds” he proposes a thought experiment of aliens whose psychological talk consist solely of “hormonal and/or electrochemical brain state” and the conundrum thereby posed to human philosophers as to whether these aliens have phenomenal consciousness. Very instructive.
I think illusionism is a form of phenomenal eliminativism, yes. There’s more to consciousness than our phenomenal experiences, so it’s no contradiction for people like Dennett to maintain that they still believe in consciousness.
“So in my understanding when illusionists deny phenomenal qualities, they do not deny your experience — they deny the existence of those qualities as anything other than your experience itself.”
This seems to be just a reiteration of your strong illusionism = rejection of substance dualism stance. I think some of Dennett’s earlier work (e.g. arguing against the Cartesian theatre) can be summed up as being about something like this, but the later illusionists like Frankish are, in my opinion, making the much stronger claim that we don’t have phenomenal qualities in any sense. This includes the sense of phenomenal qualities being “just experience”.
If you read Frankish’s latest, you’ll see him refer to his denial that there exist any states, perceptual/physical/functional or other, which possess phenomenal character. It’s not a denial of location (i.e. there are qualitative states, but they don’t go over and above our experiences), but rather a denial of our experiences having phenomenal character.
That after all is precisely the point. If illusionists admitted that our experiences have a phenomenal aspect, then the hard problem of consciousness is not solved.
Hi Alex and Mike,
Other Mike here. You guys are having an excellent discussion.
I’m just jumping in to share this snippet from Frankish’s response to me in a conversation on Twitter. I think it clarifies exactly what he’s denying. (His philosophical writing tends to use terms in a technical sense, which I know has misled me at times.)
I’m sharing this because there are people (typically scientists) who hold a functional notion of phenomenality. (Anytime you see me writing about it in this blog’s archives, it should be read that way.) This new paper is a prime example: https://academic.oup.com/nc/article/2022/1/niac007/6573727?login=true
Of course, functional phenomenality is neither what strong phenomenal realists nor strong illusionists are talking about. But functional phenomenality doesn’t seem to be a hard problem issue.
Thanks Mike, that was helpful! I agree with the message in his tweet and see it as espousing what I was saying.
I think the simplest way to put forward my position (along with many other phenomenal realists), is something like this:
I am having mental experiences. When I visualize a table in my mind’s eye I “see” an object with rich color and contrast, depth and width etc…
This “seeing” is real only in the sense that my brain states really do have the property of being able to conjure up these visualizations, by constructing this virtual model of the world that I call my phenomenal consciousness. This doesn’t imply that our qualia go over and above our experiences or anything like that. Another way to put this is that our functional/brain states have a special “mode” of presentation, which presents itself in the form of the construction of my virtual model of reality (consisting of these mental “objects”).
On illusionism, I believe there is no such virtual model of the world. If I were an illusionist, then I would have to conclude that I am not really seeing a virtual object in my mind’s eye. There is no experience of something as having color or depth etc… We might say that my mind is completely “dark inside”. This doesn’t mean of course that illusionists think that we’re blind or incapable of making reports about our imaginations, since we still retain functionality. But descriptions of “functionality” don’t capture the content or description of what the mental object in my mind’s eye looks and/or feels like (hence the hard problem of consciousness).
This is similar to how our unconscious brains can function quite well despite not having the ability to construct any virtual model. Everything is “dark inside” for my unconscious brain, but that doesn’t stop it from carrying out its functions. On illusionism, everything (including our conscious selves) continues to function despite having no experiences of mental objects. The only difference between our conscious brains and unconscious brains is that the conscious brain has a special type of functionality. This special type of functionality might be access to our verbal speech area, or some connection to a special attention center, but the point is that it doesn’t include this special access to my virtual model of the world that I previously mentioned.
The “illusion” merely refers to the fact that we think we are having rich experiences, when in fact we are not.
The issue I see here is that your description of the phenomenal seems to include a lot of functionality. Things like virtual models of the world seem inherently functional in nature. A self-driving car has something like it. Of course, that fits with the functional notions of phenomenal that are out there.
But this statement seems to exclude that interpretation:
“But descriptions of “functionality” don’t capture the content or description of what the mental object in my mind’s eye looks and/or feels like (hence the hard problem of consciousness).”
My question here is, what is the functional account missing, aside from something fundamental, indescribable, and scientifically inaccessible? What richness, aside from the things that match those characteristics, is being excluded?
We seem to be using the phrase “virtual model” in a completely different sense. By virtual model, I just mean the model populated by my mental content, like my visualizations. Indeed, the example of my visualization shows that a functional description is inadequate. Nowhere in the physical/functional accounting of my brain will you find a description of what an imaginary table looks like. So, we would have to deny that I have any such experiences of *that* kind. But if I don’t have such experiences, then really, my so-called conscious life is basically “dark inside” in the exact same way as it’s dark inside for my unconscious brain.
The argument is not that a functional account is missing something “fundamental, indescribable, and scientifically inaccessible”. That would be begging the question. Rather, the phenomenal realist starts from the assumption that they are having experiences (for the purposes of this conversation, let’s stick to the visualization example). She then notices that any descriptors of this experience are being left out by the functionalist/physicalist account, and then (and only then) concludes that this experience is seemingly “fundamental, indescribable, and scientifically inaccessible”.
Just to give a simple example. In my visualization of a table, there exists a spatial arrangement of parts. Each part has a location in relation to the next. Of course, this location is not physical (it’s not located in our physical universe). Rather, the spatial relationship maps the parts in my virtual model. But this mapping relation (which tells you how things are spatially arranged and look in visualization) is not at all described in a physical and/or functional account of my brain. It is completely absent (along with a lot of other things).
Asserting that a functional account isn’t missing anything important is just like saying a complete description of my astronomy book, down to the atoms of the ink on the page, is a perfect substitute for a physical description of the Milky Way galaxy. Yes, there may be a complex representation relation between the book and the galaxy (the words are supposed to express the physical facts), just as there is a complex representational/causal relation between my brain and my virtual model, but it’s pretty obvious that they are not the same thing.
Certainly if we do a brain scan, we’re not going to find a picture of a table anywhere, nor any written description of it. We wouldn’t find that in a low level scan of my laptop, even if it had a picture of a table in it, or even the plans for building one. At least unless, in both cases, we accessed it in the right manner using the system itself (or in the case of the laptop, a system equipped with the same protocols).
But the main thing to realize is, your visualization of a table has a functional account, as an organism running a sensory simulation for communication purposes. If I asked you what the table was made of, was it rectangular or circular, a dining room type table or a work one, you would be able to access your model to provide answers, a model implemented as a constellation of predictive firing patterns in your cortex. The act of accessing and utilizing that model would be the functional experience of the visualization. The experience is a part of the functionality, not something in addition to it.
You say the phenomenal realist notices that any descriptors of the experience are being left out by the functional account. What are these descriptors? Of course, the usual answer is that they’re ineffable. Often they’re referenced as something like the redness of red, the painfulness of pain, etc. It’s like an assertion that the mental paint of the experience is being left out.
Except, it isn’t. The impression of mental paint is a pattern of conclusions, of predictions, categorizing / associational conclusions in the case of colors. And the experience of those conclusions is the rest of our subsystems reacting to and utilizing those conclusions for various purposes.
The problem, of course, is we don’t have access to the underlying work that leads to those conclusions. So introspectively, they seem to just float out there, independent of anything. They seem like something that happens to us rather than something our nervous system does. This is where the illusions creep in: the intuitive conclusions we reach due to the limitations of introspection, conclusions that mislead us because the tool we’re trying to use didn’t evolve for that role.
So I’d flip your analogy. Saying the functional account leaves something out is like looking at an overly simplified astronomy book, then seeing a more scientifically detailed one, and complaining that the second account misses things from the first, when what’s really missing are the first’s simplifications.
“But the main thing to realize is, your visualization of a table has a functional account, as an organism running a sensory simulation for communication purposes. If I asked you what the table was made of, was it rectangular or circular, a dining room type table or a work one, you would be able to access your model to provide answers, a model implemented as a constellation of predictive firing patterns in your cortex. The act of accessing and utilizing that model would be the functional experience of the visualization. The experience is a part of the functionality, not something in addition to it.”
How does this demonstrate that a functional account doesn’t leave out my inner phenomenal experiences? Of course, if you first assume that you have a mental virtual model, then you may give a functional account of how that model behaves. But the functional account doesn’t actually describe the model, the model is basically a black box, and the physical description is about what is on the periphery (the model’s behavior). As long as the outputs of the black box are the same, your functional account can’t tell the difference (hence the example of the philosophical zombie).
That’s because, as I earlier noted, descriptions of mental content, like the spatial relation between parts of my imagined table, are left out of a physicalist functional account. Forget ineffability; just stick to trying to explain that one small aspect of my mental model. You mentioned this point yourself early on, but then simply dismissed it. But it’s the most vital element of this topic, and it is what demonstrates that my mental model (if I’m really having phenomenal experiences) is not identical to “predictive firing patterns in (my) cortex”. If it were identical, then descriptions of the mental table, like its spatial arrangement, would be included in descriptions of those predictive firing patterns.
As far as I understand you, your argument here is that descriptions of mental content (like spatial relations between virtual objects) don’t fall out of descriptions of physical systems in my brain, because the former descriptions are incomplete (but not wrong, presumably). But if that were true then the physicalist account would be additive, it would take the basic facts about my mental experiences, and then add additional facts to them to change my overall model. An example of this might be something like showing how a table is not really a table, but a component of some greater whole. The individual facts of the ‘table’ remain true, but we drew incorrect conclusions from them.
Yet the physicalist account is eliminative, not additive. It would deny every single fact about my mental model (remember, I’m talking about what’s in the black box, not on the periphery). Facts like the spatial arrangement, or the way the table looks (how bright it is etc…) are not accounted for in a physical description of my brain states.
Therefore, I’m not sure how to take your astronomy book analogy. Are you saying that phenomenal consciousness is like an epicycle (non-existent), and that we should not complain that a more detailed astronomy book leaves such things out, since they don’t exist? If that’s the case then I certainly agree that’s what the illusionists are saying after all, but I see this as an unhelpful side discussion which doesn’t address the main argument. The whole point of this conversation with Mike A. was that he was claiming that illusionists actually can account for the existence of my phenomenal consciousness/reality.
Be careful not to confuse functionalism with behaviorism. Behaviorism minimizes talk of mental states. But functionalism is all about mental states, that they are fundamentally about what they do, their causal relations to stimuli, behavior, and other mental states. So there’s no black box, at least not in principle. (Of course, there may be black boxes in terms of the current limits of our knowledge of the brain’s operations, but those are always shrinking.)
So there are no details of your model that functionalism can’t, in principle, account for, at least no details that are caused by and have causal effects in the world, including the brain.
You can posit epiphenomenal entities that functionalism can’t account for. But for us to talk about them means that our talking about them can’t be caused by the entities themselves. If we go there, functionalism can’t follow, but there’s no way to provide evidence for something like that, since any evidence will require causal interactions.
So the spatial relations, to the extent you can imagine them, are part of an information processing model in your brain. It’s worth noting that a stroke, with a lesion in the wrong regions, can affect your ability to perceive and imagine spatial relations.
This, incidentally, is one of the things I think most non-physicalist philosophers overlook: the effects brain injury can have on people’s experience. The wrong kind of injury can compromise or take away people’s ability to perceive motion or colors, recognize faces, feel fear or pain, or just about any other aspect of conscious experience.
So I can’t see the conclusion that the functionalist account denies every aspect of your mental model. It certainly does not deny so much that we’re left in the dark, unless it’s a darkness that is the absence of epiphenomenal light.
Yeah, we’ve probably hit the limits of that astronomy analogy.
I’ll let you and Mike A hash out the specific question you guys are discussing. Sorry, I didn’t mean to become entangled in it.
Hey Mike S.
Just to put my cards on the table, I am sympathetic to a kind of panpsychism. So, I think that brain states are mental states, but I just think that the traditional physical description of brain states won’t exhaust or fully capture the reality of my (our) mental state(s). Thus, we need to change our traditional conception of the physical.
In any case, it is your claim that a functional description would include descriptions of my mental content, including the spatial relations between objects, and their properties of color, contrast etc…
But I’m not actually denying this, I think I said very early on in the last thread that I think phenomenal consciousness can be functional (though I also think it’s epistemically intrinsic, but that’s not important). Rather, what I’m saying is that a (traditional, non-panpsychic) physical description is incomplete.
Let’s remember that analytic functionalism (the kind you need) is supposed to reduce to the physical. We might imagine different kinds of functionalism, a functionalism of non-physical states, for example. But the kind of functionalism that the illusionists subscribe to is purely physical, meaning any functional properties are solely the properties of brain states. That means at the end of the day we need to demonstrate that a description of our physical brain states can capture a description of my mental content. You can give a description of my mental content in functional terms, but it won’t be reducible to a physical description, which defeats the entire point of the illusionist programme.
Once you admit that a purely physical description won’t capture the content of my phenomenal states (like a description of a visualized table), and you concede that functional states are entirely reducible to the physical, then it seems like you must admit that your kind of functionalism completely leaves out such mental content. That, to me, is very much equivalent to being left in the dark.
And notice that I never brought up ineffability, intrinsicality (except as a side note) or anything like that. I feel that this is really beside the point. If physicalist functionalism could just explain how my visualized table looks and feels to me, then I would be sold.
If I thought consciousness couldn’t be explained physically, panpsychism would probably be where I’d land. It makes room for additional ontology while stepping around the issue of having to identify the differences (aside from functionality) between brains and other physical systems.
Interestingly enough, if what the panpsychist is talking about are strictly non-functional items, then in most cases the panpsychist and functionalist will have the same attitude about whether a particular system is conscious. The panpsychist sees an extra ontology there that the functionalist doesn’t, but operationally they otherwise mostly seem identical. It’s why, I think, I can find so much resonance with many of David Chalmers’ views.
Where we diverge would be in explaining our intuitions that there is more there than the functionality. A functionalist takes this to come from the limitations of our introspective abilities. A panpsychist takes it to reflect actual reality. As Chalmers has noted in his meta-problem discussion, if the intuitions can be explained without reference to the putative reality, then that reality will appear redundant. On the other hand, if no such explanation can be found, then the reality looks a lot more likely. (Although I can’t see that the redundant conclusion would rule out an epiphenomenal reality.)
There actually are non-physicalist illusionists. James Tartaglia seems to be an example, although I haven’t read him at any length. I’ve also heard illusionist type views grounded in eastern traditions, most of which don’t seem committed to physicalism.
On not mentioning ineffability, intrinsicality, etc, I think the disagreement here is that any of the specific things you do mention seem subject to a functional description, and therefore a physical one. It’s the indescribable stuff which can only be pointed at that’s the bone of contention. But it doesn’t seem like this is something we’re going to resolve in this conversation.
“I think the disagreement here is that any of the specific things you do mention seem subject to a functional description, and therefore a physical one”
This is exactly what I am denying in fact. For the purposes of this conversation, I’ve just skipped over the intrinsic components of our (my) mental content, to focus on the dispositional/relational facts. It’s my contention that a physicalist account won’t capture certain dispositional facts about my mental content (like in the visualization example).
The first important point that bears mentioning here is that being able to give a functional account of x, does not entail that one can thereby give a physical account of x. We can give a functional and dispositional description of God’s work, but it’s obviously not going to be reconcilable with our scientific and physical conception of the world.
Similarly, if we want to maintain that our mental content is reducible to the physical, then we are going to have to give physical descriptors of all of our mental content.
Therefore, the mere fact that we can provide a functional description of what my mental content is doing, is actually of no help. Such a functional account is supposed to pick out or refer to certain dispositional states, just as the description “pumps blood” is supposed to refer to the physical/dispositional nature of the heart (which itself can be understood in dispositional terms).
But at the end of the day, we wanted to know whether those dispositional states of my mental content were physical, and in this sense the functional account merely adds another veneer to our purported reductionist account, without actually helping to solve the issue.
The bottom line is that facts about the imagined table are not accounted for in any physical description of my brain. Such facts are facts like “there is a spatial arrangement of table parts that I perceive”. Of course, I think such facts help to play a functional role, maybe they do things like “help Alex understand the ways tables work”.
We can search for a physical referent in the brain which also plays the functional role of “helping Alex understand the ways tables work”. But the issue is that the facts of that physical referent are not facts about my mental content, because they are not facts about things like table spatial arrangements. Hence, they aren’t the same thing (unless of course we’re mistaken about the mental facts).
Hence, we are at a crossroads. We can either admit that there is no mental content over and above that spartan physical base, and therefore the purported facts about my mental content (like spatial arrangements of imagined tables) are flat out false. Or, as you say, we can expand our conception of that physical referent to also include the facts of my mental content, leading to either dualism or panpsychism.
But do not be under the illusion that the physicalist approach is not eliminativist. To take that approach is simply to deny the facts about my (and your) mental content. It’s not true that being able to give a functional account of certain parts of our mental content somehow prevents the elimination. It does not.
That means that in the end we are very much “left in the dark”, since I’m not actually seeing a table in my mind’s eye. That’s okay, after all the illusionist has a good story (false belief) for why I think I might be imagining things when I’m actually not. I certainly wouldn’t declare that the illusionist stance is inconsistent with physical reality, but I think it’s important to know exactly what you’re buying into.
Let me quote verbatim from Dennett’s “Quining Qualia”:
Everything real has properties, and since I don’t deny the reality of conscious experience, I grant that conscious experience has properties. I grant moreover that each person’s states of consciousness have properties in virtue of which those states have the experiential content that they do. That is to say, whenever someone experiences something as being one way rather than another, this is true in virtue of some property of something happening in them at the time.
Having read a lot of his work over the years, I am pretty sure the above is still Dennett’s position. Thus I am unclear how it is to be squared with your contention that functionalists (Dennett being an arch-functionalist) and/or illusionists (Dennett is claimed as one by Frankish) deny the reality of your phenomenal experience.
Also, to deny that your phenomenal experience is an experience *of* something is a much stronger claim than a denial of substance dualism. It also denies at least some forms of representationalism (exemplified by, but not restricted to the Cartesian Theatre).
Could you first elaborate on this statement? “To deny that your phenomenal experience is an experience *of* something is a much stronger claim than a denial of substance dualism. It also denies at least some forms of representationalism (exemplified by, but not restricted to the Cartesian Theatre).”
I’m not exactly sure what this means, to say that illusionists are denying that phenomenal experience is an experience *of* something. I can think of two interpretations:
1. Phenomenal experiences are not instantiated by some underlying ontological substrate. Qualia are not things “out there” which we might get acquainted with through our experience.
2. You’re not having phenomenal experiences.
1 seems to me to just be a reiteration of ~substance dualism, and 2 of course is the denial of phenomenal consciousness. It sounds like your stance is some esoteric position in between 1 and 2, but I can’t quite grasp it. I do think that 1 entails the denial of some forms of representationalism as well however.
About Dennett, I would say that I’m not super acquainted with his work, although I’ve read Quining Qualia and see it as compatible with my interpretation (including what you quoted). I suspect that what Dennett calls a conscious experience is simply a non-phenomenal experience, indeed he has been quoted as simply denying phenomenal experience altogether (see this exchange: https://ase.tufts.edu/cogstud/dennett/papers/magic_illusions_zombies.pdf), and of course he has been charged with this accusation by notables like Searle and Strawson.
What his actual position is I cannot say. I don’t find it particularly important though, and I’ll be happy to admit otherwise if proved wrong. A lot of people are confused about Dennett particularly; I’m honestly beginning to believe he might revel in it a bit!
Also, please see my previous responses to Mike to get a clearer conception of what I think phenomenal experience is, maybe that’s more helpful.
My apologies for the considerable delay in responding. Life has been complicated. 🙂
To answer your question… To deny that one’s phenomenal experience is an experience *of* something goes much further than just denying substance dualism. It also denies such an experience as an experience of some internal representation of what is being experienced. Hence my reference to the Cartesian Theatre as an extreme example of what is being denied. But it does not deny the reality of phenomenal experience — as I see it, that’s a false dichotomy.
When you see an object in your mind’s eye, what is denied is that that object is somehow represented by your brain for you to observe. There is no such internal duality of the observer and a representation being observed. As I already mentioned, the general idea is that your visual centres are activated in generally the same way as they would be if you were actually seeing the object. And that *is* your experience — whether you are seeing the object or just imagining it. Of course, one can conceptualise this as a representation, but only if one is prepared to say that the experience itself is the representation.
I have re-read your exchanges with Mike S, and it would seem that this is indeed your sticking point. Your imagining of an object feels and looks so much like you actually seeing that object, that you find profoundly counter-intuitive (to put it politely :-)) the notion of there being no separate representation being observed by you. I think that’s where my aphantasia is relevant. I can imagine what it is like for you to imagine an object (not much different from seeing it), but this does not lead me to posit any kind of representation being involved.
In the extreme, the insistence on an internal representation can lead to silly arguments like the one made by Raymond Tallis in “Neuromania”: if one’s consciousness of a “redness” (or “greenness”, or “yellowness”…) was a brain state, then neurons themselves would have to be red (or green or yellow…). Now, I do not for a moment suggest that you are holding with such silliness, but I have heard a professional philosopher in Oxford quite seriously echoing this mind-boggling argument.
And yes, Dennett does deny representational “second transduction” as he calls it — the first being from images on the retina to trains of nerve firings, while the second would involve reconstruction of the image from those trains. But when he says that he accepts the reality of *all* our phenomenal experiences, he means just that. The confusion as to whether or not he denies some or all phenomenal experience only arises if one cannot imagine how it might work in the absence of the “second transduction”.
Hey Mike A,
No worries for the late reply, and I hope things are going okay for you (if not better).
I suspect we have a different definition of what substance dualism is, because my interpretation of substance dualism is basically analogous to what you are saying about a separate representational state being my experience. If my representational state is not my physical brain state, and if my experiences are such representational states, then as far as I’m concerned, that’s substance dualism. This need not be anything like Cartesian dualism or “souls” or what have you, it can be Penrose’s Orch OR for example. In any case, it doesn’t matter what we call it, as long as we understand each other.
“Your imagining of an object feels and looks so much like you actually seeing that object, that you find profoundly counter-intuitive (to put it politely :-)) the notion of there being no separate representation being observed by you.”
This is not what I am actually saying. I am saying that I think phenomenal experiences exist, and that furthermore the strong illusionists (and many weak illusionists) are rejecting the proposition that there are phenomenal experiences, in addition to any claims concerning separate representationalism.
I take the proposition that
P: (phenomenal experiences exist), to be asserting that there are true facts about my experiences, like facts about table spatial arrangement in my mental model.
Notice that denying that our experiences exist as a separate representation doesn’t actually entail ~P. To achieve this, we need two more premises. These are:
E: Experiences are brain states
B: Physical descriptions of brain states completely exhaust and capture all their properties (including the experiential content).
F: B does not capture or entail phenomenal facts, like facts about table spatial arrangement in my mental model.
I believe that the illusionists, like Frankish, accept E and B, and basically everyone (yourself included) accepts F. Denying separate representation (or “second transduction”) is merely about getting us to E.
For the record, I do accept E, and thus deny separate representationalism. I also reject B, and think we need to be open to accounts like panpsychism or strong emergent property dualism to reconcile the facts of P with the truth of E.
“In the extreme, the insistence on an internal representation can lead to silly arguments like the one made by Raymond Tallis in “Neuromania”: if one’s consciousness of a “redness” (or “greenness”, or “yellowness”…) was a brain state, then neurons themselves would have to be red (or green or yellow…).”
Unfortunately, I have no idea who this person is, but I will say that if you accept E & P & F, then it straightforwardly follows that ~B. This means that descriptions of neural states have to include descriptions of phenomenal facts. Under panpsychism, this might mean that neurons (or other micro-brain features), experience phenomenal qualities like redness. This doesn’t of course mean that a neuron is physically red (that would indeed be absurd), because phenomenal red is not physical red.
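The inference from E, P, and F to ~B can be spelled out schematically. (The predicate shorthand here is mine, introduced just to make the logic explicit: Phen(f) for “f is a phenomenal fact”, Brain(f) for “f is a fact about brain states”, and Capt(f) for “f is captured by a physical description of brain states”.)

```latex
\begin{align*}
E &: \forall f\,\bigl(\mathrm{Phen}(f) \rightarrow \mathrm{Brain}(f)\bigr) \\
B &: \forall f\,\bigl(\mathrm{Brain}(f) \rightarrow \mathrm{Capt}(f)\bigr) \\
F &: \exists f\,\bigl(\mathrm{Phen}(f) \wedge \neg\,\mathrm{Capt}(f)\bigr)
    \quad \text{(note that } F \text{ presupposes } P\text{: such facts exist)} \\
E, B &\;\vdash\; \forall f\,\bigl(\mathrm{Phen}(f) \rightarrow \mathrm{Capt}(f)\bigr),
    \text{ which contradicts } F\text{; hence } (E \wedge P \wedge F) \vdash \neg B.
\end{align*}
```

So anyone who accepts that there are phenomenal facts (P), that experiences are brain states (E), and that physical descriptions leave those facts out (F), must give up B.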
In any case, all I’m asserting, and have been asserting so far, is the truth of P. I still think it’s pretty clear that most illusionists like Frankish deny the truth of P. My question is, do you deny P? And if not, what part of the above reasoning do you reject?
“Substance dualism” is generally understood as meaning that minds and bodies have different ontological substrates. Any other use of it needs to be suitably qualified.
You ask me whether I believe your proposition P:
“P: (phenomenal experiences exist), to be asserting that there are true facts about my experiences, like facts about table spatial arrangement in my mental model.”
That depends on what you mean by “true facts”. Do I believe that you truly have such experiences? Absolutely so! Do I believe that you have them in virtue of perceiving some internal representations constructed in your mind? No, I don’t.
Take that old philosophical chestnut: if I believe I am in pain then I am in pain — I cannot be wrong about the fact. So, is pain an illusion? Well, yes and no. If C-fibres are not firing, yet I am in pain (e.g. from a phantom limb), then one could say it is an illusion. OTOH in so far as I really am in pain, C-fibres or no C-fibres, that pain is no illusion. To demand a yes/no answer is to pose a false dichotomy.
@ Mike Arnautov:
“”Substance dualism” is generally understood as meaning that minds and bodies have different ontological substrates. Any other use of it needs to be suitably qualified.”
If the separate mental representations are conceived of as belonging to my mind, then surely it follows, according to this viewpoint, that mind and brain are ontologically separate, no?
“That depends on what you mean by “true facts”. Do I believe that you truly have such experiences? Absolutely so! Do I believe that you have them in virtue of perceiving some internal representations constructed in your mind? No, I don’t”
Believing in P doesn’t entail belief about separate internal representations, or Dennett’s second transduction. Also, as I showed, if you hold to E & B & F, then this entails ~P. So, physicalism seems to be incompatible with the truth of P. Allow me to then reiterate my question; what part of the previous reasoning do you reject? If you accept that I am having phenomenal experiences, you can’t hold to the conjunction of E & B & F. Are you denying F?
Substance dualism proclaims mind’s ontological substrate to be non-physical.
See e.g. https://en.wikipedia.org/wiki/Mind-body_dualism#Substance_or_Cartesian_dualism: “Substance dualism, or Cartesian dualism, most famously defended by René Descartes, argues that there are two kinds of foundation: mental and physical. This philosophy states that the mental can exist outside of the body, and the body cannot think.”
Similarly https://plato.stanford.edu/entries/dualism/#SubDua characterises substance dualism thusly: “So the mind is not just a collection of thoughts, but is that which thinks, an immaterial substance over and above its immaterial states.”
To make it three: https://www.britannica.com/topic/substance-dualism “That version, now often called substance dualism, implies that mind and body not only differ in meaning but refer to different kinds of entities. Thus, a mind-body (substance) dualist would oppose any theory that identifies mind with the brain, conceived as a physical mechanism.”
And no, I cannot answer yes/no to your E, B and F either. I hold with non-reductive physicalists asserting predicate (not substance or property!) dualism — Davidson in particular. Hence for example, for me your E cannot be answered because it fails to specify what type of description is meant. Under physical description, E is true. Under mental description E makes no sense because mental description does not feature physical brain states amongst its ontological commitments. (Note the distinction between ontology as such and ontological commitments of a discourse — to quote Quine again: “what there is, does not in general depend on the language we speak, but what we say there is does”.) Furthermore, B and F “sin” by mixing physicalist and mentalist discourses with the result of achieving “non-sense” (not the same as nonsense!), thus not permitting a yes/no answer.
If this terminology of “discourses” is not familiar, think of them as points of view: mentalist discourse is the 1st person point of view, while physicalist discourse is the 3rd person point of view. Mixing them is the source of much confusion in philosophy, such as the whole “free will” controversy, for example. Or (my favourite :-)) the argument contra physicalism repeated e.g. by Raymond Tallis in his “Neuromania” that if my consciousness of a “redness” was a brain state, then the neurons themselves would have to be red.
Very interesting points Mike! Davidson’s anomalous monism stance sounds a great deal like Searle’s biological naturalism to me, with which I am somewhat more familiar. I would say that I am sympathetic to criticisms that Searle and Davidson’s viewpoints collapse into property dualism.
I do disagree that B and F lack meaning under a two-levels of description semantic account. If we accept the aforementioned account, then B can be rephrased as asserting that brains just have physical and no mental descriptions. In other words, B is the denial of there being two levels of descriptions of the brain/mind. F, by contrast, merely notes that B leaves out the mental level of description.
B and F are meta-level talk, they’re not just talk about physical or mental events, but also talk about what is physical or mental etc… I think it’s pretty clear that meta-level talk isn’t meaningless, if that were true then it would be impossible to even assert Davidson’s two-level description stance in the first place!
Thus, we can rephrase the debate to be about whether the facts in the mental level of descriptions are true. Presumably, we both believe that there is one reality and that there are true facts about that reality. Additionally, when we speak of true physical or mental events (to adopt Davidson’s terminology), we utilize something like a correspondence account of truth. I think (and I presume you do too) that both mental and physical events take place in the world (the one world we inhabit), so while it might be appropriate to adopt a coherentist theory of truth when speaking in some other context like with logical or moral facts, such contexts do not apply here.
Having established the above, it is then meaningful to ask whether mental events even take place. It seems like under a physicalist programme you have to say no. Or at least, if you subscribe to the viewpoint that physical facts fully exhaust the world (the one captured under our correspondence theory of truth), then you would have to say no. So, I don’t see how non-reductive physicalism can work, unless you deny the correspondence theory of truth and think that talk about mental events is talk about some other kind of stuff.
You might also try adopting an indexical stance of mental talk. Maybe talk about mental events is like talk about indexical facts (“Robert is really tall”). Such facts are relative to your point of view, but can still be true or false depending on context (e.g. If I am super tall, then Robert seems short). However, I don’t see how this can really work in our case. The point is that indexicals still refer to fixed propositions, they just leave out the relevant tags that pick out the proposition(s) in question, which have to be filled in by context. As such, indexicals refer to different propositions depending on the particular context, but the facts of the matter and truth of the propositions don’t change by context.
If we were to insist on drawing this analogy with mental-physical events, then we would have to explain mental-physical fact divergence as being due to the respective facts in question referring to different propositions (because 1st and 3rd person context changes the reference). Unfortunately, the problem is that under physicalism, physical facts should fully exhaust all the propositions about the world. Once you admit that there are true mental propositions about the world which can only be captured in a certain context (the 1st person one), then you’ve admitted that physical facts don’t fully exhaust our reality.
I do not see Searle and Davidson saying even approximately the same thing. Searle is no predicate dualist and has never been classified as such. OTOH Davidson’s “Mental Events” outlining Anomalous Monism is the ur-text of predicate dualism and has no trace of Searle’s biological exceptionalism.
As far as I am concerned, arguments that predicate dualism collapses into property dualism are flawed in that they mix mentalist and physicalist discourses, thus missing the whole point of Davidson’s approach.
Re your B and F… My apologies, I should have been more explicit. “Physical descriptions of brain states completely exhaust and capture all their properties”? — yes; “(including the experiential content)”? — no. Experiential content belongs to the mentalist discourse (there is no “experience” in physics), but the question was posed in terms of the physicalist discourse, asking about physical descriptions of brain states. Hence my comment that it mixes the two discourses. And because I reject the premise of the question, that is, the mixing, I decline to answer it as posed. Do tell: have you stopped beating your wife — yes or no? 🙂
Your F refers to B and thus inherits from it the same problem.
Now, my turn to ask a non-rhetorical question for me (the above one does not count!)… Do you think pain in a phantom limb is an illusion or not?
I certainly agree that Searle is no predicate dualist. I merely pointed out that there are similarities. Both Searle and Davidson are non-reductive physicalists, with Searle claiming that there is no ontological reduction, and Davidson denying strict psycho-physical lawlike relations. But Searle’s notion of ontology is very weak (not substance), and really, he mostly means it in a descriptive sense (descriptions of the physical won’t capture the mental). This is also why Searle adopts a two-levels discourse, again very similar to Davidson.
“As far as I am concerned, arguments that predicate dualism collapses into property dualism are flawed in that they mix mentalist and physicalist discourses, thus missing the whole point of Davidson’s approach.”
Mike, the only difference between your two-level talk and my language is that you have artificially restricted the scope of your quantifiers, so that (for example) physical propositions only range over physical facts. Whereas in my language, every proposition quantifies over every possible fact. As I already mentioned, unrestricted quantification can’t be meaningless, because then we wouldn’t even be able to assert that the two-levels description talk was meaningful. To even begin to justify the two levels of description approach, you have to first speak in my meta-language, so clearly, it’s coherent to do so. Thus, your refusal to engage with my propositions is just a matter of preference; you haven’t actually demonstrated what matters, that such propositional talk is “non-sense” as you claim.
At best, the two-level description talk is a useful accounting of ordinary language propositions. It might be true that ordinary language sentences really do use two levels of description in mental/physical discourse, and that would be a great discovery if it did. But it certainly wouldn’t invalidate anything I say.
So going back to B and F, they should be read as unrestricted in scope. You would therefore believe that B is false, because experiential content (belonging to mentalist discourse) is not covered, contrary to the claim.
Now to answer your question: “Do you think pain in a phantom limb is an illusion or not?”
No, it’s not an illusion in either the phenomenal or functional sense. The phantom limb sufferer still experiences pain in the functional sense, in that they retain the appropriate brain states with intact dispositions. I think everyone (including the physicalists) would agree that their pain is real.
But the real question at the heart of this matter is whether physicalism is true. You view physicalism as a very weak claim that doesn’t quantify over all of reality, but just over the propositions of the physicalist discourse, while I don’t. And I think my view is much more in line with theoretical physics. Physics is supposed to be an ultimate accounting of all reality, not just a particular domain of reality. In the end, you would still have to agree with me that physicalism can’t account for all of reality (that which is talked about in mentalist discourse), and as far as I’m concerned, that’s all that matters. But I think that would be news to most physicalists!
Phantom limb pain is indeed real. Yet the limb is phantom. How can that be? The answer is obvious: what is being experienced is not actually in the limb, be it phantom or otherwise. I.e. the “representation” has to be moved from the limb into the brain. This is uncontroversial.
Illusionists simply take the next step along the same route and say the representation being experienced is not actually separate from the experience. The experience *is* the representation. There is no phenomenal experience separate from the functional one. This does not deny phenomenal experience, as you seem to think. It merely asserts that the phenomenal experience is identical with the functional one. Hence Frankish saying (as quoted by Mike S) that there are no non-functional qualia.
Interestingly, that is the same move as Davidson makes with his Anomalous Monism, just expressed in different terms, which is why his position is radically different from that of Searle’s.
Am I, following Davidson, being arbitrary in refusing to mix the vocabularies of mentalist and physicalist discourses (a.k.a. phenomenal and functional ones)? I’d say the boot is on the other foot. The common assumption that physicalism entails the two discourses being fully inter-translatable is, on closer inspection, based on no evidence at all. In fact it is that very assumption that leads straight into philosophical quagmires such as the notorious issue of free will. So why persist with it?
I don’t agree with your claim that your refusal to mix “mentalist and physicalist discourse” is following Davidson. On the contrary, Davidson’s essay “Mental Events” (I’ve been reading it as chapter 20) is full of language that engages in such mixing (see here: https://divinecuration.github.io/assets/pdf/davidson-mental-events.pdf).
Indeed, it would have to be, since anomalous monism is the claim that mental events supervene on physical events and that there is no nomological connection between the two. In other words, as I’ve previously stated, one can’t even assert Davidson’s stance without engaging in such a mixing of language.
Nor does Davidson even claim that it’s impossible/meaningless to give a complete physical description of some mental fact, as he admits that such a coextensive statement might exist (p.142 above). Rather, he merely asserts that such a statement would be both impractical and epistemically unknowable.
In any case, I never employed such language in my previous response. I was merely talking about the possibility of such a description existing; I never attempted to actually describe it. Davidson’s point is just that we shouldn’t be in the business of trying to describe mental events in physical terms, and not that we can’t talk about the possibility of doing so (which my argument was about).
Furthermore, Davidson’s view is totally tangential to mine. My argument is not about the nomological relationship of the mental and physical, but rather the ontological one. I’m saying that physical descriptions don’t capture mental facts, whereas Davidson’s argument seems to be that mental descriptions are physical but that it is wiser (though not necessary) to employ physical vocabulary if we want to actually flesh out such descriptions (a task I never even attempted).
Notice that my argument is not quite the same as the denial that the experience is the representation (a point I made earlier, so I’m not sure why you keep repeating the assertion?). To get there, we would have to accept the additional premise that my experience is identical to my physical brain state (which I deny). I accept that my experience is happening in my brain, but I just think that a physical description of my brain is incomplete.
So, to sum up, you never actually engaged with my argument, citing Davidson as your reason why. But this seems to me to be a misunderstanding both of what I am saying (an ontological claim) and of what Davidson is saying (a nomological claim). Also, I view Davidson’s two-level position as more of a recommendation. He never actually asserts that those who mix physicalist and mentalist nomological facts are engaged in meaningless discourse, just that it would be wise not to do so to avoid “the tedium of a lengthy and uninstructive alternation” (p.142).
Firstly, may I suggest re-reading “Mental Events” while keeping in mind the distinction between “a discourse” and “events under a discourse”? Davidson’s conclusion is that to satisfy his premises, an event under mental description must have a physical description, and that this does not contradict the premise of anomalousness of minds, i.e. of there being no nomological law linking mental and physical discourses. This is possible because events are singular (tokens) while discourses are of necessity dealing with generalisations (types) and token identity does not entail type identity.
The upshot is that physicalism does not entail any law-like translation between mental and physical discourses. It does not rule it out either, of course — unless one assumes the anomalousness of minds to be a fact, as Davidson does. But even without that assumption, we do not have any reason to assume that such translation is possible. I.e. a physicalist must agree that each particular instance of pain can be tracked down to a particular dynamic brain state, but the same does not apply to “pain” in general.
This is not as surprising as it might seem. We can see the same situation in Conway’s Game of Life, where each individual “spaceship” can be seen as resulting from the application of the Game’s rules, but the same is not true for the general concept of a “spaceship”, because the Game of Life is Turing complete.
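For what it’s worth, the token-level half of that claim is easy to demonstrate in code. The sketch below is my own illustration (not anything from Davidson): it steps a particular glider token forward under nothing but the Game’s update rule, and the rule nowhere mentions “glider” or “spaceship” as a type.

```python
# A minimal Game of Life step function: each particular "spaceship" token
# evolves by nothing but this rule, yet the rule never refers to the
# general type "spaceship".
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# One particular glider token.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# After 4 generations this glider reappears translated by (1, 1):
state = glider
for _ in range(4):
    state = step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
assert state == shifted
```

The asymmetry is that while this particular token’s trajectory follows mechanically from the rule, enumerating everything that counts as a “spaceship” in general is not similarly mechanical.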
Secondly, the reason I expanded on the issue of experience being identical to the representation is because it looked like I wasn’t getting the point across — exactly in light of your acceptance that the two might indeed be identical. As I tried to explain the last time, that’s exactly what illusionists (according to Frankish) are saying in asserting that only non-functional qualia are denied. If you agree that representation and experience might be the same thing, this directly contradicts your complaint that illusionists deny your experience.
Hello again Mike,
Thanks for the extensive write-up on Davidson. I agree that a mental type isn’t extensionally definable as a set of physical descriptors, but that seems to me to be more an issue attributable to the vagueness of our (folk psychological) language, rather than a deep metaphysical incompatibility between the mental and physical. I think a similar problem (vagueness) arises when we attempt to reduce the special sciences to physics, but that’s no reason to reject reductionism of the sciences. I also think that Davidson was writing at a time when reductionism was perceived as being about the discovery of strict nomological “bridge” laws. Nowadays, however, that’s largely gone out of style (as I understand it).
In any case, to reiterate what I said earlier, I’m making an ontological claim, not a nomological claim. I’m saying that no description of physical token(s) exists that can capture a description of some mental token (this can be an “event” in Davidson’s ontology). Take a basic mental event, like my perception of an apple, and describe the mental facts about that individual perception/token. Now give a complete (physical) ontological description of my brain structure. The latter would nonetheless leave out certain facts about the mental event in question, like spatial relations among my sense data (e.g. the apple appearing to be on top of the table), or the colour of the apple, etc.
It might be objected that the “mental” facts are actually about the real-life physical apple and table being perceived, but this seems clearly wrong. There is a distinction between the sense data and the physical object, and it is easily discerned when we switch back to the example of visualizations (or dreams for that matter), which is why I preferred them over perceptions in the first place. This issue, as I see it, applies to all mental events, including your example of pain tokens. There are facts about that pain token, like it being a “sharp or biting pain” which are not describable in physical terms. However, for the time being, we can stick to the visualization examples, as it’s easier to see the descriptive/ontological differences at play.
“As I tried to explain the last time, that’s exactly what illusionists (according to Frankish) are saying in asserting that only non-functional qualia are denied. If you agree that representation and experience might be the same thing, this directly contradicts your complaint that illusionists deny your experience.”
Agreeing that representation=experience does not entail that there are no non-functional qualia, or that I don’t have access to classic qualia. For example, if panpsychism is true, my representations of my mental states are my brain experiences, but my brain experiences instantiate classic qualia (they are not fully describable in traditional physical terms). Since the illusionists are saying that non-functional qualia don’t exist, they are doing more than just asserting that representation=experience. My previous replies delve into this topic in more detail.
a) Davidson makes no ontological claims beyond straightforward monism, which he shows to be consistent with “anomalousness of minds”. Are you perhaps using “ontological” in some non-standard way?
As it happens, the claim you make (and reiterate) is in fact exactly the one that Davidson makes: the two discourses are not inter-translatable, so the physicalist discourse cannot be used to express mentalist experiences. There is no ontological mystery involved in this. I already gave you an example of a perfectly deterministic system (Conway’s Game of Life) in which the “physicalist discourse” of the rules of GoL cannot capture such a simple higher-level GoL concept as “a spaceship”.
b) Your panpsychism argument does not work unless you are invoking the kind of panpsychism which posits the existence of phenomenal qualia separately from embodied minds. If that is what you mean, then your beef is not with illusionism but with functionalism as such.
I think we’ve meandered off the path of our original argument (not helped by the lengthy time frame of our conversation, I’m sure). Allow me to summarize so as to bring us back on track:
I began by charging the strong illusionists with denying basic mental facts, and I argued that illusionism is incompatible with the existence of an inner mental life. You replied with the claim that any attempt to reduce the mental to the physical is incoherent at best, a consequence of the irreconcilability of the semantics of physicalist and mentalist discourse. Thus, it would be improper of me to demand some sort of reductionist account on behalf of the illusionists. The lack of such an account only signifies a semantic problem.
Firstly, I don’t subscribe to such a semantic account, nor do the illusionists I have in mind (like Frankish) who are a priori physicalists, meaning they don’t believe in an epistemic gap between mental and physical concepts (we are able to understand/define one in terms of the other). That’s important because I’m just criticizing the illusionist claims. Secondly, even if I did believe in what you say (semantic dualism), that’s totally inconsequential to my actual argument. For it turns out that your belief that there exist two levels of incompatible discourses, justified by the works of Davidson, is concerned with nomological translation and not ontological translation. In Davidson’s ontology, mental events are fully translatable as physical events. By contrast, I’m making an ontological claim, and so I’m denying that such a translation exists.
[A separate issue entirely, which we appeared to briefly have gotten mixed up in (but which is irrelevant to the main conversation at hand), is whether Davidson has any justification to even invoke a semantic incompatibility at the nomological level. I won’t repeat my argument here, so as to avoid getting further sidetracked.]
For reasons 1 and 2, your reply about the existence of two levels of discourse hasn’t refuted anything that I’m actually saying. It’s simply a tedious red herring which has sidetracked our conversation and prevented you from replying to my actual points.
I don’t have time to reiterate my main argument again, I already hashed it out earlier, as well as in extensive conversations with Mike Smith on this blog. Briefly, I’m saying that mental ontologies (e.g. mental things/events/particular states) don’t seem to be describable in terms of physical ontologies.
About B, remember that the (a priori) illusionists are also claiming that:
X: Classic Qualia don’t exist
But Y: (Experience is the representation) doesn’t entail X. Therefore, your claim that the illusionists are only saying Y is false. Why doesn’t Y entail X? Because Y could be true and X false, if for example panpsychism is true. In that case, our mental representations are our experiences, but our experiences are intrinsic states (they are classic qualia).
When I wrote, “I agree that a mental type isn’t extensionally definable as a set of physical descriptors, but that seems to me to be more an issue attributable to the vagueness of our (folk psychological) language, rather than a deep metaphysical incompatibility between the mental and physical.”
I meant that for the physicalist, the problem of reducing mental-physical types can be solely attributed to vagueness. Not sure how Davidson justifies there existing a deep metaphysical incompatibility between the mental and physical. I don’t see a significant difference between the reduction of the special sciences and the mental-physical for the physicalist.
Groan… When I am tired, my “fonetic eer” (sic.) tends to take over. For “Your F refers to be” please read “Your F refers to B”
“because the phenomenal realist denies that there exists any such descriptors ”
I meant that the illusionist denies this!
If I might jump in, while I’m not sure exactly what you mean by substrate independence, I would say that it sounds like you’re talking about a metaphysical type. It’s true that a functional type is not identical to a token, but that doesn’t make it a metaphysical object which is not to be identified with its physical objects/particulars. When we talk about functional ‘types’, we usually just mean to refer to the class of physical objects which instantiate that type. But the type doesn’t exist ‘over and above’ the set of tokens (e.g. brains). So speaking of functionality is simply meant to be a useful shorthand for the large class of physical objects which have certain physical characteristics. It just so happens, under functionalism, that this class of token objects is identical to the class of conscious objects. So consciousness just ends up referring to a large class of physical characteristics (those which are characterized by the functional type).
Of course, you can be a non-physicalist functionalist, where a functional type goes over and above the class of physical tokens, but a physicalist must adopt a nominalist and non-metaphysical stance regarding these questions.
Yes of course you can jump in. So you’re not exactly sure what I mean by functionalists proposing substrate independent consciousness? That’s understandable. Mike and I have been talking about this for several years so clearly he’s able to grasp my meaning, even if inconvenient. I’ll try to speak a bit more plainly and then see if I can fit this back into the abstractions that you’ve presented.
There was a tremendous discovery in neuroscience long ago that neuron firing essentially reduces back to “and”, “or”, and “not” gates. I think I was clued in to this in a video from this post of Mike’s. https://www.google.com/amp/s/selfawarepatterns.com/2017/04/15/steven-pinker-from-neurons-to-consciousness/amp/
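To make that gate claim concrete, here is a toy sketch of my own (not from the linked post or video): a McCulloch–Pitts-style threshold unit, where suitable weights and thresholds reproduce the basic logic gates.

```python
# A McCulloch–Pitts-style threshold neuron: it "fires" (returns 1) when
# the weighted sum of its inputs reaches the threshold, and is silent
# (returns 0) otherwise.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: neuron((a, b), (1, 1), 2)   # fires only if both inputs fire
OR  = lambda a, b: neuron((a, b), (1, 1), 1)   # fires if either input fires
NOT = lambda a:    neuron((a,),   (-1,),  0)   # inhibitory input suppresses firing

# Gates compose, so in principle any Boolean circuit can be built this way:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

This is of course a drastic idealization of real neurons, but it shows the sense in which firing thresholds can implement “and”, “or”, and “not”.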
Anyway scientists then realized that the brain functions essentially as a computer functions — it accepts input information, processes it algorithmically, and then provides output information that goes on to animate the body in all sorts of ways. So far, so good.
The issue however is that many went further to decide that consciousness itself (and I always mean the innocent kind that even Frankish accepts) must exist as nothing more than computer code that the brain runs. That’s where I think things get spooky. Like the rest of the body, I believe that there must be some sort of physics which the brain animates to create the phenomenal experience by which each of us perceives our existence. Conversely, they believe that it’s just input code converted into output code. My favored theory is falsifiable because it’s possible to check whether or not consciousness correlates with certain brain-based electromagnetic fields. Their theory is not falsifiable because they do not postulate any sort of physics, which one could at least conceivably check for, that your brain animates to produce “you”. That’s what I mean by substrate independent consciousness — it exists by means of code converted to other code rather than code that animates some sort of consciousness substrate. On the positive side this lets people imagine that someday a human consciousness could be uploaded to a vast but normal computer for the person to live in a virtual world. On the negative side this conception of consciousness has all sorts of ridiculous implications, such as my thumb pain thought experiment.
As far as your abstractions go, yes, I’m talking about metaphysical types where tokens exist which the computational brain animates, such as an electromagnetic field. Furthermore, I’m worried that in all of the confusion some who use the functionalist shorthand have accidentally forgotten that code which animates nothing shouldn’t effectively do anything. This is to say that it should not create the experience that you’re currently having. A shortcut seems to exist here that needs to be removed to help right the field, specifically the denial of a consciousness substrate for brain information to potentially animate.
“Like the rest of the body I believe that there must be some sort of physics which the brain animates to create the phenomenal experience by which each of us perceive our existence. Conversely they believe that it’s just input code converted into output code.”
I’m not sure how to interpret this, are you saying that computer code is just an empty abstraction which on its own is non-physical? If so, then a functionalist will reject this (that a functional type is just computer code in the above sense). A functional type is not metaphysical; it instead refers to a set of physical states. So, saying that they believe “it’s just input code converted into output code” is not mutually exclusive to the belief that functional types follow physics.
On the other hand, if you’re saying that a functional type is not a natural kind, meaning there is no physical law which correlates neatly to the phenomena of consciousness, then yes that’s true but that is no strike against functionalism (as I’ll explain below).
“My favored theory is falsifiable because it’s possible to check whether or not consciousness correlates with certain brain based electromagnetic fields. Their theory is not falsifiable because they do not postulate any sort of physics to even conceivably check for your brain to animate to produce “you”.”
Under functionalism, functional states just are conscious states, there is no test that needs to be done, because “functional states = conscious states” is a tautology. Of course, if we found convincing evidence for some phenomenal experiences, then maybe the functionalists would want to update their definition of consciousness. But it wouldn’t, strictly speaking, falsify it. This is why I also brought up the point about behaviorism, because according to a behaviorist, “behavioral states = consciousness” is a necessary truth, and so Mike’s point that this leaves out our brain states and intention, while true, is insufficient on its own to refute the behaviorist credo.
Regarding natural kinds, the charge that functionalism is not a natural kind (there is no specific physical phenomenon which instantiates consciousness) falls flat, I think. According to functionalists, talk of “consciousness” is merely a useful abstraction that we’ve adopted in our folk psychological terminology, but there is no real conscious thing that goes over and above our brain states. Indeed, we could eliminate all talk of consciousness (and functional states) and just talk about the physical brain states, and nothing would be missed. Thus, it’s not surprising that functional consciousness is not a natural kind, in the same way that “human body” is not a natural kind. There is no specific rule of physics which is required to construct a human body for example, because human beings don’t go over and above their atomic constituents (and so the ordinary physical rules that govern the atoms are sufficient). No need to invoke special physics.
It seems like you want to have it both ways; to accept the functionalist thesis but to still assert that there is some need to account for “special” phenomenal experiences. But there are no such experiences under functionalism.
I’d like to go through your comment point by point, though I think it would lead us down all sorts of dead ends. The only way to effectively assess what I’m saying would be to first understand what I’m saying. Try this:
Your computer accepts input information and processes it to produce output information that, for example, animates the function of your screen. That’s my conception of how computers work in general — input, process, output. If your computer screen were disconnected then your computer should still keep doing all of its same code based operations that are meant to animate your screen, though without actually doing so given the disconnection.
Now consider your brain essentially like your computer and your consciousness essentially like your computer screen. From this perspective your brain can’t accept input information and process it into other information to create your consciousness, but rather (as in the case of your computer screen) it must animate some sort of worldly mechanisms. That’s where neuron-produced electromagnetic radiation enters the picture as a consciousness substrate for the brain to potentially animate. Just as output from your computer operates your screen, output from your brain theoretically operates brain-based electromagnetic fields that exist as your consciousness itself. It’s as simple as that.
Mike’s brand of functionalism is diametrically opposed to this position since from here there can be no consciousness substrate. Instead it’s presumed that a brain receives input signals and processes them into output information that itself exists as consciousness. I’d say that he’s missing something, namely a consciousness actualization mechanism for the brain to animate.
I hope this helps clarify my position, because I’d very much like you to assess it.
I hope I’m understanding your position correctly. It seems like it’s analogous to my first interpretation, where consciousness is identical to some metaphysical code, which is ‘outputted’ or caused by the physical processes of the brain. But again, I would say that this is not an accurate description of functionalism.
To speak of a particular functional state is (in our case) just to refer to a particular physical brain state. So, saying that something is missing from the physicalist/functionalist picture, on account of the fact that “it’s presumed that a brain receives input signals and processes them into output information that itself exists as consciousness”, is to already deny functionalism, since your quote assumes that consciousness is separate from the physical brain system. Our consciousness is not some metaphysical state instantiated by the brain; it IS the brain system (or a part of it).
One way to think of the physicalist/functionalist picture is like this: There’s only one type of existing thing, physical stuff. “You” refers to the physical system which is your human body (or a part of it, like your brain), and nothing else. In everyday life we use folk psychological terms, like “consciousness”, but we shouldn’t think of consciousness as some ontological entity that needs to be accounted for in our picture of the world. Rather, what we call consciousness is just a particular physical state, those physical states which happen to fulfill an important function.
It just so happened, for evolutionary reasons, that it was useful to develop a vocabulary focused on these things (hence the invention of the term ‘consciousness’). It’s also why our terminology of consciousness doesn’t map neatly to some specific physical phenomena (because the vocabulary wasn’t designed with that in mind).
The phenomenal realist would say that this ontological picture of the world is woefully deficient, because it leaves out the phenomenality of certain physical states (like our brain states). This phenomenality is real and needs to be accounted for. Therefore, we should be searching for some objective physical phenomena (like EM fields) which might instantiate these phenomenal experiences. In the functionalist picture, there is no need to do this because “consciousness” is just a made-up term, which doesn’t track any natural kind or physical phenomena, but rather maps a subjective phenomenon (whatever we consider to be functional).
I can just come up with some other made-up term, “hamburgerness”, which is supposed to track the quality of something being like a hamburger. We can speak of this quality being “real”, insofar as some physical states come close to matching this subjective property, but we shouldn’t confuse ourselves into thinking that the “hamburgerness” of a system is some natural kind, or that a description of a perfect hamburger is missing something because a complete physical description of my hamburger is still missing a “hamburgerness actualization mechanism”.
The only reason we commonly talk about consciousness as opposed to hamburgerness (according to the functionalists) is because talk of consciousness is much more useful.
It would seem that you essentially do understand my position, Alex. I’m that phenomenal realist who believes that we should be searching for objective physical dynamics (perhaps like EM fields) which might instantiate subjective phenomenal experiences. Excellent! Let me go a bit further, however, into the situation that I perceive here.
You’ve suggested above to Mike that functionalism, like behaviorism, seems true by definition. This has been a concern of mine about functionalism as well, and may have incited Mike to observe that he’d consider it falsified if it could be established that consciousness can only exist by means of a specific kind of substrate. So his position seems to have gone from true by definition, to unfalsifiable except through the validation of a theory that actually is falsifiable. I don’t know if anyone else who calls themselves a functionalist has taken this step as well, but perhaps.
In any case it was this modified conception of functionalism (in correspondence with my provided conception of how computers work) which may be interpreted such that “consciousness is identical to some metaphysical code, which is ‘outputted’ or caused by the physical processes of the brain”. (This is also my conception of what’s misleadingly known as “computationalism”, or the scenario I presented above where many seem to have effectively taken a shortcut: presuming that because the brain functions computationally, consciousness must essentially exist as metaphysical code rather than certain mechanics operated by code.)
Yes clearly any functionalism which is instead true by definition would not need to stand by this, or metaphysical dynamics of any other kind. And it could be that I’m wrong about how computers work, or even if they do work like that, there are all sorts of other ways to achieve phenomenal function that a falsifiable theory could effectively falsify. Dualism should do it (not that it’s directly falsifiable either, though at least given strong evidence that it’s true).
If you think we’re reasonably square on this, there are many other things that I’d like your thoughts on. This doesn’t need to be now however. Here are some of the positions that I enjoy discussing:
I believe that we need a respected community of professionals to provide scientists with effective principles of metaphysics, epistemology, and axiology. I’ve developed one of the first kind, two of the second kind, and one of the third kind, each of which I like to discuss individually as well. I suspect that the failure of philosophers to provide scientists with any such principles has largely led to the softness of science’s mental and behavioral varieties.
As you know I’m extremely interested in McFadden’s cemi and ways that it might effectively be tested.
I’ve developed a psychology based “dual computers” model of our function where the brain is essentially a non-conscious computer that creates an auxiliary phenomenal computer by which existence is experienced. While electricity powers our computers, and electrochemical dynamics power brain function, an equally real value dynamic powers the phenomenal variety of computer.