The debate between phenomenal realism and illusionism, and the scope of perceptual properties

In the last post, I pondered the idea that the real difference between a realist and anti-realist stance toward a scientific theory is about how broad or narrow the scope of the theory might be, about its domain of applicability. An anti-realist takes a narrower view of scope: the theory can be used to predict current observables, and that’s it. A realist takes a broader view of scope: the theory can be used to make predictions to at least some degree beyond current observables, although there can be different takes on just how far the theory’s implications can be followed.

At the end of that post, I noted this might have implications outside of just straight scientific theories. Consider the debate in consciousness studies between phenomenal realists and illusionists.

Phenomenal properties, aka qualia, are often seen as elements of subjective experience that are intrinsic, ineffable, private, and that we are directly acquainted with. Put another way, they’re seen as fundamental, irreducible, indescribable, unanalyzable, and inaccessible in principle from any third party observation, yet from the first person perspective we have direct and infallible access to them.

These phenomenal properties, if they exist, seem irreconcilable with what we know about the brain, or physics in general. It’s what causes philosophers to talk about the mind-body problem, explanatory gap, or hard problem of consciousness.

The intrinsic, ineffable, private, infallible attributes above were recognized by Daniel Dennett in his 1988 paper: Quining Qualia, in which he makes the case that qualia, as described, do not exist. Since then, many philosophers have backed away from these attributes, arguing that the qualia they have in mind carry no such commitments.

However, as Keith Frankish pointed out in his 2012 paper: Quining Diet Qualia, this move is problematic. If we dispense with the attributes Dennett identified, then what separates qualia from just straight perceptual information, of a kind a machine might have? Using “qualia” or “phenomenal” to refer to this information, which I’ve often done myself on this site, doesn’t have the deep mystery noted above. Getting the mystery back involves reintroducing Dennett’s identified attributes, typically implicitly or under different names.

So if we’re going to talk meaningfully about phenomenal properties, then these attributes seem like a necessary part of the conversation. But here’s the question. Is the existence of these attributes a binary determination? Or could we be talking about a narrower versus broader scope?

Dennett himself in Quining Qualia implies a possible answer toward the end of the paper, when he considers why we think phenomenal properties are intrinsic, ineffable, private, and directly apprehensible. He notes that most of these attributes are practically true.

For example, we don’t have the technology yet to examine thoughts and perceptions from the outside, making them practically private. And, due to the complexity of many perceptions, and the limitations of language and introspection, base perceptions are practically ineffable. We also do have some level of internal access to those practically private internal states, making that access usually less fallible than third party observation.

So maybe the real bone of contention here is what the scope of these perceptual attributes might be. An illusionist will see the scope as narrow, more along the lines of practicality as identified by Dennett. A realist sees that scope as broader.

From the realist perspective, the illusionist is denying the obvious, ignoring the first person data. However, it seems like an illusionist can accept that data, but see the realist as pushing our intuitive model of perception and thought too far, leading to conclusions that there’s something intrinsic, fundamental, and irreducible about them, resulting in the sense of mystery, the hard problem of consciousness.

Looking back on an old post I made about whether qualia exist, I think this is the point I was trying to get at. But this insight, if accurate, doesn’t incline me to start using “phenomenal” and “qualia” in the limited sense again. Their use in the broader sense is just too pervasive, making use of them without careful qualification an invitation to confusion.

Still, it seems like understanding what the real bone of contention is between the camps can clarify many discussions. At least unless I’m missing something. Are there aspects of phenomenal properties I’m overlooking here? Or of either viewpoint?

166 thoughts on “The debate between phenomenal realism and illusionism, and the scope of perceptual properties”

  1. “intrinsic, ineffable, private, infallible”

    I don’t see “intrinsic” as saying anything more than that conscious experience seems different from the external world precisely in the sense that it has certain “intrinsic” qualities. This is something we practically acknowledge at every moment when we treat our thoughts and perceptions as different from the trees outside my window, for example. The world exists, but so does my perception of it, which differs from it.

    “Private” means my perceptions are my own. Yours are your own. We can describe them to each other (hence “ineffable” makes no sense) but can’t directly share them.

    Some people may want to argue there is something “ineffable” about blue or red. But we can describe it – “like the sky” or “like a rose”. Descriptions of anything require a common frame of reference and qualia in that sense are no different.

    “Infallible” – nope. My perceptions are fallible. I mistake things. I sometimes see something that’s not there, usually realizing the mistake later, or I remember things differently from how they actually happened, which I usually correct after consulting with other people.

    Frankish et al may be overthinking this.

    Ordinarily when we speak of illusions we are talking about a phenomenon that appears to be one thing but is actually something else and we have at hand an explanation for why the illusion happens. Without an explanation we can’t really be sure it was an illusion or the real thing. We can explain the mirage of a lake in the desert with light and air temperature. Frankish offers no explanation for how the illusion of consciousness happens. Without an explanation the claim is empty.

    What’s more, the claim is likely wrong in its extreme version. If consciousness is an illusion, why would our perceptions parallel the real world so closely? If I reach down to grab a can of beer from the cooler, why would I have the perception of bending, my hand reaching out, my hand touching the lid of the cooler, feel the weight of the lid as I lift it, feel the cold of the ice surrounding the can of beer, and so on? Each action and the perceptions associated with it track each other. If it were all illusion, why wouldn’t the brain just present the aroma and flavor of IPA while my zombie body does the work of procuring the actual can, opening it, and bringing it to my lips?

    1. “Intrinsic” is one of those problematic words where it’s never entirely clear what is meant by it. It’s often taken to refer to a property that’s part of the essence of something, with no relational aspects, but if we take that seriously, then how would we even know about it? Dennett points out that philosophers struggle to agree on a definition. He largely dismisses it as a useful term.

      My interpretation of it, based on many conversations over the years, usually takes it to refer to something fundamental and irreducible.

      Frankish’s explanation for the illusion usually references the limitations of introspection. And there are various theories exploring those limitations, and their implications, in more detail. One of them is Michael Graziano’s attention schema theory, which I’ve highlighted a few times.

      Remember, illusionists aren’t saying that we aren’t functionally conscious, in the sense of taking in information about ourselves and the environment to solve goals, like getting a beer out the fridge. They’re just saying that the experience of the taste of that beer isn’t something with its own essence beyond that functionality.

      1. The problem with that is that “taking in information about ourselves and the environment to solve goals, like getting a beer out the fridge” is conscious activity. It doesn’t happen if we’re passed out drunk in the living room. Our actions moving from the living room and going to the frig are always mirrored by some level of perception about what is happening. Are you saying the taste of beer is illusion but everything else that led to having the taste is functional, not an illusion, even though all of the activity from leaving the couch to tasting the beer was accompanied and mapped by perceptions? Maybe you think “taste” is too primitive a perception (wine experts and brewers might disagree) so it is an illusion but vision is advanced so it is not an illusion but functional?

          1. Your argument would apply to every aspect of moving from the couch to the frig and getting the beer. It is all accompanied by perceptions that, on your argument, have no reality. So you seem to be saying it is all functional and unreal.

            I think the word “illusion” is completely wrong for what you are describing. It is all a model or simulation because it maps to reality, which is why it is functional. The model or simulation would, first of all, have its own physical existence; secondarily, since it maps to reality like the elements of a map correspond to roads and terrain, it has some degree of fidelity to the underlying reality, unlike a mirage.

          2. If you would excuse the musings of a neophyte on these matters, I’d like you to expand on your comment that “The idea that there’s a reality to taste beyond the functionality is the illusion.” Here’s my question and excuse my awkward approach. If indeed the “taste” of my beer is unreal (merely an illusion) then it would seem to follow that my other human perceptions (smell, touch, hearing and sight) would be subject to the same account. Are you saying that the smell, touch and visual perception of my beer are also unreal, merely illusions?

          3. Neophyte musings, or any honest questions, are always welcome!

            I’m not saying any of these things are unreal. I think they exist as functional mechanisms.

            It’s the idea that they have fundamental aspects beyond the functionality that is being called into question.

            Dennett actually uses the taste of beer as an example. Consider the first taste you ever had of beer. Now, assuming you’re a regular or semi-regular drinker, think about the most recent taste. Did they taste the same? If you’re like most people, that first sip was probably repulsive, and the most recent a lot more pleasant. Beer, like coffee and many other things, is an acquired taste.

            But during the process of acquiring that taste, what’s changing? The chemical composition of the beer stays the same. So it must be our reactions that are changing, the affective associations we have for that particular stimulus to our taste buds. The point being that there’s no fundamental intrinsic taste to beer. There’s only the way our systems react to it when it comes into contact with our receptors. And it changes as we train our systems: something they initially react to as toxic acquires, over time, ever more pleasant associations.

            Of course, there’s a lot of complexity that goes into that taste, which is only represented as something relatively simple introspectively. The mistake is in seeing that as anything more than the limitations of introspection, a mechanism that didn’t evolve to give us accurate insights about the architecture of the mind.

            Hope that helps.

          4. Thanks Mike. I’ll ponder that and perhaps I can fully comprehend what you’re saying. As a minimum, I agree with James—use of the word “illusion” seems erroneous. I think one may logically assume that the “taste” of my beer is caused by my beer and thus is not an illusion. That is, for my beer to have the taste of beer there must be something—something objective—in my glass that has the ability to cause my experience of tasting beer. As far as I’m concerned, that is not an illusory event. A “bent stick” in a pool of water is an illusion; the taste of my beer is not—regardless of the fact that I have developed a liking for the taste of beer.

          5. I’ll add my own interpretation as well Matti. When Frankish and Dennett say “That’s an illusion”, you might better interpret them to mean “That’s a conception of consciousness which demands function beyond worldly causal dynamics, and I don’t think anything functions that way”. So their “illusion” term seems to generally misrepresent what they ultimately mean. I think Mike has acknowledged this as a reason that he agrees with the spirit of the illusionism position itself, though not its terminology. And given the amazing popularity of Frankish and Dennett, I can’t say that they’ve failed here at all. Misrepresentation can be a very good way for a person to become more popular, and indeed, their popularity effectively constitutes the success of their careers right now. As science straightens things out here however, I suspect that their “information only” claim regarding actual phenomenal experience, will become understood as a causal dead-end.

          6. Thanks Phil Eric. I shall not comment on Dennett or Frankish; I’ve read none of their works and, from what I know, I’m not likely to do so. I suspect Mike and perhaps Dennett and Frankish are saying that the “taste” of my beer does not have an objective reality beyond its functionality. What that means exactly is unclear to me—functionality being a slippery concept that I’ve struggled with elsewhere. And Mike’s use of that concept is what I’m struggling to grasp here. I may agree in one sense and I may seriously disagree in another. If Mike and perhaps Dennett and Frankish really mean that the taste of my beer is not real outside my subjective taste experience, I have grounds to disagree. As I said above, for my beer to have the taste of beer there is something—something quite objective—in my drinking glass that causes my experience of tasting beer. Hint: it’s the beer. That is, I taste the beer. The object of my taste experience is the beer, not some functional experience of the tasting of beer. I think this is not a trivial point. In brief, I do not taste the functional experience. That is, there is a causal relation between the taste of my beer and certain objective qualities of my beer that cause that taste experience. Thus, I experience the taste of the beer—just as I see the beer in my glass and smell the hops and feel the mug in my hand. And that is because the beer causes all those sensory experiences. I think to separate out the sensory experience as its own object of experience, if that is what is going on, is at best a confused muddle. And that is what concerns me. But, then again, as I’ve said I’m a novice in this area.

          7. Actually Matti, my interpretation is that they do consider there to be objective components to your taste of beer, even though they may describe this in muddled ways often enough. The stick that looks bent when partly in water exists as an illusion to us, and is explained by the different speed that light travels through different mediums. Nothing magical there in the end. They’re also calling the inability of science to grasp why qualia seem inherently private, for example, to be an illusion. Okay, but we’ll need scientists to make progress on a causal explanation for this as well. As you know I suspect that this only seems inherently private because the experiencer (such as yourself) is constituted by neuron-produced electromagnetic radiation. Such radiation would be an element of reality appropriate for science to objectively measure, and so wouldn’t be private if we had the right tools to assess that radiation from moment to moment, and thus what you experience by means of beer or whatever else.

            Conversely they believe that the experiencer exists by means of the right code properly converted into other code — no physics-based instantiation mechanisms to potentially check. I find this belief to have all sorts of magical implications that have not yet been sufficiently scrutinized. In a causal world computers should only do things by means of code that animates appropriate output mechanisms, not by means of code alone. So that’s my only true beef with “functionalists” — they’ve taken science down a magical path here when it might otherwise explore various potentially causal explanations. And note that I’m speaking of an experimentally falsifiable idea while their explanation cannot in itself be demonstrated to be false. “Whoops, wrong code. Let’s try another…”

          8. “I’m not saying any of these things are unreal. I think they exist as functional mechanisms”.

            So none of it is illusion because it is all functional? You’re just objecting that anybody makes a big deal out of it, and that making a big deal out of it is the problem.

            Stating something has a function doesn’t let you off the hook of explaining how it works. Wheels on a car are functional but they still consist of inflated rubber on a rim attached to the end of an axle. I think you and Frankish are skating over the lack of explanation. What are the rims, tires, and axles for consciousness?

          9. It’s generally agreed that functionality is where scientific progress on the “how it works” is possible and happening. These are Chalmers’ “easy” problems. Everyone agrees they’re not really easy, just scientifically tractable, as opposed to the hard problem, which, if you accept its distinct existence, is beyond science.

          10. “Functionality” mainly explains what it does, not how it works. We could propel a vehicle on treads like a tank or on wheels with rubber tires, but the implementations are different. That’s the problem. There is usually more than one way to do a particular function. And we could also debate what exactly the function of treads or wheels is. If the function is just to provide a stable upright platform for a passenger compartment, then the runners on a sled might serve the same function. If the function is to transmit power from an internal power source to generate motion, then some sort of claw in front of the vehicle might pull it along and serve the same function. If it is both functions, then how do we know there might not be three or four more functions we don’t understand yet? How do you know what the correct function(s) is for what you are trying to explain? You can pick and choose your functions to match whatever your theory is.

            Right now in consciousness research, we can see the wheels but we can’t see a connection between the internal power source of the vehicle and the wheels. The hard problem only seems insolvable because our models for how it works have too many gaps for an adequate explanation.

          11. It seems like what establishes function is the causal relationship between the process and its environment. For example, DNA molecular chemistry may work the same whether it’s embedded in cellular machinery or not. But it’s the relationship with the surrounding cellular systems which provides its genetic function. It’s the same with neural circuitry. We still have a long way to go, but progress is being made.

          12. Your DNA example is purely chemistry. At the end of the causal chain, you end up with more chemistry. At the end of a causal chain for neural processing of the flavor of beer, we would expect taste. If taste is not more chemistry, then the originating stimuli must have been transformed along the way. Where and how does the transition happen? If taste is a representation of the molecules that flavor beer, then what is it?

            To me the only thing that makes sense is that taste is a waveform that represents the molecules or the EM waveforms of the molecules that flavor beer just as the qualia of sight and hearing are also representations of waveforms. The function of the neural circuits that produce consciousness is to transform external waveforms into internal waveforms that represent the external reality.

          13. James, I think I agree, at least partly, with your analysis. You say taste is a representation of the molecules that constitute the flavor of beer—not something else. And, if I’m following you correctly, then I totally agree—as should be clear from my ad nauseam comments about beer above. That seems to me simple and straightforward and a valid rebuttal to any claim of illusion. What I think confuses the discussion (and impels some to use the term illusion) is a refusal to accept that the taste experience is about one object. In brief, the object of the taste of beer, and what causes that taste experience, is the beer. As I said above, the cause of the experience is the object of the experience. The “object” of the experience is not the internal subjective experience—it’s the beer. So, I agree with you completely that Mike’s use of the word illusion is erroneous. However, it is a totally separate and different question as to “how” the mind concocts that representation of the object of experience. But the object of our perception is certainly not the subjective experience itself. As I said, I don’t think this is a trivial distinction and I think we have to break this issue down into its parts to make any progress here. But having argued this much, I feel comfortable saying that my subjective experiences are not illusions.

          14. Taste in this example is a representation of some limited aspects of the composition of beer. By controlling the type and amount of hops in the brewing, for example, we play to certain aspects that are detectable. I would imagine the aspects we can perceive are controlled to some degree by the excitability of sensory neurons and their chemistry, which is why we see only a limited range of the electromagnetic spectrum.

            There are other factors too at work. The taste (wavelength) of beer can be stored as a memory for later retrieval to compare with the tastes of future beer consumption. It can be paired with other memories in learning. Direct and indirect memories continually modify the current experience.

            The term “illusion” is fine with me in the sense that taste is a limited representation of something else, but there is something else – the molecules of the beer – which is being represented, so the word is misleading if it suggests taste is completely disconnected from reality. The mind is capable of producing something disconnected from reality, but that is usually called a “hallucination”. Even those may come from memory, or perhaps from some sensory primitives embedded into the structure of the nervous system of the species.

          15. Interesting discussion guys. I’ll just note again the example I used above, that the taste of beer is different for someone taking their first ever sip from the taste a regular drinker experiences. Again, the question is, what’s changed? The molecules in the beer and initial chemical reactions on the taste buds should be roughly the same.

            Or consider the taste of broccoli. For me, broccoli is basically tasteless, so as long as it’s paired with cheese or something, I have no problem eating it, even enjoying it. But many people find it tastes bitter and can’t stand to be anywhere near it. Are we getting different versions of broccoli? No. We’re having different taste experiences, differences which may be genetic.
            https://en.wikipedia.org/w/index.php?title=Broccoli&oldid=1126806123#Taste

            I think the problem here is regarding taste as something distinct from functionality. But taste has an evolutionary adaptive purpose. It clues us in when to avoid something, or when to lap up a lot more of it. In other words, it’s a reaction, a conclusion (actually usually a mix of conclusions), a judgment reached by the early sensory processing areas of our brain. So it shouldn’t surprise us that there will be variations, or that we can learn to enjoy the taste of something that initially tastes awful, like beer or coffee.

          16. “taste of beer is different for someone taking their first ever sip from the taste a regular drinker experiences”

            Actually I think I answered that when I wrote:

            “There are other factors too at work. The taste (waveform not length as I wrote above) of beer can be stored as a memory for later retrieval to compare with the tastes of future beer consumption. It can be paired with other memories in learning. Direct and indirect memories continually modify the current experience”.

            We never drink the same beer twice. This is even more profound with PTSD and phobias.

          17. I suspect we agree on a lot more than it would seem if we completely deconstructed our positions. 🙂

            Actually in the comment above I really didn’t name a substrate. “Waveform” simply describes the end of the causal chain as something that has wavelike properties. It might be EM or something else. EM at least has the advantage that we already know and understand something about it. And it seems a natural byproduct of electrical activity. But I’m open to other ideas.

            I think it must be wavelike because it is hard to imagine how the brain can integrate hundreds of circuits digitally – the binding problem. Taste is sometimes said to consist of five modalities – sweetness, sourness, saltiness, bitterness, and savoriness. But there could easily be many more which we haven’t identified or are outside our vocabulary. At any rate, all of this merges in unique ways for each type of beer to produce what seems to be a single taste. This merging gets explained fairly easily if we think of circuits for each modality generating something wavelike that combines with the waves of the other modalities/circuits to create a unique waveform for a single taste. Likewise the taste can combine with the aroma, and the beer experience can combine with prior beer experiences (memory) to produce the final resolved taste of beer at one point in time. I can’t see how this could easily be done digitally without an identifiable location for the consolidation. But the efforts to find the locations where consolidations occur have been fairly fruitless. We can find where some of the modalities are processed because usually damage to a single area will eliminate or severely reduce the ability for some modality – for example, the rare people who lose the ability to perceive color through brain injury.

          18. We probably do agree on a lot. I think we’re just drawn to the differences, because they’re more interesting to talk about than just confirming each other’s views.

            I tend to think the binding problem is mostly misconceived. There isn’t one place where everything gets combined (unless we want to consider the overall brain that place), but numerous, with the convergences happening in each particular location for particular needs. It feels more unified to us than it is, because we flit between the fragments very fast, and can’t perceive the gaps. (We’re not conscious of what we’re not conscious of, of the gaps in our consciousness, unless there’s something there to clue us in about a gap, like a time jump.)

          19. I think you are underestimating the binding problem. I have often thought of this as combining vision, hearing, etc. into a single integrated sense of reality. I could agree that we move from hearing to vision and back again quickly, so there might not be as much integrated as it seems. But the problem goes beyond that.

            With the taste of beer, we don’t taste bitterness apart from the hoppy flavor. They are combined.

            Vision is the most complicated. Imagine tossing around a yellow ball in the backyard. Motion, color, edge detection, and probably a lot more need to be combined to create any useful viewing of the ball moving from person to person. The color needs to be combined inside the edges of the ball as it moves in the air. We don’t ever see edges and colors separate from each other. The yellow color doesn’t lag behind the ball.

            Brain lesions to V5 can produce akinetopsia – an inability to detect motion. Damage to V1 can produce blindsight, which results in an ability to detect motion but not see. Damage to V4 can result in complete loss of color vision. We can find locations for each modality, but there is no spot I know of, in the visual cortex or elsewhere, that can be damaged in a way that results in completely uncombined sight – yellow splotches in one part of the field of vision, a colorless ball in another, and something moving somewhere else that is maybe more felt than seen. We can, of course, damage the entire visual cortex and see nothing. I doubt we are flitting between edges, color, and motion when we watch a ball being tossed about in the yard.

            Similar complexities exist with every sense.

          20. The issue, I think, is that there isn’t just one integration happening in one location. I don’t think we’ve found anything like that because it doesn’t exist. But there are innumerable specialized integrations.

            For example, with the ball you mentioned, detection that it is a particular ball, or maybe a particular type of ball, happens in the temporal lobe, where signals converge from various modalities and intermediate detectors, including color, shape, feel, weight, etc. However, we may not be conscious of all those details. Initially we may only be conscious of its ballness (maybe its soccer-ballness if that’s what it is). If our exposure to the ball is brief, we may well never be conscious of some of its details.

            If we do focus on the ball, we quickly flit between various details, such as its size, shape, color, surface pattern, smell, etc., with each of these gestalts, these gists, dominating attention (influencing the population of detectors) for a brief period. Our memory of it will be in terms of perceiving the whole thing, but even recalling that memory of the whole thing will involve flits between those gestalts.

            So with the lesion examples you gave, some of the detectors get knocked out. The higher level detectors (like the ball one) might still have enough lower level detector signals to work. So maybe a ball is detected but not its motion, or color.

            Put another way, we’re dealing with a network of specialized detectors, with no need for one detector to rule them all, and no single detection path from stimuli to attention to report (or other action).

            Yet another way is to say the binding is a distributed, crowd-sourced process rather than a centralized one. If you think about it, that would be true for any putative central one anyway, since at some point even a central one has to have components, at least if we’re sticking with physicalism.

          21. ” Put another way, we’re dealing with a network of specialized detectors, with no need for one detector to rule them all, and no single detection path from stimuli to attention to report (or other action)”.

            Yeah, that’s what I’m saying, but I’m also saying a digital approach doesn’t work without multiple integration points. If the brain detects color in one location and motion in another, then some calculation would need to be done somewhere to put the two together. One thread can wait on another thread to combine a result, but every thread can’t wait on every other thread without generating too much overhead.

            You seem to be confusing the flitting of attention from one thing to another with the binding problem. With careful self-observation we can notice that the detail of what we’re noticing shifts constantly from one thing to another. The binding problem, however, is at a much lower level than the attention flitting you seem to be talking about.

          22. James, thanks. This is a stimulating discussion for a novice to the philosophy of mind like me. It stretches some new mental muscles. But, frankly, the term “illusion” is not fine with me. Language is important—much fallacious thinking is built on ambiguous and imprecise use of language. And, as I tried to say, I think the use of illusion to talk about subjective experience comes from the erroneous idea that our subjective experiences are the object of our subjective experiences—which is wrong. It creates a confusing duality muddle in my opinion. I’ll be a stick in the mud on that one! It has taken me years to shake off those inane discussions about “sense data” I suffered through as an undergrad many many years ago. I do not experience sense data—I taste the beer! And I’m sticking to that story! (And, oh yes, there are illusions. I don’t deny that. But we know what they are; we can puzzle them out or identify the pathology that creates them.)

      2. In the broader branch of philosophy known as metaphysics, “intrinsic” often means localized in spacetime. So, for example, the fact that my temperature is 37 C is localized to me. Remove the rest of the universe outside my skin, and I’m still 37 C. The fact that I am having a conversation with you is extrinsic/nonlocalized. If you did not exist, I wouldn’t be. Moreover, if you had never existed, I couldn’t even *think about* you. The aboutness of thoughts is extrinsic. Good Old Fashioned Functionalism (GOFF – that’s a funny coincidence of opposites) makes phenomenal qualities extrinsic too.

        1. One of the problems with intrinsicality, which Dennett points out, is that all of the examples of it, if you scrutinize them, turn out to be problematic. For example, saying temperature is intrinsic to your body is true at the level of your body, but within your body it involves the kinetic energy of the molecules interacting with each other. And if your body was in space, it wouldn’t be at 37 C for long. Another common example in physics is mass, but even here mass turns out to be about the interaction between matter and spacetime. So intrinsicality in any practical sense seems relative to a particular viewpoint and level of description.

          Metaphysical or absolute intrinsicality, if it exists, seems like something that would be completely acausal (or a-interactionist if we’re talking about fundamental physics). Under physicalism, that would make it completely unknowable. Of course, this is fine for the non-physicalist, although it seems dependent on the non-physical working according to completely unknowable principles.

          1. But why do you have to get all absolutist about it? Sure, 37C is temporary. And sure, temperature is a useful concept only at a (broad range of) level(s) of description. But that’s OK, *we* live at that level of description.

          2. We don’t have to. But if we don’t, then my point in the post is we’re working with a model with a narrower scope, and we no longer have the explanatory gap everyone wrings their hands about. It’s the strong realist that gets absolutist, and it’s the resulting notions that illusionists deny.

    2. What I like about James’s reply is that it breaks apart the “package deal” of four qualities and goes a la carte instead. It also, implicitly, notes that some (I use “some” broadly here so that it is compatible with “all”) of these qualities are subject to varying theories/definitions.

      1. I think a lot of what we call mind and matter derives from our sense that there is an internal and external in our reality. The terms “private” and “intrinsic” are trying to get at the internal part but they can only be understood in reference to the external part. If we believe mind comes from neurons, then the internal/external dichotomy must also come from neurons. The external world we know in all of its categories and relationships doesn’t exist outside of the knowing neurons even though those categories and relationships may (or may not) have correspondences in some reality outside of our own. If this is what illusionism is saying, then I can sign on for this part but calling this reality an illusion seems so Eastern mystical. 🙂

        The real “mystery” is that our current scientific and philosophical models of the external have no way to explain the internal. This explanatory gap seems to be what illusionists are trying to dismiss. Rather than dismissing the issue, I am simply in favor of finding better models.

        1. Your comment reminds me of Noam Chomsky’s claim – which I also like – that the “mind/matter” problem of traditional philosophy is imaginary, because “matter”, as conceived around Descartes’s time and for a few centuries after, doesn’t exist.

          1. I have previously called the notion of “matter” incoherent. At the very least it would include particles, but then there are multiple types. But particles are also waves. And what about all the other forces? You can lump it all together and call it all physical, and then in a sense we almost get at the commonsensical meaning of anything outside of our own mind. Except, as I said, all of that stuff we think is outside our mind actually is in our brain, according to science, which in a way I suppose would also make it inside our mind. So the entire nonsense breaks down. These are the current scientific and philosophical models we need to move beyond.

          2. “…all of that stuff we think outside our mind actually is in our brain, according to science, which in a way I suppose would also make it inside our mind.”

            A model that posits an “external objective reality” as well as an “internal objective reality” is illegal under subject/object metaphysics. You’re not going meta-physical on me are you Jim? 😎

  2. I think the discussion of scope provides a useful categorization, although I would instead phrase it in terms of knowledge. The phenomenal realist wants to widen the scope of our knowledge beyond the confines of science, whereas the physicalist wants to limit it to the scientific domain. Both the realist and physicalist mostly agree on what the epistemic limit of the scientific domain is supposed to be (usually limited to knowledge about structures and functions). Since the realist construes the phenomenal to be intrinsic, it will necessarily fall outside of our scientific knowledge. I do see intrinsicness as the most important disputed property. All the other purported properties of qualia, like privateness and ineffability, are entailed by intrinsicality.

    If qualia are intrinsic and non-relational, then obviously they will be private and ineffable. The former because a scientific analysis of my brain won’t capture intrinsic states, and the latter because my verbal behavior is functional in nature, and so it can’t describe my own intrinsic qualia to another observer.

    Thus, everything hinges on whether you accept the existence of intrinsic qualia, and that in turn hinges on whether you accept the possibility of non-relational knowledge beyond the scientific domain. Most physicalists adopt causal epistemic accounts (where you can have knowledge of x only if your epistemic state participated in some causal relation with x). By contrast, (many) realists like Chalmers propose to expand the scope of our epistemic accounts by introducing knowledge via acquaintance. The idea is that we can acquire non-relational knowledge by being constituted by our own non-relational phenomenal states. In other words, our epistemic beliefs about intrinsic states are themselves literally intrinsic (so they are not brain states). One can therefore ‘know’ about one’s phenomenality because one IS their own phenomenality. That’s how these accounts typically work anyway.

    So I would co-opt your discussion of scope, if I may, and bring it into the domain of knowledge. It’s really all about intrinsicality and whether we can acquire non-relational knowledge. Obviously, most phenomenal realists agree that we can’t have ordinary scientific knowledge of classic qualia. They just disagree about the scope of our knowledge, and claim that it shouldn’t be restricted to the causal kind.

    1. It seems like all of the attributes (intrinsicality, ineffability, privacy) are tangled up with each other. Ineffability and privacy are the indicators, the symptoms, with the others being the conclusion. But that conclusion requires a strong version of ineffability and privacy. The weaker practical versions Dennett discusses wouldn’t make it necessary. Arguing for just one of them, as someone on Twitter recently did when they focused on privacy, seems problematic. Privacy implies ineffability and vice versa, and strong versions of these seem to imply intrinsicality, which then implies direct acquaintance. At least unless I’m totally missing something.

      The knowledge focus is interesting, getting into Frank Jackson’s knowledge argument. The question I wonder is, what do we mean by “knowledge” here? What does the knowledge from direct acquaintance get us? What can we do with it we couldn’t do before, specifically what can we do that we couldn’t do with the functional knowledge?

      Of course, I’m asking a question about causal effects, and you noted above that this is about knowledge that isn’t the causal kind. But if we exclude causality, then we seem to be in epiphenomenal territory. (Of the philosophical kind, not the engineering / scientific version.) This seems to leave the relevant entities as a metaphysical add-on that makes no difference in the world.

      Usually at this point, I have to admit we can’t definitively rule out such an add-on. But we also can’t rule it in either. It’s like Platonic abstract objects. Something we can see as existing or not, with no effect on scientific investigations. I suspect this is what Noam Chomsky meant the other day when he said that panpsychism and illusionism can be reconciled as long as we frame them in the correct manner. Still not sure about that. But it definitely seems true from an instrumentalist perspective.

      1. “Ineffability and privacy are the indicators, the symptoms, with the others being the conclusion. But that conclusion requires a strong version of ineffability and privacy. The weaker practical versions Dennett discusses wouldn’t make it necessary. Arguing for just one of them, as someone on Twitter recently did when they focused on privacy, seems problematic. Privacy implies ineffability and vice versa, and strong versions of these seem to imply intrinsicality,”

        You can reason from (strong) ineffability & privacy to intrinsicality, but typically it’s done the other way around. The common philosophical arguments for qualia, like the knowledge and zombie arguments, are meant to demonstrate that qualia are intrinsic because they cannot be accounted for in our functional accounts. From there we infer a strong version of privacy and ineffability. Although if you had an independent argument for the latter stuff, you could also use that to infer intrinsicality. I just haven’t seen those types of arguments.

        “But if we exclude causality, then we seem to be in epiphenomenal territory. (Of the philosophical kind, not the engineering / scientific version.) This seems to leave the relevant entities as a metaphysical add-on that makes no difference in the world.”

        Just as phenomenal realists and physicalists will often differ in their epistemological outlooks, they also differ in their ontological picture of the world. Most phenomenal realists, like Chalmers, conceive of the world domain as being composed of physical and phenomenal elements. So, under this conception, knowledge by acquaintance would make a difference to our understanding of the world, specifically to our access to the phenomenal component.

        Thus, there is no question that we could definitively rule out (or in) the existence of phenomenal consciousness, provided we had direct knowledge of such states.

        All of this assumes of course that qualia are actually intrinsic. That is something that I am not on board with as of late.

        1. That’s interesting. I guess since ineffability and privacy are observables, at least to an extent, I assumed the reasoning went from them to the other concepts. If not, then I wonder what the motivation is for positing intrinsicality. But once someone is convinced of intrinsicality, I can see deriving the others.

          With Chalmers and company, the question for me is what it means to have those phenomenal elements in our ontology. If they don’t physically exist, then they exist in some other manner. Which is why I made the comparison with platonic abstractions. Under platonism, abstract objects exist, but with no temporal-spatial extent. Their existence is of a different sort than physical existence. Chalmers has an interesting stance toward platonism, a sort of attitude that there’s no fact of the matter on whether they exist (at least as far as I understand it).

          “Thus, there is no question that we could definitively rule out (or in) the existence of phenomenal consciousness, provided we had direct knowledge of such states.”

          Right, but how do we establish that we have such knowledge? Similar to platonic objects, is our knowledge of those phenomenal properties? Or of functional perceptual information that leads us to think they’re phenomenal properties?

          1. About observables, remember that it all depends on whether you accept direct acquaintance. If you do then phenomenal states are directly, if not infallibly, observable, since it won’t depend on some external causal network which might be fallible (one effect might have many different causes). The same applies to the platonic universals; we don’t have direct knowledge of them, so it’s no surprise that Chalmers is agnostic about their existence.

            About intrinsicality:
            Interestingly, if you read Chalmers’ book (The Character of Consciousness), he basically says that the traditional arguments (zombies, Mary’s room) are meant more to serve as intuition pumps. But he has always maintained that the principal argument against physicalism is that physicalism only explains structural and functional knowledge, but phenomenal states aren’t just structural. Why not? I think there are two main reasons:

            1. Our evidence from introspection. Phenomenal states present themselves as intrinsic.

            2. Ontological accounts of structure (like in ontic structural realism) typically try to ground structure in abstract mathematical terms, usually by invoking something like set theory.

            I think the biggest problem with illusionism is that it is incomplete. It only addresses the first argument for intrinsic qualia, but not the second. The second would demonstrate that qualia can’t be structural, because structure by definition is just some abstract non-concrete entity. Hence, the ultimate need for eliminativism regarding phenomenal consciousness. The weak illusionists are thus starting from an inherent disadvantage, because they too are trying to find a way to cram phenomenal consciousness into the second, and inherently incompatible, conception of structure.

            I actually disagree with both (realist and illusionist) approaches. I’m writing a paper at the moment on this very topic, where I’m trying to find some mathematically independent way to characterize structure. The ultimate goal will be to dissolve the second objection and show that qualia can be completely structural. We just have to re-conceive our notion of what we think structure *is*. I can send you a link when I’m finished if you’re interested.

            Importantly, once we have done so, the phenomenal realist will have to concede that the hard problem is solved. All the phenomenal realists that I know and follow (e.g. Chalmers, Goff, Hedda Hassel Morch) define qualia in terms of the “what-it’s-likeness” subjective sense. They then argue towards intrinsicality, using the arguments of either 1 and/or 2. Chalmers typically relies on arguments from introspection, while arguments about abstract structure are a favorite of panpsychists. Next they conclude that since causal knowledge can’t account for intrinsic states, we must have some kind of special direct acquaintance (in the form of constitution) with such states. It might not be put explicitly in those terms, but that’s the argument chain as I’m following it.

          2. Good point about intrinsicality implying direct acquaintance. Okay, that makes all of the attributes Dennett listed hard to dismiss, at least if we’re going to posit qualia / phenomenal properties as something distinct from functionality.

            Thanks for elaborating on the motivations for intrinsicality. The introspection one resonates with what I’ve read. I think it crucially depends on accepting that introspection is infallible. If it is fallible, then we have to attach question marks to what it’s telling us, particularly things that don’t accord with what science is elsewhere telling us. (Which of course is what illusionism is about.)

            I’m not following the second one though about structural realism. Chalmers actually discusses this in his book Reality+. His take there is that what separates structural realism (both OSR and ESR) from mathematical platonism / the mathematical universe hypothesis is causality. I’m not entirely sure it works, but it doesn’t seem like he still saw structural realism as a motivation for intrinsicality by then. (That said, there’s a lot in that book so it’s possible I missed or forgot about it.)

            Myself, I don’t necessarily need the causal part to distinguish structural realism from the MUH. For me, it’s enough to know that not just any mathematics will do. Unlike the MUH, structural realism doesn’t say all math exists, only math that describes and predicts phenomena. Although I suppose you could say that this is equivalent to Chalmers’ argument, since causality would be what SR has that many mathematical structures lack.

            Your paper sounds interesting. Definitely let me know when you’re comfortable sharing it. Although if it’s heavily mathematical it might be over my head.

          3. I’m onboard with the argument from introspection being rather weak with regards to the intrinsicality of qualia, though I demur on going full-scale eliminativist. As for structural realism, I haven’t read Chalmers’ Reality+ and I don’t actually recall him anywhere stating an argument of the second sort (although I still think he implicitly relies on it).

            Arguments from abstract structure are really something more common to panpsychists, since they also want to use the problems with structural realism to motivate Russellian monism.

            About the collapse of physical structure into mathematical structure:

            This is definitely a chief concern. I’m not sure it works to say that the physical domain is limited to observable phenomena, however. Some account of what observables (or ‘phenomena’) are will have to be given, and this will also have to be defined in structural terms. You can definitely construct a limited mathematical model to characterize the world domain, but it’s not clear how to justify belief in that model in a non-circular manner. In other words, how can we describe what separates this model-theoretic structure from another structure without appealing to it being ‘physical’? Some ontic structural realists, like Ladyman and Ross (in their book Everything Must Go), advocate for just accepting the separation of the physical and mathematical as a brute fact. That’s perhaps fair; at some point or another we will have to be satisfied with a fundamental answer and stop our questioning.

            About causal structuralism:

            Goff actually addresses this here (p.3): http://www.philipgoffphilosophy.com/uploads/1/4/4/4/14443634/is_it_a_problem_that_physics_is_mathematical.pdf

            He writes that there are regress arguments against causal structuralism, and that it seems circular. If this is true, then that’s also an argument for intrinsic qualia. For it seems like we do have knowledge, but we wouldn’t have knowledge if causal structuralism + physicalism were true (according to the regress arguments).

            So that would be a defeater for physicalism. I don’t really buy into the regress argument but the point is that concerns about structural realism motivate phenomenal realism, and people like Goff don’t think that appealing to causality can save physicalist-structural realism.

            I’ll definitely let you know when I’ve finished the paper, and don’t worry it’s not really math heavy at all.

            Cheers Mike!

          4. Goff does seem to spend a good amount of time exploring what he sees as the issues with structuralism. In fact it was one of his writeups about it that alerted me to the viewpoint’s existence. Overall though, I think the standard he’s judging it by is too high. Structural realism can’t account for ultimate reality, but neither can any other viewpoint I’ve seen. At least without the same resort to asserting brute realities. That’s never convincing for me. Historically what seemed like brute facts always eventually ended up reducing to something else.

            Thanks Alex! Looking forward to the paper!

          5. Alex:
            “I’m trying to find some mathematically independent way to characterize structure. The ultimate goal will be to dissolve the second objection and show that qualia can be completely structural.”

            That sounds very cool; I hope to see it soon.

  3. My view is that in discussing qualia we are categorising stuff using mental categories, and that mapping these onto physical entities and processes is possible, but messy. For example it results in spatial boundaries that align with the physical position of neurons and their parts (even when we are moving around in a complicated way), and temporal steps that correspond to the discrete steps of the cognitive cycle, with something hidden to the conscious mind going on to get from one such cycle to the next.

    I would also note that relative to a physicist’s perspective, discussion of qualia smuggles in a couple of things which seem so normal to us that we don’t usually question them as assumptions: That I am a single thing, even though I’m just a bag of bits…and similarly that the world out there is split into discrete bits that persist over time. By contrast the equations of physics just have values over time and space that vary, and don’t show up such discrete entities (even if we do categorise some for convenience). Some interesting ideas follow from shifting perspective to think of oneself as just a set of cells all collaborating to try to get a better overall result than if they each did their own thing.

    1. I’m on board with regarding a functional version of qualia as categorizing conclusions. As you note, no one really doubts that the functionality can be mapped to physical processes. Although we obviously still have a long way to go on this. But it seems to me that just talking about perceptual information in a functional sense, for which there’s no deep metaphysical mystery, is clearer for this than using the loaded term “qualia”. It’s like talking about “spirit” when we mean something other than a non-physical agent.

      As someone who posts under the handle “SelfAwarePatterns”, I agree with your second paragraph completely. I think we struggle with this because we have a tendency to think about consciousness and the mind using social concepts, in which a person is a unified entity to which we apply the intentional stance. But to actually understand the mind, we have to look past that weakly emergent reality and get into the sub-agent processes. That’s very counter-intuitive. A lot of the mystery, it seems to me, stems from our struggle with this.

  4. So Mike, I think your approach to philosophical language will soon drive you out of your “mind”. Literally!

    For any language that applies to the most subjective aspects of human experience, lay people will import their implicit theories. Such as dualism. And philosophers sympathetic to those lay people will make it explicit, and do their best to attach these assumptions to any terms of art: “qualia”, “phenomenal”, whatever. Some of the logical positivists (I forget who) objected to the term “mind” for exactly this reason. If you insist on avoiding terms that people attach such associations to, you will soon be out of “mind.”

    I say no to the eternally moving goalposts. Lite qualia are qualia.

    1. Well, obviously I hope I don’t lose my “mind”.

      All words involve associations. It’s what words are. In the end, my goal with words is to invoke the right associations. When a particular word has associations that are pervasive and they’re not associations I want to invoke, then it makes sense to use another word, or add clarifications.

      I’m not particularly enthusiastic about having a battle over the meaning of words like “qualia” or “phenomenal”. Others can have that fight. If they succeed, I’ll adjust accordingly. Until then, I’ll use words to more closely match what I mean.

  5. How about this, Mike. When people use terms such as “qualia”, we interpret their meaning on the basis of their stance on causality. If we don’t know that stance then we simply ask them. So when I use such terms, you can be certain that I mean worldly stuff which is at least ontologically non-private (and so on) rather than spooky. It’s the same if you, Frankish, or Dennett ever use such terms (and even if I suspect that your “information only” explanation would require mechanistic instantiation in order to actually be causal). Conversely when David Chalmers uses the term “qualia” for example, from his platform he should be referring to something that actually is ineffable and whatnot in an ontological capacity. Thus I’m suggesting that we always interpret such terms according to the specific causality position of the person speaking. How about that?

    1. I don’t think that would work, Eric. It depends on what the person is trying to convey. So Frankish and Dennett, who are card-carrying physicalists, when they use “qualia” or “phenomenal”, are usually referring to the classic versions used by philosophers with the ontology that allows for intrinsic, etc, which of course they’re usually attacking. So really, you have to ask the person which version of those concepts they’re referring to, if they haven’t stipulated it and it isn’t obvious from the context.

      Chalmers is a tricky one. He’s one of the philosophers who try to back off of those intrinsic, etc. attributes. He’s far from alone in this. Goff did it in a response to me last week on Twitter. And Michael Tye does it in his SEP article on qualia.

      The philosophers who do this still maintain that it’s all very mysterious and beyond science, but seemingly while denying the attributes that make it so. Pushed, they typically respond with ambiguities like “what it’s like”-ness, ostension, or using different words to describe what are essentially the classic attributes. (In Goff’s case, he threw privacy under the bus, but then said that science can’t access phenomenal properties, which is usually what privacy in the strong sense means.)

      So if someone is going to use those words, they really have to clarify what they mean, or live with the fact that a lot of people will misunderstand their meaning. And those of us listening have to be aware of the ambiguities.

      1. I meant someone using those terms earnestly, Mike. If Frankish and Dennett want to use them as inherently magical and non-existent, then that’s on them for not using the terms earnestly. In truth they don’t need to use such terms like that for attack purposes. If it were understood that science grows obsolete to the extent that causality fails, then they might more effectively just say “Maybe, though I don’t believe in magic so for me that’s a nonstarter”. Of course saying things which display effective thinking is not the same as saying things that make one popular.

        Regarding Chalmers, I’ve noticed that he likes to choose his side depending upon who he’s speaking with, a hallmark of good salesmanship. Though I can argue with the integrity of that stance, I can’t argue with the success he has achieved.

        In any case what you might be missing is that a person who resorts to “what it’s like” and so on is not necessarily referring to a magical idea. That’s certainly not the case for me, and naïve or not, I don’t consider it to be the case for others in general without reason. Chalmers and Goff clearly do say spooky things in this regard, even if they sometimes also attempt to claim otherwise. Unfortunately we must assess the earnestness of our interlocutors.

        1. Eric,
          In general, I think all of these guys are more earnest than you take them to be, even the ones I see as wrong. That’s not to say there’s not some salesmanship going on. But I think it’s mostly focused on getting out ideas they believe in. Most of them hold tenured positions with only modest income from books or articles.
          They’re not in it for the money. (If they are, they made poor life choices.) Their biggest goal is to eventually be vindicated by history as the one who was right.

          Along those lines, I don’t think they mean to be evasive in denying the intrinsic, ineffable, etc attributes. I think they’re just not thinking clearly.

          For the “what it’s like” phrase, we’ve discussed this before. The issue is it’s really impossible to know exactly what someone means with it. By itself it’s just too ambiguous. Certainly some people use it to refer to something physical. But the coiner of that phrase, Thomas Nagel, means it in ways similar to what Chalmers, Goff and others do.

          1. You’re probably right about that Mike. Beyond standard salesmanship, each of these highly regarded intellectuals probably tends to believe what they say, even a chameleon like Chalmers. In order to be truly dishonest they’d need to know what was right and then choose to go with something that sells better. I doubt any of them often have such insights, let alone then go with the darker path.

            And it could be that I’m simply more optimistic than you that people who use the “something it’s like” heuristic in a naturalistic capacity mean nothing more than Schwitzgebel’s innocent/wonderful conception of consciousness. Either way it seems like it would be productive for a respected band of meta-scientists to emerge who agree that science itself grows obsolete to the extent that causality fails. Thus they’d effectively split science into both a mainstream “causal” form of study, as well as a demoted “causal plus” variety. Furthermore I’d hope for various effective epistemological and axiological principles to be provided as well.

  6. It makes sense to ask “How do colors look to a dog?” but it does not make sense to ask “How do colors look to a computer?”, because presumably dogs have qualia but computers haven’t got any qualia; computers have merely functional states of detecting colors.

        1. So if a machine with a control center exhibited avoidance behavior when damaged, would that be sufficient? If not, what would it have to have to qualify? If an organic brain, what about that in particular makes the difference?

          You don’t have to answer. Just trying to trigger consideration of what’s driving the intuitions.

          1. “So if a machine with a control center exhibited avoidance behavior when damaged, would that be sufficient?”
            No, there are more fine-grained functional differences. I am a nonreductive functionalist: beings with a suitable functional organization are conscious, but consciousness is not mere functionality. If my seeing colors were only functional, colors would not have any look to me, just as for a computer.

          2. It depends on what we mean by “look”. I think “look” is a collection of detections, of the type you mentioned above for the computer, but just a lot more of them all at once for a dog or person. To talk about what something “looks like” is to talk about the galaxy of detections it sets off in us. If there’s something extra there, it seems like it would come along for the functional ride without contributing.

          3. The look of a color is the thing about it that Mary learns when she sees color for the first time. The look of some object, for example a table, is what shape and color that thing has. The look of a color isn’t defined by something else, but it is knowable by experience. I don’t ask “What does color look like to a computer?” because a computer does not experience color.

  7. The following is a quote from SEP:
    “Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.”

    Sorry, but this rationale is woefully inadequate and a “whitewash” job at best. Consciousness is a “state of being” and that state is categorically dependent upon its internal constitution for its existence and subsequent function because that internal constitution is the substrate of the system itself. This rationale would be the same as asserting that the classical realm of physics does not depend upon the quantum realm which constitutes it but only on the way it functions, or the role it plays in the universe. If one’s profession is “white-washing”, then the rationale works…

    “This doctrine is rooted in Aristotle’s conception of the soul, and has antecedents in Hobbes’s conception of the mind as a “calculating machine”, but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century.”

    The biggest problem with this doctrine is that it is modeled after what? That’s right, a calculating machine. In layman’s terms, that makes functionalism an analogy. The rationale is simple: one cannot model something we do not understand (the mind) after something that we do understand (a calculating machine). Why? Because analogies are only true for the “thing” which is the analogy but not true for the “thing-in-itself”, which is the true nature of reality.

    At the end of the day, the so-called hard problem of consciousness is an artifact of the ridiculous notion that mind is a calculating machine. It is a problem we created with our short-sighted, silly notions. As an aside: What cracks me up about Frankish is that when an article is written calling him out on his ridiculous position, he whines and cries like a little girl.

    1. I agree with the spirit of what you’ve said here Lee, though I’m not sure that you’ve reduced it down to the most essential issue. If you agree with the following assessment then I’d like to hear about it. If not then what am I missing?

      The brain does function as a calculating machine, and it clearly does create mind somehow. But how? The functionalist believes that mind exists by means of the proper neuron firing in itself, that is, as information without any instantiation substrate. That’s important because mechanical instantiation would severely hinder the popular dream of existing as mind in a human-made computer in the form of information alone.

      Thus if some marks on paper correlated with the information that your thumb sends your brain when it gets whacked were scanned into a computer, which then prints out sheets of paper correlated with your brain’s response, then the functionalist believes that something here would feel what you do when your thumb gets whacked. This of course makes no causal sense. What exactly would experience this thumb pain? They don’t say, though that’s surely partly because my thought experiment is virtually unknown in academia so far.

      Conversely I believe that brain information creates an experiencer of thumb pain by animating the proper sort of substrate. We can argue over what that substrate might be, but does that seem like a necessary condition in a causal world? Should an experiencer of thumb pain exist as brain information alone, or rather as something that brain information animates?

      1. Eric,
        In agreement with your brief assessment, the brain animates an experiencer. This experiencer is a separate and distinct system, one that emerges from the brain, a system whose very existence depends upon that brain, a system that is intrinsically linked to the brain and yet, it is a system that has causal power, is autonomous and uses the substrate of that brain for its own purposes.

        This rendition of mind as you and I see it is not accepted within the scientific or academic communities; and this lack of insight by so-called intellectuals is a huge problem. What actually constitutes this experiencer is unknown from a physics standpoint, but being a pragmatic physicalist myself, I would wager that this experiencer is wholly physical. The only physical substrate that could fit the bill for this experiencer would be a localized quantum field.

        We know absolutely zero about the quantum realm. But, if that realm is the internal constitution of a macro-classical universe, surely that realm could easily be the internal constitution of a localized mind. The quantum and classical realms do not operate in a vacuum; they are forever intrinsically linked.

        1. It’s good to hear that we’re aligned on the causal impossibility of substrate-less consciousness Lee. As you say, we know absolutely zero about the quantum realm. Or at least I know zero about it, beyond popular displays of particle/wave superposition, tunneling, and entanglement. So it’s too exotic a consciousness explanation for me right now. But classical neuron-produced electromagnetic radiation? And with its unified field nature, just as conscious experience itself consists of unified experiences of pain, taste, and so on all glommed together? And given that synchronous firing is the only reasonable neural correlate of consciousness found so far (since presumably synchrony elevates a given EM field into the correct parameters)? To me this possibility seems quite promising.

          If we could somehow disturb an EM field in someone’s head (and particularly of the minuscule parameters associated with synchronous neuron firing), and the subject were to report various funky phenomenal experiences correlated with that disturbance (with the tampering otherwise hidden from the subject), then the problem here might effectively become solved. In that case a wide range of notions on the market today should effectively be illustrated as erroneous. Or if researchers were able to induce a wide range of appropriate alterations to the EM field of various subjects, though none were to detect anything phenomenally strange, then it seems to me that this theory would become invalidated. Would you be in favor of designing and creating such testing, and even though you currently suspect an ultimately quantum rather than classical explanation?

          1. I can’t see mind as a classical system that owes its internal constitution to EM fields. It’s already been demonstrated by Hameroff’s research team that the mechanism for wakefulness of the experiencer has its origin in the microtubules, not EM fields.

          2. I’m not at all convinced by Hameroff’s research. The anesthesia argument isn’t that convincing, since anesthesia pretty much shuts down or significantly alters electrical activity, so there is hardly a physical theory out there that would expect consciousness to be maintained under that circumstance. It would also not be unexpected, on almost any of the theories, for microtubules to be involved; but it may be no more than that they are simply involved in performing computations. Maybe he has other arguments I’m not aware of.

            The Klein portion of Kaluza-Klein provides a quantum interpretation of the five-dimensional unification of electromagnetism and gravity. There is nothing that rules out mind as both an electromagnetic and a quantum phenomenon, possibly with the EM field as a bridge between the quantum and classical worlds.

          3. But come on Lee, you can’t just say an EM field substrate seems wrong to you. We all become invested in positions that seem more or less sensible to us personally. If certain neuron-produced EM fields happen to be the wrong consciousness substrate, then shouldn’t it be helpful to demonstrate this experimentally? Or otherwise, the converse? How might anyone who seeks the advancement of causal science disagree with experimental assessments of both effective and ineffective explanations?

            Regarding your belief that the Hameroff proposal has already been validated experimentally, I’m unfamiliar with that evidence. My understanding is that he’s been given quite a bit of publicity though. If he has good evidence from which to validate his theory, and his platform is quite well known, then does a vast conspiracy exist to thwart him even in the face of that evidence? Or could it be that his evidence isn’t all that good? Until sufficiently demonstrated otherwise, I’ll go with the latter.

            Then regarding my EM field proposal, if scientists could impart EM radiation in the head that was around the parameters of synchronous neuron firing, and subjects would report all sorts of unexpected phenomenal distortions during those periods, would you then tell me that this probably was not because consciousness exists by means of this specific substrate? Would you say that something else provides a better explanation for such evidence? If so then what would that explanation be? And note that it’s sometimes at least politically best to concede that your theory should be less explanatory than others given certain evidence, that is IF such evidence is ever found. People who refuse to make such concessions tend to be considered more “faithful” than “reasonable”.

          4. “….I’m unfamiliar with that evidence.”

            In contrast, I’m pretty familiar with his work. His research is narrowly focused right now but the main point I am trying to make is this: There is so much, much, much more going on inside neurons than simply the opening or closing of logic gates and the EM fields that radiate from the electrical activity; and his research is demonstrating that fact.

            Our consciousness is a unified, localized field that once animated by the brain is intrinsically linked to all of the different parts of that brain; and those connections are in parallel not series.

            As far as a complete theory of consciousness itself, I think Hameroff has “taken a wrong turn at Albuquerque” as Bugs Bunny would say, and that wrong turn will diminish the credence and credibility of the work he is doing now.

            Jim Cross has come up with some pretty good insights of his own lately so we all need to keep an open mind and not get fixated on just one thing.

    2. Seems hard to have any chance of making progress if we can’t compare things we’re trying to understand to things we do understand. I think the real test is how well the resulting model fits the data. But any model is, I think, going to be causal. That’s the only place scientific progress is currently happening.

  8. I follow the science Mike, and right now the science is telling us that there is much, much, much more going on inside the synchronous firing of neurons than simply the opening and closing of logic gates.

    Microtubules: it has been scientifically demonstrated that microtubules are the mechanisms within neurons responsible for wakefulness. The same chemical compound that is used to put a human to sleep for a surgery is the same chemical compound that puts a plant to sleep (keeps it from growing). You really should broaden the scope of your own research instead of whipping a dead horse…

    1. As I noted above, life is electrical activity. If you shut it down completely, life goes away as we know from sedative overdoses. If you shut it down partially in a sufficiently complex animal, consciousness goes away.

  9. I think there is a little bit of confusion over what functionalism is in this comment chain, and I hope people won’t mind if I address a few of the statements that were written above. To begin with, I interpret the definitional statement (in the SEP article) that mental state types do not “depend” on their internal constituters to be denying a necessary condition, not a sufficient one. A specific physical arrangement might not be necessary to instantiate a particular mental state (e.g. pain), but it is still nevertheless sufficient. So, it doesn’t follow on functionalism that mental state types can exist in the absence of any constitutive physical realizers, or as substrate-less entities, only that they can exist in the absence of a specific physical realizer (because they might be constituted by something else). This is similar to how a wheel can be made out of wood or rubber. The definition simply amounts to a multiple realizability claim.
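
    For readers who think in code, the multiple realizability point is just the familiar split between an interface and its implementations. Here’s a minimal illustrative sketch in Python of the wheel example (the names are mine, not anything from the SEP article):

    ```python
    from abc import ABC, abstractmethod

    class Wheel(ABC):
        """What makes something a wheel is its functional role, not its material."""
        @abstractmethod
        def roll(self, distance: float) -> float:
            ...

    class WoodenWheel(Wheel):
        def roll(self, distance: float) -> float:
            return distance  # one physical realizer: wood

    class RubberWheel(Wheel):
        def roll(self, distance: float) -> float:
            return distance  # a different realizer, same functional role

    def travel(wheel: Wheel, distance: float) -> float:
        # The cart only cares about the role, not the constitution.
        return wheel.roll(distance)

    # Either realizer is sufficient to play the role; neither is necessary.
    assert travel(WoodenWheel(), 10.0) == travel(RubberWheel(), 10.0)
    ```

    Nothing in the sketch lets a wheel exist without some realizer; it just means no particular realizer is required.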

    I also think it is useful to divide functionalism into a semantic and ontological doctrine. As an ontological doctrine, functionalism will be (at first glance) difficult to reconcile with illusionism, to the extent that illusionism drifts into eliminativism. In order to provide a complete ontological reduction of mental states to functional states, one must first start with some conception of what the ontological basis for a mental state is to begin with. On phenomenal realism, this would just be qualia, which we can identify or grasp through an acquaintance relation. But on illusionism, it’s not clear that there is anything to grasp. We might try to argue that we have some privileged kind of access to physical brain structure, and that functionalism is just the doctrine of reducing these ontological states to functional states, but this seems implausible.

    Far more plausible is the semantic doctrine. If we interpret functionalist theories as being semantic in nature, then the main role of such theories is to explain all discourse about mental states by providing the correct ontological referents for mental state terms. On the semantic interpretation, we shouldn’t think of functionalism as being a theory that [ontological entity A is actually ontological entity B] but rather the theory that [talk about a mental state is actually talk about ontological entity B]. It’s only if you start with the preconception that we already have some solid grasp on what the ontological referents for mental terms are that you run into the trouble of fitting functionalism in with such preconceptions. But that’s not what functionalism as a semantic doctrine is about. I feel like much of the disbelief concerning functionalism stems from misinterpreting it to be an ontological doctrine. While a functionalist account can be purely ontological in that sense, it doesn’t have to be.

    It’s also not surprising that the semantic doctrine should incorporate multiple realizability. Since talk about mental states typically happens at a high level of physical abstraction, of course it will be reasonable to expect that the ontological referents of our mental terms are equally high-level states.

    1. Okay Alex, I may have been using the “functionalism” term in an overly loose way. Perhaps the “computationalism” term would have been more appropriate, though that term troubles me as well. Few perceive the brain to be as computational as I do, though I also consider those who wear this banner to take a magical step. In any case let me try to explain myself.

      I can’t disagree with functionalism given that certain things can semantically be said to function like other things to some degree. Yes a wooden wheel will not be the same as a rubber wheel, though in many regards it can be said to function the same. Furthermore where it does not function the same it’s simply not functional in that sense. Thus when strictly interpreted it’s impossible for functionalism to ever be wrong — in a causal world this is true by definition.

      As I understand it there has long been a push to say that the brain is nothing more than a standard computer, and thus the mind must be like information that this computer processes. Thus it’s argued that the more a computer is programmed to speak like a human, for example, the more that it will become something that’s essentially a conscious speaker of the human language. This would be a place where the functionalism tautology is implemented. John Searle most famously attempted to counter this perspective with his Chinese room thought experiment. He and others seem to have failed miserably however given the modern prominence of his opposition. I hope to do better.

      Searle tried to demonstrate that even if a future computer were to pass a robust Turing test (not that he believed one ever would pass such a test), it still wouldn’t “understand” in a conscious capacity. I suppose you know the details of his Chinese room thought experiment. I suspect it was too complicated however and left people with too many ways to claim that it doesn’t matter. In the end I don’t think Searle fully grasped the non-causal nature of his opposition’s beliefs.

      In essence they presume that the brain creates a conscious experiencer by means of processed information alone. In a causal world however processed information should only exist as such by means of mechanical instantiation of some kind. For example, processed information causally animates the function of a computer screen. So the question then becomes, what medium does the brain animate to exist as a conscious experiencer? What is consciousness made of?

      Beyond that question however, am I wrong to conclude that processed information alone cannot exist as a phenomenal experiencer of existence in a causal world? If the right set of markings on paper were properly converted into another set of markings on paper, is there causal reason to believe that something here would thus feel what you do when your thumb gets whacked? Or rather would that second set of marked paper need to inform some sort of phenomenal mechanism, as I presume happens in the brain?

      1. Eric,

        I do not see computationalism as being a theory of substrate-less mind or entailing that we don’t need physical mechanisms to implement mental states. When (physicalist) computationalists talk about information, they are not referring to some abstract entity which is not grounded in physical mechanism. They’re just using it as a shorthand for a particular kind of physical structure that they have in mind. It’s the same thing with functionalism. When we talk about “wheels” for example, we are not referring to some abstract structural entity which isn’t physical, but nor are we talking about a specific physical mechanism. As you noted, a wheel could be made of rubber or wood and vary in a lot of other ways. The word ‘wheel’ just picks out a huge set of physical configurations, all of which share a similar kind of property (call it ‘wheelness’).

        Similarly, the phrase “processed information”, in this context, is meant to serve as a substitute for a huge set of physical properties, instantiated by different physical mechanisms, all of which can be said to share the same informational property. It is just shorthand. The computationalist is saying something that amounts to this: “All these physical configurations in the humongous set [x,y,z…] would count as being minds” and uses the term information to roughly pick out the desired mechanisms (because it would be impossibly exhausting to list them out individually one by one). The abstract phraseology of the computationalist is just an intension; the real meaning of what they are trying to say is in the extension.

        As for the thumb-paper analogy. That’s a complicated question that depends on a lot of factors. I think naive computationalism isn’t going to do much work as a semantic account of mental phenomena. We will need to introduce a lot more ad hoc parameters to account for the idiosyncrasies of folk psychological language use. The China brain (not the Chinese room) scenario is also a great example of this and shows that discourse on mental phenomena likely incorporates many quirky features, such as being limited to a particular level of physical scale. I think computationalism is much more feasible as an ontological account of phenomenal consciousness (which is probably why Chalmers spent so much time and effort trying to get a counterfactual version working).

        But again, keep in mind that as an ontological account, it will seek to provide the necessary and sufficient physical parameters for when phenomenal consciousness comes about. Again, talk about computation and information is not meant to be talk about some abstract entities, but rather shorthand for the kinds of physical parameters that the computationalist thinks will be relevant for consciousness.

        1. Alex,
          I agree that “computationalists”, or “functionalists” (and I quote them because these are merely nominalistic markers of a position that may ultimately fail strict interpretations of what those terms suggest), would like to think that they aren’t proposing substrate-less mind. But once we get into the details of their position itself, the question is, do they succeed in proposing a kind of consciousness that actually is substrate based?

          First let me say that I’d be very pleased if what you seem to be suggesting were true. I’d love it if these people could say that they believe in substrate based mind though weren’t yet sure of the specific causal substrate in any given case. Thus they might work on this question experimentally to help their intension be demonstrated through extension based dynamics. I’ve come to doubt this however.

          Consider this. You write a coherent sentence on paper. This is clearly “information” in the sense that you understand it and could even give it to someone else who understands it. But this information should only exist in that capacity given that you or someone could read it. Thus here “information” seems dependent upon the existence of an instantiation mechanism, which is to say a reader of your note. A scanning computer could effectively be said to “read” it as well, though obviously in a different way. If your note were buried in the ground never to be “read” by anyone or anything, then it seems to me that those markings should not still be considered “information” in that sense, even if they otherwise display the causal function of ink on paper.

          Similarly your computer was designed to process information such that certain components of it will animate the function of your computer screen. But what if the information which is meant to animate your screen doesn’t make it either there or to any interpreter of it? Should it still be considered “information” in the intended sense, or rather, like a written note that’s never read, “just causal stuff”? I’d say just causal stuff rather than screen-animating information. So here I’m defining “information” in terms of what it animates rather than something that exists beyond that substrate. Furthermore if we’re talking about how the brain might create “thumb pain”, it seems to me that this should be a useful definition. As in the other two cases there’s a specific job to be done here.

          My understanding is that the platform of “computationalism” violates this rule since phenomena like “thumb pain” are proposed to exist without the use of mechanism based information (which is to say substrate that renders information to exist as such). Thus instead of brain information animating something that creates an experiencer of thumb pain, this is proposed to exist by means of information processing alone. That’s not true in the written note situation since the reader will be the mechanism which makes it informational in the intended sense, or in the computer screen situation since the screen will make it informational in the intended sense. Another example I’ve sometimes used is VHS and Betamax tapes — the “information” associated with them should only exist as such in respect to a mechanism that’s able to unlock it. Under the premise of causality I don’t know of a single case of information that should be considered substrate independent.

          I have no doubt that when a person’s thumb gets whacked, that associated information then becomes neurally transmitted to the brain. Furthermore I have no doubt that the brain then processes such information into new information. The question is, should this conversion itself create something that experiences thumb pain in a causal world? Thus if markings on paper correlated with the information that your whacked thumb sends your brain were algorithmically converted into markings on new paper correlated with your brain’s response, would something here experience what you do when your thumb gets whacked? I consider this notion to violate causality because those initial markings on paper should not exist as “whacked thumb information” given that no whacked thumb instantiation mechanisms are proposed to exist here. As in the case of a written note, a computer screen, a Betamax tape, and all others that I know of, instantiation mechanisms should be required for markings on paper to exist in a “whacked thumb” capacity. But if the output set of marked paper were then fed into an appropriate machine that was armed with the sort of physics which the brain uses to create “thumb pain” (whatever that physics may be), then yes, this machine should use that processed information to create something which causally experiences what you do when your thumb gets whacked. I’m saying that the machine would animate that physics-based element of reality (whether electromagnetic fields or whatever) to exist as the substrate of a conscious experiencer.

          Does that reasoning make sense? Or if not then where do you consider it mistaken?

          1. Hi Eric,

            The first thing to do is to clear up what exactly you think is missing in the “paper not being read” case that is nonetheless present to instantiate consciousness (or at least semantic content) in the case of the “paper being read” scenario. I can think of two different ways to interpret this complaint:

            1. A piece of paper with syntactic symbols bears semantic content (what you call information) if a conscious observer reads it or becomes aware of it. What makes the letters on the paper meaningful is just that it is interpreted to mean something by a conscious observer.

            2. A piece of paper with syntactic symbols bears semantic content if those symbols causally participate in the right kind of functional mechanism.

            1 would appear to be begging the question in that it already assumes that a generic computational state can’t instantiate consciousness, so that’s a non-starter as a complaint. This leaves the second complaint. What the examples of a human being reading the paper and a computer scanning the paper and translating its contents to a monitor have in common is that they are both functional mechanisms. In other words, 2 is just the criticism that if the markings on the piece of paper don’t do anything useful, then they don’t count as bearing semantic content.

            I don’t necessarily see this as a bug, however. It’s not clear why a representational (semantic) state should have to be always embedded in a functional mechanism. In fact, that seems downright unintuitive. If I took a human brain outside of a human body and placed it in outer space somewhere where it received no inputs and produced no useful outputs, it’s not obvious that (for the time it remains alive) this brain wouldn’t be conscious, even if for all intents and purposes it is functionally equivalent to your unread piece of paper (it’s not doing anything).

            Once you accept that brains can be conscious despite lack of external functionality, it’s not clear what the second objection to computationalism amounts to. Of course, it might be true in our universe that you need a particular physical mechanism to create phenomenal consciousness. Maybe phenomenally conscious beings can only exist in a carbon substrate, or through EM fields. Or maybe not. The hypothetical computationalist in question is proposing that all kinds of mechanisms (any mechanism that produces the right kind of computational state), even in theory highly complex arrangements of pieces of paper, will produce phenomenally conscious entities. I’m not seeing an argument, outside of 1 and 2 (which I already addressed), for why this stance is mistaken. Merely stating your belief in something like CEMI doesn’t defeat (this kind of) computationalism.

          2. Hi Alex,
            I didn’t mean to imply that something which reads words on paper will instantiate consciousness (or at least semantic content). I also didn’t mean to imply that a living brain in space isolated from input/output information would thus not be conscious. It’s interesting to me that what I’ve said gave you those impressions, since I consider them contrary to my position. I do remain hopeful however that you’ll be able to effectively assess my position itself, and whether you end up finding it effective or rather find legitimate problems for me to potentially overcome. Either way you’d be doing me a wonderful service!

            Observe that I’m defining the term “information” such that it does not exist independently of what’s informed. Whether in the capacity of a written note, or the stuff that your computer sends its screen, or a Betamax tape, their informational components shall exist here in terms of a thusly animated substrate — a “reader”, a “screen”, a “Betamax player”, and so on. Thus an entity which is causally informed. Without such an entity to inform they shouldn’t be considered informational in the sense that I’m using the term, and even though any such “non-information” should still exist as causal stuff in general. This is because the intended job wouldn’t be done. So if we have an intended job of creating an experiencer of what you know of as feeling thumb pain, there should thus be a substrate that exists as that experiencer just the same. Can you think of something that may be said to function by means of substrate-less information? In a causal world I don’t see how that would be possible.

            Here I anticipate the objection that another person might just as well use a broader definition of “information” such that it can exist without animating what it’s intended to. Yes, one could say that text that is never read, or a non-completed computer transmission to its screen, or a Betamax tape that’s been forced into a VHS slot, all exist “informationally” in themselves. I simply don’t consider this to be as useful a definition in general, since the intended action will obviously not occur. And since it’s the action of creating an experiencer of what you feel when your thumb gets whacked that’s being considered here, surely we should avoid a definition on which the information exists regardless of whether such an effect occurs, since that would imply that it actually would.

            I see two ways of potentially challenging my position. One would be to demonstrate that computationalists actually do believe in some sort of consciousness substrate that brain information animates, though apparently haven’t yet identified any parameters for it. I’d love for this to be true! Thus I’d be able to retire my thought experiment as something that doesn’t apply to their position. But what sort of common substrate parameters might exist between brain function and marked paper properly converted into more marked paper? Regardless the theorized commonality would itself exist as the experiencer and so this idea might thus be experimentally verified. Here the popular “mind uploading” concept wouldn’t just depend upon proper coding function, but rather proper coding function which animates substrate that causally experiences its existence in associated ways. Not impossible! (I’ve merely mentioned neuron produced electromagnetic fields as an example of something that could potentially exist as such substrate so that one might better grasp a legitimate answer as I see it.)

            The other way that I know of to potentially show that my assessment is wrong would be to demonstrate why causality should permit an experiencer of thumb pain to exist when paper with certain markings on it becomes properly processed into more paper with the right markings on it. I have no idea how one might effectively argue this in a causal capacity however. What might such an experiencer exist as in terms of both brain function and marked paper converted into more marked paper?

            I should also say that I do realize that you’ve been arguing here for the causal legitimacy of a position that you don’t actually hold yourself. Regardless of this specific debate I do hope that we will be able to have some discussions regarding your favored position itself at some point. I’ve found it more difficult than I’d like to find people who are both willing and able to effectively discuss this sort of thing with me. Perhaps you as well? In any case you might use me as a medium from which to display the effectiveness of your ideas (should I be up to the task of course), as well as the converse for me.

          3. Hey Eric,

            Apologies for the late reply! I got absorbed with some other things, and should be much more prompt with future replies. You write:

            “I didn’t mean to imply that something which reads words on paper will instantiate consciousness (or at least semantic content)”

            In that case I must admit that I’m not sure that I understand what your objection amounts to. In order to refute computationalism, you would have to show that some necessary ingredient for consciousness is missing in the unread paper example case, where it is stipulated that computationalism would entail consciousness.

            You say that the unread paper case lacks the right kind of information, but it’s not clear why this matters if your idiosyncratic definition of information isn’t analogous to semantic content.

            “So if we have an intended job of creating an experiencer of what you know of as feeling thumb pain, there should thus be a substrate that exists as that experiencer just the same”

            Why? You haven’t presented an argument for this. By parsing your language, it appears you define informational capacity in terms of a specific kind of physical substrate, the criteria for which remain unspecified (whatever separates the Betamax tape player from the unread piece of paper). I tried to offer some criteria in my two points, but it seems you disagree with them. I have a feeling the real criteria will be really peculiar to your conceptual scheme and not at all obvious to outsiders, so I think it would be useful if you could identify them.

            In any case, you haven’t presented an argument for why consciousness can’t exist in the absence of your kind of substrate. Where “your kind of substrate” just means whatever matches the (unspecified) criteria you feel are important. So you need to establish some reason for why the generic computationalist should care about any of this, without equivocating by appealing to the ordinary language meaning of the term “substrate-less”.

            “Can you think of something that may be said to function by means of substrate-less information?”

            The physical piece of paper is the substrate which instantiates consciousness on (certain versions of) computationalism. You have to present some argument for why this can’t count as a legitimate substrate.

            “But what sort of common substrate parameters might exist between brain function and marked paper properly converted into more marked paper?”

            The common parameter is that both the brain physical states and the paper physical states undergo certain sequential processes which implement certain automaton or computational dynamics.
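
            To make “implementing the same computational dynamics” concrete, here is a minimal illustrative sketch (mine, purely hypothetical) of one and the same toggle automaton realized over two different substrates:

            ```python
            # The same two-state toggle automaton implemented twice.

            # Substrate A: string states in a dictionary.
            TRANSITIONS_A = {("off", 0): "off", ("off", 1): "on",
                             ("on", 0): "on", ("on", 1): "off"}

            def run_a(inputs):
                state = "off"
                for i in inputs:
                    state = TRANSITIONS_A[(state, i)]
                return state

            # Substrate B: integer states in a nested list.
            TRANSITIONS_B = [[0, 1],  # from state 0: input 0 -> 0, input 1 -> 1
                             [1, 0]]  # from state 1: input 0 -> 1, input 1 -> 0

            def run_b(inputs):
                state = 0
                for i in inputs:
                    state = TRANSITIONS_B[state][i]
                return state

            # Different constitutions, identical transition structure.
            assert run_a([1, 0, 1, 1]) == "on" and run_b([1, 0, 1, 1]) == 1
            ```

            Whether that shared structure could suffice for consciousness is of course exactly what is in dispute.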

          4. No worries on timeliness Alex. In truth the only thing that I truly value is receiving earnest responses from reasonable people. I’ll take that whenever and wherever I can get it!

            When I brought up the example of you writing some text on paper, the point was merely to demonstrate that this text should only be considered “informational” in the capacity of it being read by you or someone/something else. I even mentioned that it would be informational in the sense that a non-conscious computer could scan what you wrote and thus potentially do various things on the basis of that now valid bit of information. Otherwise no. I truly did not want to imply anything about the consciousness of the note reader. This was just a beginning observation illustrating what I consider to be a useful definition for “machine information”.

            Furthermore if that example sends you away from my intended point, I also used the example of the stuff that your computer sends its screen as well as the case of a Betamax tape. The point remains that I continually find it useful to say that “machine information” does not exist in its own right, but only ever in respect to a machine which is able to use it in the intended sense. Thus for example the machine information encoded on a Betamax tape should not exist when the tape is merely being used as a table shim. I suppose I could say that what’s encoded on the tape of this table shim potentially exists as information in the sense that it could potentially be run through a Betamax machine which thus unlocks its content. As a table shim however this machine information should not be said to exist, as I see it.

            Does that seem like a reasonable thing for someone to assert? Or can you at least withhold judgement on the point I’m making here until after I present the advertised argument against “computationalism”? It rests upon this observation.

          5. Eric,

            Sure, we can define ‘machine information’ in the way you like. I’m still not sure what the criteria for possessing this kind of information are (because I’m not sure what counts as a “machine” in your view) but I at least understand the delineation by example. Of course, the challenge is to demonstrate why this is relevant to the computationalist. Meaning, why machine information is a necessary, but not sufficient, condition for consciousness.

          6. Alex,
            I’ll try to nail down what I mean here a bit more thoroughly to see if it seems reasonable to you, since my argument does still rest upon this premise. So here it’s not simply that I can define “machine information” as I’m doing, but also that it seems quite sensible to do so. From here a violation of what I mean to demonstrate should not just reflect the violation of an arbitrary nominal definition, but rather a void in worldly causal dynamics itself.

            The reason that I’ve been going with “machine information” rather than just “information” here is to help display a fuller causal circle. I’m saying that we shouldn’t consider there to be an informer which exists independently in itself, but rather only given an “informee” which permits an informer to exist at all — a machine that serves as the informed entity. Thus information here depends upon the existence of both a transmitter and a receiver — no receiver means that the transmission shouldn’t be considered informational in a causal world. So the stuff that your computer sends its screen would not be information in the intended sense if there were nothing like a screen which became animated by such a transmission. In that case the transmission may simply be considered “causal stuff” rather than “informational”.

            Though we generally consider machines to have various parts that function together in purposeful ways, that’s not actually part of my argument. Just as rain could inform a machine like myself in various ways, it might also be said to inform a relatively homogenous rock given that it thus gets wet. So then just as rain wouldn’t be informational to a person if there were no such informee present, rain also wouldn’t be informational to a rock if the rock were not present, even if that rock isn’t normally considered a “machine”. I’m merely saying that the informational dynamics of anything, whether or not this involves something with various parts that function purposefully, depends upon the existence of a causally appropriate informee. So a Betamax tape which is being used as a table shim would inform the table given this causal relationship, even though a far more dynamic informational display might be apparent if it were inserted into a working Betamax player.

            Does this seem like a reasonable point for someone to make? If so then I’ll display my argument itself. If not then how would you object to the stipulation that information depend upon the existence of an informee?

          7. Hi Eric,

            Apologies but I’m still not getting the difference between machine information dynamics and causal dynamics. You say that machine information isn’t necessarily functional, but then I don’t see what makes it different from regular “causal stuff”.

            You write that a computer sending outputs in the absence of a screen doesn’t have a receiver, but it will still transmit its outputs to the outside environment (for example, by heating up the surrounding air). Why can’t the air count as an informed receiver if the rock being impacted by the rain counts as one? The only difference between the “computer-screen” case and the “computer without the screen” case is that the latter mechanism serves no functional purpose, but neither does the rain impacting the rock.

            Similarly, what stops us from asserting that part of the paper is an informed receiver taking in thermodynamic outputs from the other parts of the paper? Once we have eliminated functionality as a criterion, it all starts to seem arbitrary to me.

          8. There’s no reason at all for you to apologize Alex, since my argument has essentially evolved on the fly during our discussion. Apparently your examination has helped remove some dead wood from my argument to thus create something more parsimonious! Under this changing landscape your concerns should be expected. I’ll now explain my current stance as effectively as I can manage. If successful then we’ll be able to get into “computationalism” itself.

            The “dead wood” here is essentially “machine information”, since apparently “information” alone works well enough. To begin, from a naturalistic perspective all of reality functions by means of worldly causal dynamics. So here we have a foundation for naturalists to potentially build upon. Then one subset of causal dynamics would be an informational variety. Here the constraint which I’d like you to assess is that an informer cannot exist without an intended informee in a causal world. That’s the condition by which I mean to display that “computationalism” violates causality.

            The “intended” element seems important regarding some of your current concerns. For example in the case of the stuff that a computer creates to potentially animate its screen, I’m saying that this should only be considered informational to the extent that it does animate its screen. Or perhaps informational in that sense if it’s intercepted and animates a different screen? Or perhaps another such scenario? Otherwise it should not be informational in the intended sense that’s generally meant regarding the function of a computer screen. This stuff should still be causal however, and may even be considered informational in respect to the creation of heat, increase of entropy, and so on. When we mean things like this, then yes it would be informational in the intended sense.

            Here one might observe that I have not sacrificed functionality since intended function is baked right into the definition of a given variety of information. So would you agree that it should be considered useful to say that if an intended informee does not exist, then an associated bit of information should also not be considered to exist in that regard?

          9. I agree it can be useful in certain contexts to talk about information in that way. More broadly, when we speak of semantic content we generally have in mind a particular context. It doesn’t make much sense to assert that a computer digital state represents characters on a screen (which in turn represent words in English) if there is no screen attached, or better yet if screens have not even been invented yet.

            So, I think it’s actually more appropriate to say that we are talking about the meaning of some computational state, as opposed to its informational capacity. Whenever you bring up intention, you are going to automatically invoke semantics. ‘Information’ is somewhat of a vaguer term, in that there are many alternative conceptions and definitions which are completely syntactical, like Kolmogorov complexity or the measure of information entropy. These physical definitions of information don’t depend in any way on the intention behind the characterization of that computational state, or the functional role it might play.
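
            For instance, Shannon entropy can be computed from symbol frequencies alone, with no reference to any informee or to what (if anything) the symbols mean. A minimal sketch:

            ```python
            from collections import Counter
            from math import log2

            def shannon_entropy(s: str) -> float:
                """Bits per symbol, computed purely from symbol statistics."""
                n = len(s)
                return -sum((c / n) * log2(c / n) for c in Counter(s).values())

            s = "the cat sat on the mat"
            print(shannon_entropy(s))        # ~3.01 bits per symbol
            print(shannon_entropy(s[::-1]))  # reversed gibberish: identical value
            ```

            The measure is indifferent to whether anyone, or anything, ever reads the string.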

            Personally, I wouldn’t use such language because it might be confusing, but in any case it doesn’t matter as long as you specify what you mean (which you did).

          10. Okay Alex, it sounds like you’re in agreement that it should be useful to say that if an intended informee does not exist, then an associated bit of “information” should also not be said to exist as such. And I do appreciate that you’ve avoided writing me a blank check on the matter when you said “it can be useful in certain contexts to talk about information in that way”. Yes. And to be clear the context I mean here is that an informer should only exist as such if/when a causally appropriate informee becomes so animated to facilitate such an informing. So what’s sent to a computer screen will only be informational to the extent that it animates that screen. The Betamax tape will only be informational to the extent that it animates a Betamax player. Or when used as a table shim, to the extent that it animates the table. Rain may be said to be informational to a rock to the extent that it’s animated by wetness. Or to a person to the extent that the rain is understood to exist (or perhaps to the extent that the person gets wet, or whatever is meant at the time). Without a causally appropriate informee, an informer will not exist as such, and even though it will remain causal stuff. My plan is to use this observation in conjunction with my thumb pain thought experiment to demonstrate a causal void in the premise of “computationalism”.

            To begin I’ll say that it’s quite well known in science today that when someone’s thumb gets whacked, neurons and such convey information about this event to the brain. Furthermore one might also say that this should be considered “information” in the sense that a causally appropriate informee should exist, which is to say a brain which then goes on to process that information in various ways for associated function.

            Thus if there were sufficient markings on paper which were correlated “perfectly” with biological whacked thumb information, the question to ask would be: should this also be considered informational in the intended sense? As demonstrated above these markings should only be considered informational to the extent that a causally appropriate informee exists. Fortunately however such an informee does exist in my thought experiment, namely a computer able to scan those pages of marked paper. So good. Let’s even posit that this informee computer is able to scan them just as fast as a brain accepts neural whacked thumb information.

            (Because I’ve just mentioned the distinction of “correlation”, let me pause here to say that my coming conclusion does not rest upon the well-known observation that correlation is not the same as causation. I consider this point tangential. Though my clock correlates reasonably well with my perspective on the sun’s position in the sky, this doesn’t mean that the sun causes my clock to function as it does. So I’m not faulting computationalism on the grounds that marked paper couldn’t do its proposed job because it’s merely correlated with brain information rather than existing as brain information. In fact I actually do suspect that marked paper could do what brain information does, though only through a sufficiently complete chain of causality.)

            Next the computer processes its scanned information to print out a new set of paper with markings that correlate with the brain’s now processed whacked thumb information. In the brain’s case it seems to me that the processed result should exist informationally in the sense that it should go on to inform the body in various ways. Our essential concern here however is what’s commonly known as “the hard problem of consciousness”. We shouldn’t need to answer that question specifically right now, though the informer/informee lesson remains: for a phenomenal experiencer to result from such processing, the “processed information” will need to inform something which exists as that phenomenal experiencer. Otherwise there is no causal information dynamic here at all.

            This is where I consider computationalism to stop short of a complete causal chain. In reductio ad absurdum fashion, its premise suggests that something would feel what a whacked thumb causes you to feel if certain marks on paper were converted into other marks on paper… though without those marks informing anything that could be said to exist as a phenomenal experiencer of existence. I consider this absurd, since a given type of information should not exist as such without an appropriate informee to be animated by that information and so exist as the phenomenal experiencer.

            In order to rectify this causality void they’d need to either assert that they aren’t yet sure what would need to be informed here, or else propose some causally appropriate aspect of brain function to exist as a phenomenal experiencer. As I understand it however, they’re prevented from such a path by having long invested in the notion that the more a standard computer is programmed to seem like it (for example) speaks the English language, the more it will phenomenally feel like a speaker of the English language. Thus I suspect that “computationalism” will fail rather than become reformed if/when evidence emerges regarding the theoretical informee that brain information animates to exist as a phenomenal experiencer of existence. Certain beliefs seem to become too ingrained to ever be given up.

            Alex, I recently noticed you mention at The Splintered Mind that you have a potential solution for the hard problem of consciousness. I don’t know its details and don’t mean to bring this up for general discussion right now (unless you’re up for it, though either way hopefully we’ll get into your potential solution sometime). I mention the matter now more regarding the question of objectivity. If your proposed solution can in some sense be reduced to “information sans causal informee”, then you should naturally be biased against the argument I’ve just presented. Or conversely, biased for it. Or perhaps something in the middle. Ideally however one would fight such biases.

          11. Hi Eric,

            I understand the argument, but I’m not sure how you ended up with the conclusion that the piece of paper won’t instantiate an informee. We can draw an easy analogy between the two causal chains (paper and brain). In the brain case, the information of the afferent neurons will stimulate some pain-related brain state, which in turn will activate some appropriate physical behavior: Pain sensory information > Brain neural state > Bodily behavior.

            Or P > N > B

            If phenomenal consciousness is immaterial, then this causal chain will get slightly more complicated. However, I understand that you identify consciousness with some EM material substrate, so this simplifies matters in that we can identify the ‘N’ state with the appropriate EM field state. In the brain example, the N state will be analogous to the conscious informee.

            Likewise, nothing stops us from having a complex chain of paper, wherein the markings on the pieces of paper correlate to each of the above states. So we’ll have a paper computational state which correlates to P, and another one correlating to N, and another to B and so on…

            We get P’ > N’ > B’

            Thus, the answer to your question is that the conscious informee is the N’ paper state, which is ordered in such a way as to approximate the relevant EM field parameters. On computationalism, if you’re a conscious entity you could be a complex EM field arrangement, or a complex paper arrangement, or some other kind of complex structure which instantiates the appropriate computational states.
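
            If it helps, the multiple-realizability point can be put as a toy sketch (purely my own illustration; the state labels and substrate descriptions are invented for the example). One abstract transition structure, two different “physical” encodings of its states:

                # One abstract causal chain, P -> N -> B.
                transition = {"P": "N", "N": "B"}

                # Two hypothetical realizations of the same abstract states.
                em_realization = {"P": "afferent spike train", "N": "EM field pattern", "B": "hand withdrawal"}
                paper_realization = {"P": "markings, page 1", "N": "markings, page 2", "B": "markings, page 3"}

                def run_chain(start, realization):
                    state = start
                    while state in transition:
                        print(realization[state], "->", realization[transition[state]])
                        state = transition[state]

                run_chain("P", em_realization)     # the brain/EM version
                run_chain("P", paper_realization)  # the paper version: same abstract states

            On computationalism it’s the shared transition structure, not the particular substrate entries, that does the explanatory work.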

            About my potential solution to the hard problem: Yes I’m actually writing up an article on this at the moment. I should finish it soon, in a couple of weeks or less (hopefully). I want to get it sent in to a journal somewhere before I start giving out any links, but if you’re still interested I’ll be sure to send you a copy when I do.

          12. Hi Alex,
            I suppose that the reason I haven’t considered the second set of marked paper (N’) to exist as the informee from the computationalist perspective, is that my understanding of their position is that it’s actually the processing of information that creates something which phenomenally experiences its existence. This is to say a conversion from a properly marked first set of paper to a second, not the second set’s existence in itself. I’m no expert on the matter however so I’d welcome an expert to weigh in. (I’d love it if Dennett or Frankish would do so and thus publicly admit that they believe “thumb pain” can occur by means of the right marks on paper in itself, or alternatively certain marks on paper converted to the right other marks on paper. 😄)

            There are other issues to consider regarding the prospect of marked paper itself existing as a phenomenal experiencer. Wouldn’t this mean that this paper/ink entity would have such a phenomenal experience for as long as those marks remain sufficiently preserved? Weird! Furthermore this relationship seems to run afoul of the “correlation ≠ causation” point that I excused computationalism from last time. Why should we presume that a correlation between one thing (processed brain information) and another thing (marked paper) will result in the same causal function, given that correlation merely suggests causation? For example, regardless of how well the sun’s position in the sky may correlate with time, clearly the sun’s position does not cause time.

            This time I’ve decided to try my argument from the opposite direction. That is, to display a potential solution that I do not consider to inherently violate causality, and then contrast this proposal with either of the forms of computationalism that we’re now considering.

            As you know I’m a big supporter of Johnjoe McFadden’s proposal that the exclusive causal substrate by which existence may be phenomenally experienced exists in the form of certain parameters of electromagnetic radiation. Specifically these parameters are proposed to be created by means of the right sort of synchronous neuron firing. I consider this idea to be potentially causal because here brain information would animate the function of a causally appropriate substrate or informee — EM radiation. So just as computer information animates a computer screen for its picture to exist as it does, theoretically brain information animates an EM field for an associated consciousness to exist as it does. I don’t know of a second element of the brain that seems appropriate for this job.

            Conversely with marked paper converted into more marked paper, or even the right marked paper in itself, a causally appropriate informee should not thus be animated to phenomenally experience its existence. Instead the processed set of marked paper should need to inform something that would itself exist as a whacked thumb experiencer. It’s true that the printer used to mark the paper could be said to inform the paper, and so the paper could be such an informee in that sense, though here only in the sense that it gets some marks on it. It’s somewhat like encoding potential information on a Betamax tape — a Betamax player would still need to be animated by such information in order for its content to be unlocked.

            Anyway that’s what I’ve come up with so far to suggest that “computationalism” rests upon a non-causal premise. In truth however I suspect that this sort of reasoning alone won’t be enough to straighten things out. Hopefully soon a falsifiable consciousness proposal will become experimentally validated conclusively enough to essentially rid science of competing notions in general. Furthermore beyond McFadden’s I don’t know of a second consciousness proposal on the market today which would be possible to disprove in itself. I wonder if you know of any other falsifiable proposals?

            On your potential solution for the hard problem of consciousness, yes I would be interested in considering and discussing your proposal whenever you’re ready. Without good feedback to work from, such proposals should be far less successful. Hopefully I can help with that just as you’ve been helping me with this.

    2. Thanks for taking a shot at the substrate clarification Alex. I used to do it regularly, but just can’t muster the energy anymore. Easy to forget that not everyone has seen all those previous conversations.

      I’m not sure if I follow the distinction between the ontological and semantic versions of functionalism. It seems like any talk of something like this will be semantic in the sense you describe, since it’s a description and not the thing in and of itself, particularly for any system to which we can apply multiple levels of description, or describe from multiple perspectives. But then when would we ever have an ontological version of a description?

      Fully possible I’m missing key points though.

      1. Mike,
        On semantic rather than ontological functionalism, from my interpretation Alex was essentially demonstrating the usefulness of the nominalist platform. You and I have been discussing the virtues of nominalism recently as well. That is until a nominalist goes far enough to claim that our words themselves don’t exist as language elements. I don’t recall anyone making such a claim myself, though I will keep an eye out for that extremity of anti-platonism.

        Given my derogatory attitude towards “functionalism” above, Alex seems to have presumed that I was taking this term as something “real” rather than “nominal”. I agreed that perhaps I had been using it too loosely, and that maybe “computationalism” would be more representative of the position that dismays me. And indeed, I mentioned that I actually agree with functionalism in a technical capacity since it can never be false. I went on to explain that my actual concern is that the term is sometimes used to help support a position that I consider non-causal. Furthermore I mentioned some details such as the essence of my thumb pain thought experiment. He can weigh in if he likes. Few seem to however. My suspicion is that many would love to effectively dismiss my concerns, though haven’t yet figured out how to do so. I’ll continue pushing my agenda as long as I find it sensible, just as I expect you to for yours.

        1. Eric,
          The problem with your thumb pain thought experiment, similar to Searle’s Chinese Room, is that it just amounts to an incredulous stare. If information processing (distilled causality) is non-causal, you should identify exactly where you think it goes non-causal. I suspect a lot of people are like me: we’ll only spend so much time dealing with incredulity if there’s no chain of reasoning behind it.

          1. Mike,
            It seems to me that I’ve continually identified where I think this position goes non-causal. To be specific, it’s the notion that the brain creates a phenomenal experiencer by means of nothing more than converting certain information into other information. Conversely in a causal world it seems to me that computers do what they do by means of animating associated output mechanisms. I don’t know of a single case where information converted into other information may be said to be output in itself. Instead there always seems to be a physics-based instantiation for whatever action a computer may be said to do. I no longer seek to overcome your opposition, since I realize that you oppose my reasoning here just as strongly as I support it. What I truly seek with my thought experiment is to help educate people in general about what’s being proposed. I suspect that many, and even strong supporters, do not grasp what’s being proposed nearly as well as you and I do.

            One dream of mine would be for my thought experiment to “go viral” some day, such that even Dan Dennett would be forced to address it in some way. So what do you think he’d say? Would he effectively assert that IF enough paper with the right markings on it were properly converted into other paper with markings on it, then something here should indeed experience what he does when his thumb gets whacked? If so then I suspect he’d feel that a reasonable portion of his popularity would be lost. Thus insufficient integrity might lead him to offer only squishy responses. But you know him far better than I do. How do you think he’d respond if people in general were curious?

          2. Eric,
            “it’s the notion that the brain creates a phenomenal experiencer by means of nothing more than converting certain information into other information”

            A lot depends on what you mean by “phenomenal” here. If you mean it in the strong sense I discuss in the post, then most functionalists would deny it exists in that strong sense. If you mean something more limited, then what separates what you’re talking about from perceptual information?

            Hopefully I don’t have to defend that perceptual information can be produced by information processing. And I’ll again note that information and information processing are 100% physical 100% of the time. It’s why your computing devices get hot.
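
            As an aside, there’s even a hard physical floor under that claim: Landauer’s principle says erasing one bit must dissipate at least kT ln 2 of heat. A quick back-of-the-envelope sketch (my own illustration, not from any of the papers under discussion):

                import math

                BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

                def landauer_minimum_joules(temp_kelvin, bits_erased):
                    """Lower bound on heat dissipated by erasing the given number of bits."""
                    return BOLTZMANN * temp_kelvin * math.log(2) * bits_erased

                # Erasing a gigabyte (8e9 bits) at room temperature: tiny, but never zero.
                print(landauer_minimum_joules(300.0, 8e9))  # roughly 2.3e-11 joules

            Real devices dissipate many orders of magnitude more than this bound, but the point stands: there is no information processing without a physical difference somewhere.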

            I think Dennett’s response would be similar to his response to the Chinese Room, that it’s a faulty intuition pump.
            https://philosophybites.com/2013/06/daniel-dennett-on-the-chinese-room.html

          3. Mike,
            It sounds like you think Dennett would have just as much integrity as you do (and would thus plainly state that if a vast computer were to scan paper with certain markings on it and then process this to print out paper with a proper second set of markings, then something here should thus experience what he does when his thumb gets whacked). I’d hope for such integrity as well! Unlike in Searle’s Chinese room however, this might not go well for him.

            It seems to me that my thought experiment is less vulnerable to the sort of attacks which are raised against the Chinese room. First observe that unlike Searle I don’t need to presume the existence of a standard computer which can pass a robust Turing test. Here he essentially begged the question in the service of his opposition. More importantly however, I think Dennett erroneously sells an irrelevant point of the CRA as a faulty intuition pump. The magic, for me, isn’t that it should take Searle many lifetimes to run the lookup code to answer a question in the manner that a Chinese speaker would (which is to say by means of Schwitzgebel’s innocent/wonderful conception of consciousness, as endorsed by Frankish). It’s that, given what this work entails, nothing here should phenomenally understand what’s being asked and then provide the sort of answer that a true Chinese speaker might. That’s the part I consider magical.

            My own thought experiment boils down the essence of “computationalism” in order to help people understand what it means, and even if supporters do not yet grasp any associated void in causality. The essence of this causality void is that because “machine information” can only exist as such in respect to the causal entity that it animates, it’s wrong to presume that information processing alone should create a phenomenal experiencer. Instead the right sort of substrate should need to be animated, as in the case of text on paper, a computer screen, a Betamax tape, and so on as I’ve been discussing with Alex.

            Furthermore Searle also didn’t grasp a plausible solution, and so couldn’t give people a reasonable example of what it might take for a human-made computer to phenomenally experience its existence. As consistent with my dual computers model of brain function, it could be that brain information animates faint electromagnetic fields in the head which themselves exist phenomenally. That’s what I suspect consciousness is made of.

            Regarding perceptual information, this depends upon how you’re using the term. I think it’s reasonable to use it in both non-conscious and conscious ways however. My brain can perceive information about my breathing and so algorithmically alter my pulse in appropriate ways, for example. Or it might even function given neural information associated with light that enters my eye. Each of these would display non-conscious perceptual information. But if light that enters my eye were processed in a way that animates the sort of physics which creates a phenomenal experiencer of light, then this should instead be of the conscious variety. Here this sort of perceptual information should exist as potential input to a conscious form of computer, namely a visual image that I might thus see. I might even process such information in a conscious capacity, which I call “thought”. While a thinker should process information in series, non-conscious perceptual information tends to be processed in the brain in massively parallel ways.

          4. Eric,
            By “perceptual information” I just mean information gathered through the sensory systems about the environment, whether coming in at the moment or being activated later. It’s information in the same sense as what a self-driving car collects about its operations and environment for the decisions it makes.

            So the original question still pertains. If you mean something more than this with “phenomenal”, but you don’t mean the attributes attacked by Dennett (absolutely private, ineffable, intrinsic, with direct acquaintance), then what do you mean? You can’t just say “conscious” here, because that’s just a synonym. And you can’t say “EM fields” because that’s just begging the question for your preferred implementation. What about these entities requires that implementation?

            If you do mean classic qualia, then I agree information processing can’t provide that, but that’s like saying George Lucas can’t provide a real lightsaber. I don’t think classic qualia exist. If you mean functional perceptual information, then that’s just information, which information processing, by definition, should be able to provide.

            So no magic required in functionalism, unless you bring it in yourself.

          5. Okay Mike, if you’re using “perceptual information” for the stuff taken in by a self-driving car, then I’d say it’s entirely non-conscious. Nothing wrong with that.

            Above I only referenced what I mean by “phenomenal” in passing, as something which Keith Frankish does not dispute the existence of. When I use this term, or consciousness, or qualia, or subjectivity, or something it’s like, and so on, I mean Eric Schwitzgebel’s innocent/wonderful conception of consciousness. That’s all I’ve ever meant in the end. Observe that it’s so innocent that it doesn’t even bake in worldly causal dynamics — I add that part later by means of my own metaphysics. http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf

            I suppose it’s possible that Schwitzgebel, Frankish, and I accept this conception of consciousness, though you don’t. In that case however, might you explain what you consider problematic about the idea?

            The thing to understand about me is that I’m just as illusionist or eliminativist as you, Dennett, and Frankish are, but beyond that I consider one conception of consciousness that each of you accepts to also mandate a void in causality. So this is effectively a more stringent version of illusionism, not that I’ve taken the title itself.

            As I’ve been saying, in a causal world machine information should not be considered to exist independently of what it animates. Thus text on paper, or the stuff that your computer sends its screen, or a Betamax tape, and so on, should merely exist as causal stuff rather than machine information when there is no appropriate machine to unlock their content. So it should not be causally possible for markings on paper which are correlated with the machine information that your whacked thumb sends your brain to exist as machine information as well, given the void of an appropriate machine. But if those markings were converted into other markings correlated with your brain’s response, and if these new markings were then fed into a machine armed with whatever physics the brain uses to create an experiencer of thumb pain, then yes, the causal circle should become completed. Something here should thus experience what you do when your thumb gets whacked.

            Here I anticipate you’ll say, “Just a second. We don’t believe that the brain requires any dedicated sort of physics to create thumb pain, or at least not beyond the processing of information itself”. Yes I know. That’s the position which I’m suggesting violates causality. My demonstration has been that the information which you posit cannot exist independently of an associated machine. Thus in a natural world, paper with certain markings on it converted to more paper with markings on it should never in itself create something that experiences what you do when your thumb gets whacked.

            I realize that you cannot accept this right now. But if scientists were able to induce faint electromagnetic fields in the brain around the parameters that synchronous neuron firing is known to create, and if people exposed to such exogenous fields were to report distortions to their vision, hearing, and a wide assortment of phenomenal dynamics, then I suspect that you’d earnestly revisit my assessment of what does and does not function by means of worldly causal dynamics.

          6. Eric,
            As we’ve discussed before, the problem with Schwitzgebel’s innocent conception is that it’s so innocent it doesn’t actually pick out anything. Frankish accepted it, but noted that it’s completely compatible with illusionism. It’s also compatible with dualism, panpsychism, and just about anything else. I’m a fan of Eric S, but I don’t think he improved the situation with this notion.

            Invoking it doesn’t answer the question: what separates what you’re calling “phenomenal” from information? Put another way, can you elaborate on what causes the “void in causality” you noted? The rest of the discussion seems to hinge on this.

          7. Mike,
            I realize that Schwitzgebel’s consciousness conception does not rule out various ideas that you and I rule out. I consider this a strength however. Here we’re talking about paring an idea down to a point that we can all agree exists, or an effective essence. With such agreement, intelligent speculation about how it might arise should then be possible. Conversely when we use all sorts of more detailed competing definitions that some of us will not be able to accept, there shouldn’t be much potential for earnest discussion to occur and thus for progress to be made. If widely adopted I think his definition could help the field tremendously. (Furthermore I suspect that his epistemological process of using positive and negative examples to help demonstrate essentials that are difficult to otherwise enunciate, could be a boon for science in general.)

            Hopefully you now grasp what I mean by “phenomenal” and its various synonyms. (If not then maybe take a quick look through Schwitzgebel’s positive and negative examples again?) Next would be the question of how this sort of thing might actually arise. As I’ve said, I dismiss all that illusionists dismiss though I also use the premise of causality to dismiss something that illusionists accept. You’ve asked me to elaborate a bit more on this, and particularly in terms of a relationship between the “phenomenal” idea (that even Frankish accepts the existence of), and “information”.

            You and I consider the brain to exist as a machine which helps operate the body. Thus I’m calling the information that it receives “machine information”, as opposed to some other conception of information that may not be quite as appropriate. Furthermore we understand that sometimes the brain creates an entity which phenomenally experiences its existence given certain machine information, whereas other times it does not. So the question to ask from here is, how might the brain create this phenomenal experiencer associated with its machine information?

            I’ve then observed that in a causal world machine information does not exist independently of an appropriate machine, and displayed this in terms of text on paper (which is reader dependent), the stuff that a computer sends its screen (which is screen dependent), and a Betamax tape (which is Betamax player dependent). Thus I’ve reasoned that the machine information associated with a brain creating a phenomenal experiencer should also be machine dependent in a causal world. This is to say that there should be some sort of physics which brain information animates in order for a phenomenal experiencer of existence to be created by means of that information. Then to clarify this further I’ve presented an appropriate example of the sort of physics which the brain might implement to cause such information to exist as such, which is to say certain parameters of EM fields associated with synchronous neuron firing.

            That’s the heart of my argument. Thus in a causal world marks on paper which are correlated with whacked thumb brain information, should not exist as machine information independently of a machine which is animated by that information. So converting certain marks on paper into other marks on paper should not create a phenomenal entity in a causal world.

            Here I know you’ll say that I haven’t answered your question. What separates phenomenal from information? And I do consider phenomenal dynamics to be informational, though that’s another story. A better question right now might be, in a causal world how might a phenomenal entity arise by means of information? My answer would be through appropriate instantiation mechanisms that I presume the brain uses to create such an entity. Furthermore the mechanistic nature of this proposal means that such an answer should even be possible to check experimentally.

          8. Yeah, sorry Eric, confirmed that I see no answer to the phenomenal question here. Your final remarks indicate you see phenomenality as distinct from information in some sense, but you seem unwilling or unable to clarify. To be clear, I’m not asking for commitment to implementation details, just identification of how you see phenomenal properties being distinct from information, for the entity itself.

            Ah well, par for the course.

          9. Okay Mike, so Schwitzgebel’s positive and negative examples do not help you grasp what he means by “phenomenal”, or “consciousness”, or “sentience”, and so on. Most people seem to grasp a standard meaning for this in everyday life without reading any academic papers at all. Even Frankish seemed to grasp such meaning when he read Schwitzgebel’s paper, and then agreed that this does exist as an element of reality. If you cannot grasp such meaning then it should be difficult for you to understand my argument about how “computationalism” violates causality. Here you would need to understand what people mean by a difference between existing consciously versus non-consciously.

            If you were to grasp a difference between existing consciously versus non-consciously however, then we might also discuss two different kinds of information that should not be conflated. Here the non-conscious kind is essentially the machine information idea that I’ve been presenting, while the other exists for a conscious experiencer of existence. Causality mandates that machine information cannot exist alone, but only in respect to the mechanics of a machine that it informs. That’s why markings on paper correlated with the machine information associated with a whacked thumb, converted into more markings on paper correlated with the machine information associated with the brain’s response, should not also create an experiencer of what you know as having a whacked thumb. In order for that second set of markings to also become “machine information”, and thus create something that experiences what you do when your thumb gets whacked, causality mandates that it animate an appropriate sort of machine. This is displayed in the stuff that your computer sends its screen, as well as in the case of a Betamax tape. Without such a step, I’m illustrating that “computationalists” present a magical platform. I do not expect you to believe any of this however, given that you do not grasp what’s meant by “conscious” versus “non-conscious”, and also because good experimental evidence does not yet exist regarding the causal stuff that consciousness might exist as. Perhaps some day however…

      2. @Mike: By an ontological account, I mean one that seeks to explain some mental ontological phenomenon, and by a semantic account I mean one that seeks to explain the meaning of mental terms. For the former, ontology is the explanandum, whereas for the latter, it is the explanans. As an example, an ontological account of natural evil might be sought by a Platonist who thinks that ‘evil’ is just some abstract platonic kind, and who wishes to uncover the necessary and sufficient conditions (i.e., the noumenal-platonic laws) which might bring evil into the world. By contrast, a nominalist about evil just seeks to understand the conditions under which human beings talk about evil.

        Notice the ontological account need not overlap with the semantic account. For instance, if you think that evil is just some abstract platonic kind, then there is no reason to believe that human vocabulary concerning evil will match the actual natural conditions of the world that bring it into existence. It might turn out that eating Oreo cookies is a sufficient condition for evil on the ontological account (but obviously you won’t conclude that for the semantic version).

        Bringing this back to functionalism, an example of an ontological account would be one that seeks to explain phenomenal effects as being caused by functional states (due to the psycho-physical laws of the universe). But functionalism makes a poor bedfellow here because it’s not clear that functional states are natural kinds. Whether a state is functional or not is determined relative to your conceptual schema (e.g., what your desired end state is), and so it would be weird if the objective psycho-physical laws of the universe (which determined the existence of phenomenal consciousness) somehow perfectly matched with our desires.

        For that reason, I think the theory of functionalism works best as a semantic account and should be paired with a kind of eliminativism about phenomenal consciousness.

        1. Thanks Alex. That makes sense. I think a big part of functionalism is defining mental states as functional ones, and then proceeding from there, which it sounds like would fit within your semantic version.

          On whether functionalism must be paired with eliminativism, I think it must either be paired with eliminativism toward strong phenomenality (absolute intrinsicality, etc), or with a reconstructed weaker version of it. Both approaches work for me, although the second option seems to upset people less.

    3. Alex,
      Your comments are always welcome. There is one item that gets overlooked in these discussions of information processing, and that is information itself. Information does not exist independent of mind; only a fundamental reality exists.

      Mind processes information, no question about that, but first mind has to create information from its intimate relationship with that fundamental reality. The information that it creates is a structure, and that structured model may accurately map to that fundamental reality or it may not.

      Like I discussed with Eric, the so-called scientific and academic intellectuals first have to recognize that this system we call mind is a separate and distinct system that is animated by the brain, one that emerges from the brain, a system whose very existence depends upon that brain, a system that is intrinsically linked to the brain and yet is a system that has causal power, an autonomous, sovereign system that uses the substrate of that brain for its own purposes. Unfortunately this notion is categorically rejected, and their objections are reasonable.

      Here is a quote from Michael Egnor, from the article “Why the Mind Cannot Just Emerge from the Brain”:

      “The thing is, with the philosophy of mind, if the mind is an emergent property of the brain, it is ontologically completely different. That is, there are no properties of the mind that have any overlap with the properties of brain. Thought and matter are not similar in any way. Matter has extension in space and mass; thoughts have no extension in space and no mass. Thoughts have emotional states; matter doesn’t have emotional states, just matter. So it’s not clear that you can get an emergent property when there is no connection whatsoever between that property and the thing it supposedly emerges from.”

      My only comment to their objections is: “No shit Leroy!!” The only physical explanation that can fill this epistemic and ontological gap is an emergent system that is quantum. As Penrose continually reminds his constituents: let us not forget that everything in the material universe is a quantum effect.

  10. I’m interested, but I feel that the post starts from an assumption that one has read the paper by Dennett. I guess that’s a way of saying it’s too high level for me. Presumably Dennett’s view is that with sufficient technology we should be able to identify and quantify qualia, i.e., feelings. But you can do so and still not know what it is like to feel. There’s surely a fundamental gap here.

    1. Sorry, I did skate over Dennett’s arguments. It’s always a judgment call on how much to include in these posts.

      If you’re interested, I do think Dennett’s paper is worth checking out. (The fonts on his site tend to be a bit small. I find using the browser zoom option helps.)
      https://ase.tufts.edu/cogstud/dennett/papers/quinqual.htm

      In terms of qualia, Dennett doesn’t really see them as something coherent. So he wouldn’t expect them to be scientifically identifiable or quantifiable. He walks through several thought experiments in the paper to illustrate why.

      For example, consider the taste of beer. For most people, that first taste is vile. But if they continue drinking it, it eventually becomes pleasurable, an acquired taste. But if the qualia of the taste of beer is intrinsic, then surely the drinker must have been wrong one of those times, which calls into question infallible direct acquaintance. And if the qualia did change, then it isn’t intrinsic. He goes through a number of these types of examples.

      The phrase “what it is like” is problematic. It implies comparison with something, as in what it’s like to sit in a particular chair (“like this other one but higher”, etc). But what we’re talking about here isn’t like that. It’s supposed to be indescribable, unanalyzable, in other words, ineffable. So the question is whether it actually picks out anything, or is just something we feel is there, even if it isn’t.

      Granted, this is seriously counter-intuitive. But science often puts us in the position of having to accept those kinds of things.

      1. Mike, you stumped me several days ago with this argument about the taste of beer. I’m still pondering it. Is this part of your argument that the “taste” is a so-called illusion?—a term I took issue with.

        1. Matti, it depends on what you mean by “taste”. Functional taste of course exists and can be scientifically studied. The illusion would be a “taste” as a “raw feel” somehow distinct from that functionality.

      2. I guess this argument has been made, but aren’t qualia, understood as the experience as experienced, liable to change? If you rephrased ‘what is it like..’ as ‘how does it feel..’ and you asked ‘how does it feel to be me’ that experience of being me would change from moment to moment. That makes it transitory but not meaningless. On the other hand if you were to ask ‘what is me?’ that’s a more difficult question.

        1. The word “qualia” can be used in different ways. One is simply to refer to manifest experience, to the way experience seems to us. When used in that manner, it obviously refers to something that exists, but not anything that leads to explanatory gaps. Most qualia realists are committed to it being something more than informational states or processes.

          Consider, for whatever conception of qualia you’re considering, what separates that concept from functional perceptual information or processes? Are the differences the attributes that Dennett argues against? If not, what are they?

  11. Are you saying the raw feel is the same as qualia? Sounds like it. I’m trying to understand what’s the “illusion” here. Raw feel or qualia is illusory, yes? And since one cannot scientifically (i.e., objectively) study my (raw feel) taste of beer, then it’s an illusion, yes? Perhaps you could explain how functional taste can be scientifically studied. I think I know. But don’t want to presume. And, I assume, the illusory nature of the taste of my beer is proven by the argument that my first sip of beer was something I didn’t care for, but sometime later I liked the taste of beer? I’ve been pondering that a lot lately. In my comments above from a few days ago I said that I think the use of “illusion” to talk about subjective experience comes from the idea that our subjective experiences are the object of our subjective experiences—which I believe is erroneous. But that point may be too far down the road. I’m trying to do my thinking in baby steps at this point.

    1. “Raw feel” or “raw experience” are common synonyms for “phenomenal properties”, “qualia”, “what it’s like-ness”, etc. If that synonym circle refers to something understood to be distinct from the functionality, then it’s what’s seen as illusory.

      Functional taste can be studied because it makes a difference, allowing an organism to make discriminations between various stimuli, to make use of the information. Because it makes a difference, that is, has causal effects, it can be observed and measured, as the sketch below illustrates. Incidentally this making a difference is what natural selection can act on. In other words, functional taste can evolve; it’s not clear how a phenomenal taste distinct from that functionality can.
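
      For a cartoon of what “making a difference” buys you methodologically, here’s a toy discrimination experiment (the numbers and names are entirely made up, just to show the shape of the measurement):

          import random

          def organism_response(stimulus):
              """Toy subject: accepts 'sugar' 90% of the time, 'quinine' 20%."""
              acceptance = {"sugar": 0.9, "quinine": 0.2}
              return "accept" if random.random() < acceptance[stimulus] else "reject"

          # Run many trials and measure the behavioral difference between stimuli.
          trials = 10_000
          for stimulus in ("sugar", "quinine"):
              accepts = sum(organism_response(stimulus) == "accept" for _ in range(trials))
              print(stimulus, accepts / trials)  # two reliably different acceptance rates

      A reliable gap between those two rates just is an observation of functional taste at work; nothing analogous is even specifiable for a “raw feel” distinct from the functionality.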

      I’m not following “subjective experiences are the object of our subjective experiences”. But the word “subjective” is interesting. It can have grounded or contentious meanings. A typical grounded one is a system taking in information from a certain type of perspective. The more contentious version is typically part of the synonym circle I noted above. The grounded version is undeniable, but the contentious one is, well, contentious. 🙂

      1. “So that, as clear as is the Summer’s sun.” (Shakespeare, Henry V)😊. However, I asked how (e.g., functional taste) can be studied scientifically. I think you made the same conclusory statement by saying “Functional taste can be studied because it makes a difference…. has causal effects, it can be observed and measured.” So, could you give me an example? Can you give me a measurement someone has made that illustrates that it can be scientifically studied? I don’t think I can comprehend your “qualia-is-illusion” argument until I understand fully your distinction between the functional taste of my beer (real) and the qualia taste (illusion).

        You’re not following the “subjective experiences are the object of our subjective experiences”. That has to do with the process of perception as I tried to explain above. I may circle back around to that after I understand “qualia-is-illusion” argument better. It might be helpful. But for now I think I have to take baby steps—I don’t want to critique something I don’t fully comprehend.

        1. Not quite sure what you’re looking for here, so apologies in advance if these are missing the point.

          Here’s an example of measurements made on whether animals can taste differences in water.
          https://arstechnica.com/science/2010/04/can-we-actually-taste-water-insects-can/

          Another on the effects of social learning on taste, for us and rats.
          https://www.brandeis.edu/now/2019/november/thanksgiving-taste-don-katz.html

          A study on the mechanisms of how animals (mice) detect sour tastes
          https://news.usc.edu/160827/how-sour-tastes-otop1-protein-humans-mice-usc-research/

          More generally, we have the field of neurogastronomy.
          https://en.wikipedia.org/wiki/Neurogastronomy

          Hope this helps.

          1. The articles you linked were interesting, but not what I was hoping for. Your qualia-is-illusory argument, as you say, breaks down to this: if qualia refers to something distinct from the functionality, then it’s illusory. This is supposedly because functional taste (and presumably our other functional senses) can be “observed and measured”, as you say, and qualia cannot. I especially liked the article about Thanksgiving dinner. But I submit they were observing and measuring the “effects” of changes in taste under various circumstances, and not observing or measuring taste itself. That would be something different. And I’d love to see some scientist do it. I was hoping for some insight into how the two descriptions of sensation differ, which would help demonstrate that one is real and one is illusory. I am thus left in a quandary. I have not dismissed the possibility that functional taste and qualia are in fact a distinction without a difference. That is, qualia-as-illusion has been watered down as a concept which, at the end of the day, side-steps the ludicrousness of saying the taste of my beer is an illusion.

            I admit you stumped me with your argument (above) about the taste of beer. It deserved some serious thinking. I am, at present, skeptical that this argument (which you say you took from Dennett) is any sort of proof. In fact, the more I ponder it the more I find it unpersuasive. In response let me just say that there can be a shift in my subjective experience of drinking beer over the years. Seems like a common enough thing. I didn’t like the taste of beer years ago. My preference for the taste of beer changed. I like it now. Here’s where I have problems. From that it does not follow that I was somehow “wrong” about the taste of beer when I took my first sip as a young man. Being “wrong”, I would argue, does not apply here. That’s because being “wrong” implies an objective standard by which to judge. It really makes no sense. It’s asking for an objective account of my shift in taste. That I cannot objectively account for my shift in taste is no proof that qualia are illusory. If that’s what was intended, I fail to see how it comes close to a proof. In fact, it seems like an impossibly unfair challenge—to demonstrate objectively a subjective sense experience. I refer to a short line in Thomas Nagel’s essay, “What Is It Like to Be a Bat?”: “What would be left of what it was like to be a bat if one removed the viewpoint of the bat?” Your (or Dennett’s) argument is a challenge to do just that. In all fairness, I think this deserves more open-minded thinking on my part. I’m far from an expert. But this is where I stand at the moment.

            My previous comments above regarding the process of perception are, I think, consistent with what I’ve just said and may provide some explanation as to how this idea came about that our subjective experiences are, in some way, illusory. I don’t think it’s useful to repeat myself on that, however. And, finally, thank you for indulging my questions and thoughts on the matter.

          2. On studying the effects of the taste rather than the taste itself, right, that’s a common reaction. (Which, sorry, I should have anticipated.) I’d note that one of the studies also looked at the molecular mechanisms involved and the neural firing patterns that resulted. The common reaction there is to assert that those are the precursors of qualia.

            So the idea is that somewhere between the precursors and the effects is the “taste itself”. But where exactly would this boundary be? Remember that we’re dealing with a neural network, with numerous parallel streams with crosstalk between them. So if there’s a boundary, it seems unlikely there would be just one, but numerous ones.

            “I was hoping for some insight into how the two descriptions of sensation differ which would help demonstrate that one is real and one is illusory.”

            Consider the description for the functional account. One version of it is: molecular mechanisms trigger a neural firing pattern, which produces the effects you noted were the target of most of the studies.

            Now consider the phenomenal account. What is its description? It’s widely acknowledged that none can be provided. It’s ineffable. You can’t describe the sensation of sweetness, only provide examples of it. All we can describe are those precursors and effects.

            For example sugar on the tongue leads to the sensation of sweetness, which feels highly desirable and we crave more of it. That sentence refers to sweetness but doesn’t describe it. Or at least that’s the strong intuition we all have. It feels like something is missing, even though we can’t describe it.

            B. A. Farrell, writing in 1950, pointed out that whatever this missing thing is, it’s utterly featureless. It has no differentiations that can make any difference in our subsequent mental states or behavior. Certainly the feeling that’s there does, but the thing itself doesn’t. The question is, how much should we trust that feeling?

            What if I had the strong feeling that I was the king of Greenland? Should we conclude from that feeling that the lack of evidence for any office of king in Greenland, much less that I’m the incumbent, is a hard problem that needs to be solved, that there’s an explanatory gap between the facts on the ground in Greenland and my feeling? Is there a Mike’s kingship – Greenland problem? Or should we focus on where my feeling is coming from?

            Likewise with the “taste itself” as distinct from the overall process. Should we focus on this featureless thing for which there’s no evidence, or should we conclude the featureless thing is redundant, that the actual “taste itself” is the overall precursors going through to the effects, and instead focus on why we have the feeling, the intuition, that there’s more there?

            On the taste of beer and “being wrong”, I agree. But I think you’re overlooking the implications of that conclusion. If there’s nothing to be wrong about, then there’s only the stimuli and our reactions to it, which you agree can change over time. Which means we’re not talking about something intrinsic, that is, fundamental, non-relational, but something that is part of the overall causal framework, therefore functional, which presents no metaphysical problem. If you still say that the taste of beer remains distinct from that functionality, then what makes it distinct?

            Thanks for asking these questions! I enjoy the discussion.

          3. Mike, I’d like to take another try at your concept of qualia-as-illusion. I think I may have a better handle on it. The theory is, in part, I think a reaction to a theory of consciousness that its proponents claim non-illusionists subscribe to. In short, illusionists think that folks (like me) who don’t accept (or understand) illusionism believe in some sort of inner representation in the brain. I’ve read that Dan Dennett uses the term of an “inner show” created by the brain. I think by this he may mean that the conscious experience I have when I take a sip of my beer is somehow misleading me into thinking I’m experiencing a show created by and in my brain. This interpretation therefore justifies the ambiguous label of illusion. However, I am quite adamant that my own understanding of “perception” (described in several entries above) is clearly not that—I do not believe my brain is putting on an inner show. When I discussed the process of perception above I thought that stepping back to something basic like perception might explain why the illusionist theory, at least in part, came about as a response. Thinking about it more helped me understand what you may mean by qualia-as-illusion. And, at least in part, I think we may not disagree completely.

            I hate to repeat myself, but on December 13th I remarked above that “…the use of illusion to talk about subjective experience comes from the erroneous idea that our subjective experiences are the object of our subjective experiences—which is wrong. It creates a confusing duality muddle in my opinion. … It has taken me years to shake off those inane discussions about “sense data” I suffered through as an undergrad many many years ago. I do not experience sense data—I taste the beer!” I then go on to repeat this point ad nauseam.

            In brief, it’s misleading to say we experience sense data. Sense data are not the objects of our perception. But, obviously, we do experience through the mechanisms of our senses. So, it was easy to mistakenly say we experience the sense data. Empiricists labored under that erroneous assumption for many years. To me sense data amounts to a needless supplementary explanation which can easily lead to the conclusion that we experience the sense data as distinct from the object causing our experience. So, and here is my point, likewise when we experience through our senses we obviously experience the “qualities” of the object. This, I suppose, can also lead one to mistakenly conclude that we experience qualia—separate from and distinct from the object. That would be wrong. It is not the sense data nor the so-called qualia that “causes” our subjective experience. Rather the object “causes” our experience. So let me suggest that, if I am indeed closer to your position than I’d thought, it might make more sense to say that the term qualia is superfluous rather than an illusion. Am I off the mark here?

          4. Matti,
            I don’t think you’re off the mark at all. It does sound like we’re a lot closer than we might have thought.

            Sense data is an interesting concept. The term “qualia” was originally coined (at least philosophically) in the 1920s to refer to properties of sense data. But sense data fell out of favor in the philosophy of mind in later decades, so that later uses of “qualia” were supposed to refer to something different. However, I’ve never been able to understand the difference between them. (I don’t doubt there are nuanced differences that professional philosophers care about, but if so the concepts remain very similar.)

            But yes, a big part of what Dennett and others are arguing against is the idea that there’s a prepared presentation somewhere in the brain. Dennett calls it the “Cartesian theater”, and notes that hardly anyone in philosophy or science believes in it explicitly. But their implicit belief typically comes out in the assumptions they reveal in discussions, such as being concerned about whether neural activity happens before or after the conscious part, as though there’s a consciousness finish line somewhere, a screen, such that whatever crosses it is now in consciousness. Even hardcore materialists can often slide into this way of thinking without realizing it.

            And Frankish would agree wholeheartedly with your statement that the object causes experience, not some inner presentation. Of course, what often gets brought up here are hallucinations and remembered / imagined objects. But it’s pretty well established that those are far more limited than the directly perceived versions.

            Definitions are the bane of these types of discussions. It can be very hard to tease apart actual ontological differences from the semantic ones. It’s not unusual for long debates to happen which are really about differences in definition.

          5. Good to know! Now I need to struggle with your idea of functionalism. I suspect we may be very far apart on that. Although, as you know from past remarks, I have flirted with the more recent thinking of Hilary Putnam, who was known as a functionalist and helped make functionalism a philosophical household word in the ’70s. Putnam later abandoned much of his early functionalist ideas in his work “Representation and Reality.”

          6. Could be. It’s a lot easier to agree on what’s not true than what is. Although my functionalism is fairly loose in terms of the variants out there. And we’ll have to be on the lookout for those definitional issues again.

          7. I want to make sure we really are on the same page. It seems we agree at least that “qualia” are not the objects of perception, just as the older term “sense data” are not the objects of perception. Then I think you must agree that “qualia-as-illusion” is a misleading exaggeration. The object of my perception, say the taste of my beer, will be experienced with certain qualities. That is, it is the way—the pathway—in which I perceive the object. Hence it’s not illusory. Thus, if the term qualia refers to my conscious perceptual experience, it is real. My point was that the term itself could be mistaken for the object of perception, leading to confusion. As I said, perhaps the most that one can say is that the term “qualia” itself is superfluous to the experience of tasting my beer. Yes?

          8. Again, definitions. We can always define “qualia” as something more grounded and plausible. The question is whether we’re then using the word in a manner that matches the historical and conventional meaning.

            Dennett, in his Quining Qualia paper, observed that, “My quarry is frustratingly elusive; no sooner does it retreat in the face of one argument than “it” reappears, apparently innocent of all charges, in a new guise.” That’s been my experience as well. A lot of people say Dennett attacked a strawman in that paper, but then ask them what separates their version of “qualia” or “phenomenal properties” from just reportable perceptual information, and Dennett’s target reemerges, “apparently innocent of all charges, in a new guise.”

            It seems like we have better words with less historical baggage to use. Granted, none in this space are entirely free of issues.

          9. So, I take all that as a “no.” I was hoping my multiple comments directed from the standpoint of perception theory would be helpful. I thought an explanation of qualia as an illusion could be understood as stemming from the common mistake some get caught up in when they assume that the “object” of our perception is the qualia or the sense data—which it is not. That is, one cannot experience the qualia of an object without experiencing the object—the taste of my beer is caused by the beer. Alas, my hopes for a simple explanation of where this idea of qualia-as-illusion came from are ignominiously dashed! I think I’ll get off this merry-go-round for a while.

          10. Hi Matti,

            Sorry to jump in. I have a query for you. When you write “That is, one cannot experience the qualia of an object without experiencing the object—the taste of my beer is caused by the beer.”

            How do you account for hallucinations, dreams, and vivid imaginings? This statement of yours would also seem to conflict with our understanding of neurology, since I could stimulate the correct neural pathways in your brain and exactly reproduce the taste of beer (with no actual beer present). Do you disagree? And if not, then what exactly do you mean by “qualia” and “experience”? I don’t know about you, but when I’m talking about the objects/structures of my perceptions, I’m talking about my own experiences and how they feel/seem to me, not the objects of the external world. Of course it could turn out that they are the same thing after all, but the above examples seem to refute that rather conclusively.

          11. Hi Alex. No problem, I welcome the conversation. I do argue, and have done so over and over in this thread, that one cannot experience the qualia of an object without experiencing the object—the taste of my beer is caused by the beer. In other words, the object of my experience is what causes my experience. The alternative view, which you clearly expressed, is that you experience the objects/structures of your perceptions—how they feel/seem to you—not the objects of the external world. In other words, the objects of our perceptions are our inner experiences themselves. I paraphrased that and I hope I was fair to you. These two views are sometimes called direct realism and indirect realism.

            Indirect realism more or less begins with the early empiricists, especially John Locke. And it persists today. You are right to challenge me to explain apparently conflicting things like hallucinations and our understanding of the physical and neurobiological process that ends with our perceptual experience. First, from an evolutionary standpoint our perception had to give us a pretty good account of the real world, otherwise we would not have made it this far. Next, neurobiology does not in fact refute direct realism. That we understand, more or less, how the process works, and that we can bypass that neurobiological process to simulate, for example, the taste of beer, does not demonstrate that the process fails to give us an accurate account of the world. It merely tells us we are clever enough to bypass the process and simulate a perceptual experience—nothing more.

            Next, we have the issue of hallucinations and other such pathological events. In short, something other than our clever intervention bypasses the normal neurological process and simulates a perceptual experience. We have many ways to discern when the process is affected by some pathological cause. Nothing about that refutes direct realism either.

          12. Hey Matti,

            If all you mean by direct realism is that our sensory experiences are caused by the objects of the world, or that our experiences are (typically) veridical, then I too am a direct realist. But if you instead mean direct realism to be the view that our veridical perceptual experiences are literally constituted by the objects of the external world, then I am not a direct realist. The example of hallucinations and neurobiology is meant to refute the latter, not the former.

            Modern day direct realism, as far as I understand it, is also of the latter variety. The reason hallucinations are problematic for the latter view is because it seems like you can have two of the same experiences, where one is caused by an external object and another is not. But on direct realism (of the latter kind) the experiences should be different, because the fundamental building blocks of our experiences, the external objects, are missing in the hallucination case. An analogy would be the claim that cars (experiences) are made of legos (external objects), but then showing an example of a car that isn’t made of legos (hallucinations), which refutes the stated claim. This is also why most modern-day direct realist philosophers have turned to something like disjunctivism.

            Most philosophers of mind, I think, are representationalists of some kind. In such views, like intentionalism, our experiences can refer to the external objects of that world without putting us in direct contact with that object. The difference with direct realism is that our experiences aren’t constituted by the represented external objects, and the difference with sense-data theory is that our experiences aren’t constituted by sense data either. There is no mental state-object relation that we are acquainted with according to representationalism. Instead, experiences are just mental states like intentions.

            Anyways, modern illusionism as I see it isn’t really geared towards challenging the idea that we have some kind of experiential acquaintance with objects (either sense data or external objects), but rather criticizes the notion that our experiences have phenomenal character.

          13. Hi Alex. Yes, I think we are probably the same flavor of direct realists. I only started down this cul-de-sac of perception theory in the hope of gaining some insight into the meaning of qualia as illusory. As you say, modern illusionism criticizes the notion that our experiences have phenomenal character. That befuddles me. Perhaps, I thought, it is merely just some manifestation of indirect realism. As I said above, my hopes were dashed. I’m afraid, however, that I am most likely ill equipped to engage in an in-depth and nuanced discussion of indirect vs direct realism with you. I can say that I’m skeptical of disjunctivism—but still struggling with what it means exactly.

          14. I understand. If I might take a stab at explaining illusionism, I would first back up a bit and try to explain what the hard problem is about. So, as was stated, most physicalists and non-physicalists alike are probably representationalists of some kind. They think that mental states exist, and that our experiences are just mental states which don’t give us mysterious access to some external entity, like a sense datum or physical object. However, our experiences seem to undeniably have phenomenal character. If representationalism is true, then it would straightforwardly follow that our mental states have phenomenal character, in which case, under physicalism, our brain states would also have phenomenal character.

            The hard problem just comes about from those non-physicalists who think that brain states are unsuitable candidates to be mental states, on the grounds that they aren’t the kind of things that have phenomenal character. Why not?

            Mainly two reasons I think:
            1. Brain-states seem to be functional/dispositional and structural by nature, whereas most phenomenal realists like to think that phenomenal experiences are intrinsic in some way.
            2. The structure and character of phenomenal experience doesn’t seem to match with the structure and character of brain states. Call this the “structural mismatch problem”. A detailed description of the map of my phenomenal visual field, for example, would be left out entirely in a description of my brain-states. Of course, any description of my brain states would reveal my dispositional tendencies to talk about my visual field, as well as my beliefs about my visual field, but it nonetheless wouldn’t describe the structural characteristics of the field itself (e.g. how it looks and behaves). If we think the field actually exists (i.e. if we think phenomenal character is real), then this is a problem for physicalism.

            So illusionism aims to deny phenomenal character to avoid the hard problem. Unfortunately, most of the literature is geared towards tackling problem 1 (intrinsicality) and comparatively little attention has been paid to problem number 2 (structural mismatch). As such, I can’t really tell you what the illusionist answer for #2 is.

            One possible answer would be to just double down on the denial of phenomenal character, and to deny not just the intrinsicality of phenomenal experience (which most illusionists already do) but their apparent structural content. Another solution to problem 2 is to argue that phenomenal structure is both real and physical. Maybe our brain states actually are structural phenomenal experiences, and maybe there literally is something like a visual field “inside” my brain. At first glance this might seem absurd since brains are soft and mushy things, not full of colorful objects like my visual field is.

            However, we have to realize that our own characterizations of brains (and brain states) are themselves representations constructed by our own brains. Our representation of a brain seems soft and mushy, but maybe real brains have certain states which are full of phenomenal colorful objects.

            Hope this helps!

          15. Alex,
            We’ve talked about your concerns on 2 before. But I think most people today see this as just a matter of understanding transduction.
            https://en.wikipedia.org/wiki/Transduction_(physiology)

            The primary input from an illusionist like Dennett here is that there is no double transduction in the brain, no preparation of a presentation which is then interpreted by later systems. The extraction of meaning begins with sensory neurons and continues all the way through with no sharp boundary made by the nervous system.
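
            To make the contrast concrete, here’s a toy sketch (purely illustrative, no particular theory implied). Meaning extraction happens in one continuous pass; the denied alternative would add a stage that renders an inner image and then re-reads it:

                def perceive(stimulus):
                    # Meaning extraction is continuous: each stage transforms the
                    # signal, and no stage builds a finished picture for an inner
                    # viewer to re-transduce (that would be the "double" step).
                    edges = [b - a for a, b in zip(stimulus, stimulus[1:])]  # early feature
                    salience = max(map(abs, edges), default=0.0)             # later feature
                    return "react" if salience > 0.5 else "ignore"           # report/behavior

                print(perceive([0.1, 0.1, 0.9, 0.2]))  # -> react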

          16. Mike,
            Isn’t that just equivalent to the first “doubling down” approach to the problem that I previously discussed? The first approach basically says that by describing brain states and processes like transduction, we can accurately describe the dispositional properties of the brain to hold beliefs about phenomenal structures as well as the representational properties that make talk about phenomenal structures meaningful (at least according to certain theories of meaning). But we’re still missing a description of the phenomenal structures themselves, so this approach would have to deny their existence.

          17. Thanks, haven’t read it yet, but I’ll be sure to check out the link. About “doubling down”, yeah there was probably a better way to phrase that. Sorry 🙂

            As for the differences between functional perceptual structures and phenomenal ones, I would say there is no difference. By definition, a phenomenal structure has to function and be reportable, and of course it is since we are here talking about it. Problem number 2 isn’t a problem of functionality (because that’s getting into intrinsicality, or problem 1), but rather of ontology. While phenomenal structures are clearly perceptible and reportable, they don’t appear to be brain structures. Their makeup is just different.

            As for how we would go about determining whether phenomenal structure is brain structure, how do we go about determining whether anything is anything? We take a look at the descriptions of their properties and see if they match the right category. A chicken doesn’t seem to be a good candidate for being an airplane because a description of a chicken doesn’t really match our conception of what an airplane is (but a Boeing 737 does).

            Similarly, a description of phenomenal structure (e.g. the colorful entities encompassing my visual field) doesn’t match a description of any brain state structure (no colorful entities of the right type to be found). That’s just one example out of many. Resolving the problem therefore requires that we either abandon our conception of what phenomenal structure is (the “doubling down” approach) or revise our conception of what brain structure is (the second approach I mentioned).

            Remember that we were led here because we adopted something like representationalism in the first place, which identifies experience with mental states (that is, brain states). On direct realism however, this wouldn’t be so problematic, because we could say that the structure of my visual field is just the structure of the external objects themselves (unfortunately that has problems of its own, like I discussed with Matti).

            Anyways, the problem with the first approach is that it (to me at least) requires a huge introspective error. It certainly seems to me like I’m having experiences, and that these experiences are composed of phenomenal structures, but if they’re not actually composed of phenomenal structures then we are radically mistaken about the nature of our experiences. In that case, I don’t have a visual field made up of colorful entities, I just behave and think like I do.

          18. So how would you distinguish what you’re talking about from someone being puzzled by the difference they see in the grooves of an old style record from the music the record player outputs? Or the difference between the transistor states of an image file stored on an SSD in comparison to the image on a screen? Or the structure of a hurricane in a software model in comparison to the way that model is stored in RAM?

          19. I guess I’m not understanding the relevance of the question? I agree that all of those things are descriptively different, and therefore ontologically different. If you’re trying to draw an analogy between the differences between brain structure and phenomenal structure and the differences between a record player and the sound waves it emits, I would accept the analogy.

            But in the analogy of the record player it’s obvious that both the sound waves and the record player are physical entities. In our example however, it’s not obvious that phenomenal structure is physical. If you accept the analogy then you’ve admitted that phenomenal structures are not brain structures (in the same way that sound waves are not record players).

            Well then what other physical entities could they be? They can’t be the actual objects of the external world, because of all the problems with direct realism that I described. So what are they?

          20. I included the last example, the hurricane model, on purpose, because I think it gets at what you’re talking about. In terms of the simulation working with the model, it works with it at a particular level, moving from one state to another because of the structures in it. If we add a mechanism for the simulation to produce output, that output will be caused by the structures at the level of the model.

            But the simulation and model can, at the same time, be described at a lower level in terms of program operations, or lower in terms of machine language opcodes and instruction sets, or down to transistor states, or even down to the level of atoms and electrons. To look at the structure of the model and its evolution and compare it to the operations of the machine at its lowest levels and see a problem in the distinctions would be a category error.
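
            Here’s a toy version of the point (any serialization would do; pickle is just convenient). Both print statements describe the very same object, one at the model level and one at the byte level, and faulting the mismatch between the two descriptions is the category error:

                import pickle

                # A toy "hurricane model" state at the level the simulation works with.
                model = {"pressure_mb": 950, "wind_kt": 120, "position": (25.1, -80.3)}

                # The same structure described at a lower level: raw bytes.
                low_level = pickle.dumps(model)

                print(model["wind_kt"])  # model-level description: a wind speed
                print(low_level[:16])    # low-level description: opaque byte values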

            In addition to different levels of description, there are differences from perspective, such as from outside the system and from within it. Consider artwork that only works from a particular perspective. The exact same structure looks very different from different perspectives.

          21. It sounds like you’re saying that, in the case of the hurricane model, a description of the simulation behavior is only applicable at a certain level of physical scale, but not at others, even though the ontology is the same at all levels. I agree with this, but I don’t think that the phenomenal-brain structure discrepancy is a result of differences in levels of scale or perception. If phenomenal-brain structure mismatch is due to our looking at the wrong level of physical scale, then what level do you think is appropriate?

            It seems that there isn’t any level of physical neurology, ranging from the quark-based to the higher-level complex neural net structures, that could be described in terms identical to phenomenal structures. By contrast, descriptions of the hurricane simulation model that we see represented on a screen do take place at a particular level of physical scale. They describe, for instance, the way that visual patterns on the monitor behave.

          22. I don’t think there’s any one “right” level to look at it, just right for particular purposes, just like the ones we use for other systems.

            I’m comfortable saying that there is a physical description of functional perceptual structures. I’m less confident of that with phenomenal structures because I don’t know what “phenomenal” is supposed to mean here (which applies equally to its synonyms: “qualia”, “qualities”, “what it’s like”, etc.).

          23. Fair enough. Got to go to bed but happy to continue the conversation tomorrow if you’re up for it. For phenomenal structures, just substitute the word “experiential” for “phenomenal” and you’ll get the same result. I gave the example of the structures of my visual field. My visual field is part of my experience, and it has a certain structure to it, like a left-right asymmetry, a certain number of perceptual objects, etc.

            Furthermore, I use the word “physical” to mean scientific ontology (the structures that are described by our core theories of physics). The objection here is not that my visual field isn’t describable in structural or functional terms, but that the description doesn’t seem to match any neurological description given in scientific ontology. In fact, if it matches any physical description, it would actually be that of the external objects that it so neatly represents.

            The problem is that my visual field is a property of my experience which in turn is a property of my brain (according to physicalism), and not a property of the external objects that I am seeing at the moment. Of course this all assumes that our scientific picture accurately models the physical picture, among other assumptions.

          24. Interesting discussion! Thanks Alex and Mike. I haven’t ignored you both or your pending inquiries of me. I’m following your dialog and may jump in when I think I have something to say and when I think I can say it clearly. I think I may be able to defend a point of view but, equally important, I need to be clear. And that eludes me presently, which may mean I really don’t have a coherent position on these matters—yet.

          25. Unfortunately I don’t know that “experiential” helps, unless by it you literally mean functional experience, which doesn’t seem to introduce any metaphysically hard problem here.

            If your thesis is that we don’t currently have a full mapping between neural states and mental ones, then I agree, but that seems like a scientifically tractable problem. If you’re saying that no mapping will ever be possible, then I don’t see the case for it.

            But then my view is the whole hard problem stems from assuming that current unknowns and limitations are absolute, in-principle ones rather than just temporary, pragmatic ones.

          26. Mike,

            Sorry for the late reply. Just to clarify, I’m not saying that this is my thesis, I’m just saying that is the structural mismatch problem as I see it. I do think it can be solved, but I believe it’s not prima facie obvious how. So yes, the structural mismatch problem is that it is not, even in principle, possible to form a complete mapping.

            I should be clear here what type of mapping I have in mind. We’re not talking about the kind of mapping that exists between a thermometer and the air molecules in the room. There indeed exists a tight correspondence between certain structural features of the temperature of the air molecules and the inner dynamics of the thermometer, but this shared structure is a second-order relationship at best. The non-relevant inner structural contents of the thermometer, like its chemical composition and geometry, obviously don’t match the structural properties of the air molecules in the room.
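
            A toy version of what I mean (all numbers hypothetical): the second-order mapping below recovers the temperature from the reading perfectly well, yet nothing in it requires the thermometer’s first-order internals to share structure with the air:

                def column_height_mm(temp_c):
                    # A linear response curve: the thermometer's inner state
                    # covaries tightly with the air temperature it tracks.
                    return 18.0 + 0.25 * temp_c

                def infer_temp_c(height_mm):
                    # The second-order correspondence: reading -> temperature.
                    return (height_mm - 18.0) / 0.25

                # The mapping works, but mercury and glass still share no
                # first-order structure with the air molecules in the room.
                print(infer_temp_c(column_height_mm(25.0)))  # -> 25.0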

            But remember, if functionalism is true, then there is no intrinsic “inner component” to structure. Once you’ve constructed a structural (functional) isomorphism between two systems, you’ve basically mapped every possible feature and property. They are, for all intents and purposes, identical.

            This kind of identity is an even deeper identity than the identity that exists between you and your cloned duplicate copy hopping off a Star Trek transporter, since you still exhibit some structural differences (like a difference in the spatiotemporal locations of your atoms).

            While I have no doubt that we will find, in the future, an extensive neural-phenomenal mapping of the kind that we currently possess with devices like thermometers, such that we can accurately look at your neural state and instantly tell what your phenomenal (experiential) state is, such a mapping will still be insufficient. If we wanted to assert that certain brain processes are identical to certain experiential processes, then we should literally find your visual field being instantiated as a brain process. The equivalent, in other words, of realizing the thermometer is actually a complex thermodynamic pattern of air molecules.

            I feel more comfortable saying that we’ll probably not find such a structural mapping. The property dualist, by contrast, has no problem explaining this structural mismatch, since in their view phenomenal structures are different from brain structures (but still perhaps tightly causally correlated with brain processes, similar to the thermometer case).

            It might come off as unfair to demand such a rigorous mapping, but if the identity claim is correct, then that kind of rigorous mapping would seem to be required.

          27. Alex,
            No worries on response time. These conversations happen when we have time and interest.

            I’m having difficulty parsing your response, particularly this sentence:

            “If we wanted to assert that certain brain processes are identical to certain experiential processes, then we should literally find your visual field being instantiated as a brain process.”

            Maybe you could elaborate on exactly what you mean here?

            It’s worth remembering that the Grand Illusion of vision (which I discussed in the post on Susan Blackmore’s illusionism), the impression we have of a detailed, photograph-like visual field, is an introspective illusion, one different from what qualia illusionists usually talk about, albeit related. The Grand Illusion pretty much guarantees that your impression of what your visual field is won’t be found in the brain. If that’s the introspective error you think is implausible, then we’re probably at the agree-to-disagree stage. But maybe you mean something different?

          28. Sorry Matti. For now I’m tapped out on different ways to describe this.

            Maybe consider, do you think there is a hard problem, an explanatory gap? Is it distinct from what Chalmers calls the “easy problems” that are scientifically tractable? If so, what about conscious experience leads you to that conclusion? The historically common answers, which I discuss in the post, are what illusionists deny.

          29. Matti,

            To help you parse this nonsensical, circular reasoning you have to keep in mind that the entire premise of “functionalism” is an analogy. Analogies hold for the thing they are modeled after, in this case an information processing machine. But analogies are not true of the thing-in-itself, which in this case is the true nature of how the mind gives rise to consciousness.

            So at the end of the day, all of this nonsense about a functionalist’s feeble attempt to fill in the gaping holes surrounding the phenomenal experience of qualia or the “feel” of an experience is moot. And don’t forget that the entire premise of the “hard problem of consciousness” is an artifact of the original assumption that the brain/mind is an information processing machine.

            If one takes an alternate route and posits that mind is not an information processing machine but something else, something that is not understood, then the so-called hard problem becomes a single element of an overall whole that is not understood. Whipping a dead horse (phenomenal experience) with the bullwhip of functionalism is ridiculous. Personally, I don’t see the payoff other than an academic superiority trip.

            Hope that helps……..

  12. Butimbeautiful and Matti,
    Not only is illusionism very popular, but it also tends to leave outsiders scratching their heads. I think Dennett and Frankish like it this way. But permit me to perhaps shed some light on what they mean. All I think they mean is that certain conceptions of consciousness cannot exist by means of worldly causal dynamics. Thus given their naturalism they deny the existence of those conceptions of consciousness. Here I’ve essentially reduced their mysterious illusionism position back to a simple idea. Inherently private? Or intrinsic? Or ineffable? These ideas suggest a void in worldly causal dynamics, or at least when speaking ontologically (and even if they may tolerate such ideas epistemically given how mysterious things remain for humanity in this regard).

    There’s one more important thing to say about illusionists I think. It’s that they believe the human brain creates consciousness by means of information processing alone. Beyond Searle’s Chinese room, or Block’s China Brain, or Schwitzgebel’s USA consciousness, their position holds that if the proper marks on paper were scanned into a computer that then prints out the right second set of marked paper, then something here would experience what you do when your thumb gets whacked. I consider this otherworldly because in a causal world information should only exist as such in respect to what it informs. Thus your processed brain information should not be creating your phenomenal experience alone, but rather by informing some sort of brain physics that itself exists as “you”. And indeed if either of you would like to consider it, I also have thoughts on what that consciousness substrate happens to consist of…

    1. Phil Eric, I appreciate your helpfulness here. And, for sure, this is head scratching stuff. “Illusion” to me is a suspect term, quite loaded and slippery. I object to its use because I cannot believe that those who use it really mean it in the definitional sense of a sort of mirage, hallucination, or phantasm. If it’s not that meaning then it is ambiguous and mostly useless.

      1. Agreed Matti, though actually illusionism is something that I think I grasp quite well. And perhaps too well for supporters of the position. I’ll address the critical element of your inquiry to Mike above on the hope that I can provide a more complete account.

        You told him, “I don’t think I can comprehend your “qualia-is-illusion” argument until I understand fully your distinction between the functional taste of my beer (real) and the qualia taste (illusion).”

        The main thing to get here I think is that illusionists do not inherently believe that qualia are “illusory” (or their speak for something that doesn’t exist). They only mean this when the term is used to reference a supernatural idea. So if you say that you believe in the qualia taste of your beer by means of worldly brain dynamics, then they should never tell you that such qualia are illusory. And would such taste also be functional? Of course. Beer manufacturers depend upon the functional taste of consumed beer. But if you posit anything at all spooky regarding qualia in an ontological sense (rather than an epistemological sense), then they’ll say that they don’t believe that sort of thing exists. (Mike can say if he thinks I’m wrong about this.) Furthermore my own metaphysics puts me square with illusionists here. But if they’re actually claiming something no more complex than consciousness by means of worldly causal dynamics, then why can’t they say this as simply as an outsider like myself is able to?

        Perhaps because there seems to be a far greater ulterior motive behind illusionism that obfuscation might help them with? I see this motive as the desire for people in general to believe that consciousness (or pick your synonym) exists by means of information processing alone. It’s a dream that emerged back in the early days of computing I think. The thought was that maybe consciousness is essentially like software and so any computer could be conscious if it were to do the right information processing?

        Without getting into how I consider this position to ironically be supernatural, I’ll stop to see if this account seems sensible to you. So here illusionism is simply — natural qualia “good”, supernatural qualia “bad”. And it seems to me that the ulterior motive of wanting people to believe that the good kind exists by means of information processing alone, has unfortunately incited some head scratching obfuscation.

        1. Phil Eric, thanks. (Also thanks to “First Cause.”) You have confirmed a few of my own underlying inferences. I did want to give the matter fair and due consideration—and still do. I assumed that the mental gymnastics that I was struggling to navigate were indeed related to preserving a wider view. That’s normal for any philosophical argument including, of course, my own. And that background wider view was partly confirmed by Mike’s remark that his position on the matter “…presents no metaphysical problem.” As I’ve said previously, I got into these issues after spending most of my philosophical life elsewhere. I try to listen more than talk. And, I have to remark that, from my many internet wanderings, this is a great blog, not only for the quality of the discourse but also for the high degree of collegial civility. Mike gets a lot of the credit for that. I look forward to more discussion.

        2. Eric,

          “(Mike can say if he thinks I’m wrong about this.)”

          The issue isn’t that qualia are seen as supernatural. We should always be prepared to alter our ontology if the evidence and reasoning for it are solid. But when we reach that step, it’s worth going back over our previous reasoning steps and making sure we didn’t take a wrong turn.

          Illusionists generally think this wrong turn happens in trusting introspective impressions more than any other form of perception. Once we accept that our inner eye is as fallible as our outer one, the metaphysical mysteries get more straightforward answers: that we’re making a category mistake in considering the whole separate from its components, and that we’re treating blind spots, which can be compensated for, as absolute unknowables, which can’t be.

          “But if they’re actually claiming something no more complex than consciousness by means of worldly causal dynamics, then why can’t they say this as simply as an outsider like myself is able to?”

          Most illusionists are also functionalists, and as I’ve indicated many times, my preferred label is functionalist, basically because it emphasizes what I think is the case rather than what I think is wrong. But there’s been a long movement in the philosophy of mind objecting to functionalism with the problem of qualia (aka the hard problem, explanatory gap, etc). Illusionism is a response to that.

          1. “….my preferred label is functionalist, basically because it emphasizes what I think is the case rather than what I think is wrong.”

            But what is “wrong” with functionalism is the “elephant” in the room that no functionalist is willing to address; and that elephant is the premise upon which it is based. Therefore, one can emphasize what is the case, such as the fallibility of introspection among other things, until the cows come home, and it doesn’t negate the “fact of the matter” that functionalism is based upon an analogy.

          2. Could you elaborate on what you mean here by “analogy”? In what way is it an analogy that any concept beyond our immediate sensory impressions isn’t? For example, we only understand planets and solar systems by analogy since they’re too big and complex to be understood in and of themselves. Do you mean analogy in that sense, or some stronger one?

          3. I don’t know what you mean by:

            “In what way is it an analogy that any concept beyond our immediate sensory impressions isn’t?”

            In your view, is a computer a concept, an immediate sensory impression, or an analogy?

          4. The outside of a personal computer like a laptop or phone, and its responses and overall visible behavior are direct sensory impressions. But as far as I can see, we understand its operations through analogies, different ones in fact for different levels of abstraction. Understanding the file system (whose very name is an analogy) uses different analogies from understanding network operations, CPU, how a browser or blog works, etc. And it’s all concepts.

            As I noted in the most recent thread, analogies are just symbolic frameworks, and symbolic thinking is what allows us to understand anything beyond our immediate sensory environment. It’s how a hominid from the African savanna lands on the moon and sends probes to Pluto.

          5. “…….we understand its operations through analogies….”

            The fulcrum concept here is “understand” Mike. Now one can perpetuate this ridiculous word game by asserting “it all depends on what you mean by understand?”

            At the end of the day it’s really simple: one cannot use something that we do “understand” (computer) to explain some-thing that we “DO NOT UNDERSTAND” (mind). We are absolutely clueless on the latter, clueless; and insisting that something as complex as the mind is nothing more than an organic calculating machine is shortsighted and, in my opinion, downright disingenuous.

            Now, computationalism might have been fine back in the 90s when Chalmers first canonized the phrase “the hard problem of consciousness”, but it has been nearly thirty (30) years and everyone is still whipping the dead horse of computationalism. But hey, don’t give up because there are still those who insist that the world is flat and they have the “proof” to back it up.

          6. Why do you think clarifications are ridiculous? I think the actual “language game” involves hiding behind ambiguities to preserve cherished mysteries and beliefs. In any case, my use of “understand” is in the sense of being able to make predictions about something with some degree of accuracy.

            If we can’t use something we do understand to at least take a shot at explaining something we don’t (and then test that explanation), then how do we come to understand anything?

            You say we’re “clueless” about the mind, but then you make two assertions about it, that it’s complex, and that the complexity involves something other than computation. What do you base these assertions on?

          7. Can we at least agree that the mind is “complex”? And if so, I only have one assertion to account for: “that the complexity involves something other than computation.”

            My rationale is simple; we “UNDERSTAND” computation but we “DO NOT UNDERSTAND” the mind, nor can we account for our own first person phenomenal experience without defaulting to some form of duality or claiming that this so-called phenomenality is all an illusion. The final nail in the coffin comes from Gödel’s incompleteness theorems.

            The first incompleteness theorem states:
            “no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.”

            The second incompleteness theorem shows that:
            “the system cannot demonstrate its own consistency.”

            This theorem forcefully asserts that a “proof” is not actually derived by the rules that govern the system. The “sheer UNDERSTANDING of this fact” demonstrates that the mind, and how the mind works, is not computational, because an algorithm would keep running indefinitely trying to prove that which is unprovable.

            This is Roger Penrose’s position, one that I support and agree with…..

          8. Oh, I agree the mind is complex, but I’m saying that based on the many clues available.

            On understanding computation but not understanding the mind, this assumes that because we don’t understand the higher level organization of something, we necessarily don’t understand the lower level principles on which it works. It seems equivalent to saying that if we don’t understand something in geology, biology, chemistry, etc, then the laws of physics can’t be involved.

            The phenomenal experience that can’t be accounted for with computation is the bone of contention in this thread, so I’ll let everything I said above stand.

            I did a post on Gödel’s theorem years ago. https://selfawarepatterns.com/2015/12/28/godels-incompleteness-theorems-dont-rule-out-artificial-intelligence/
            In summary, there’s no evidence that human minds aren’t constrained by it as much as any other system. (It’s actually consistent with the idea that introspection must have limitations.) And positing that they aren’t seems to require new physics (Penrose’s pitch) or magic.
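
            To make the constraint concrete, here’s a toy sketch (the checker is a stand-in, purely illustrative). Proof search is semidecidable: the loop halts when a proof exists and runs forever when one doesn’t, and there’s no evidence we escape that any more than this program does:

                from itertools import count

                def naive_prover(is_valid_proof):
                    # Enumerate candidate proofs (coded as integers) and return the
                    # first one the checker accepts. Halts iff a proof exists; for
                    # an unprovable statement the loop simply never terminates.
                    for candidate in count():
                        if is_valid_proof(candidate):
                            return candidate

                print(naive_prover(lambda c: c == 42))  # a "provable" case: halts with 42
                # naive_prover(lambda c: False)         # an "unprovable" case: never returns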

          9. “In summary, there’s no evidence that human minds aren’t constrained by it as much as any other system.”

            That’s a quantum leap to make, taking into consideration that human minds are so different from any other system. It’s more like a leap of faith than something based in fact…..

            “(It’s actually consistent with the idea that introspection must have limitations.)”

            Introspection has limitations, but for the most part those limitations are self-imposed; and those self-imposed restraints are fundamentally driven by deeply entrenched prejudicial biases or cognitive dissonance.

            “And positing that they aren’t seems to require new physics (Penrose’s pitch)……..”

            New physics? Exactly; and that is my pitch as well.

          10. Mike,
            It seems to me that you didn’t challenge my reduction of illusionism. What you essentially did, I think, is discuss how you like to think about illusionism. Apparently that’s consistent with my simple reduction. But let’s get explicit so that people in general might straighten this business out. Is it fair to say that illusionists do not believe in any conception of qualia incited beyond worldly causal dynamics, though do believe in the converse? That’s specifically what I figured you’d agree with.

            There’s a bonus question here as well associated with my understanding of what ultimately incites illusionism. Do prominent illusionists believe that worldly causal qualia (like Matti’s functional beer taste for example), arise by means of information processing in itself? Thus any computational device could create such an experiencer by means of the right information processing alone?

          11. Eric,
            I think the reasoning we use to reach a conclusion is as important as the conclusion itself. Otherwise it’s not really a conclusion, but dogma, and we end up sniping at each other from different ideological camps. (Yeah, I know it frequently ends up that way anyway, but I at least try to make it about evidence and reason.)

            I’d also note, and this gets to your bonus question, that there are non-physicalist illusionists. They agree with functionalists that introspection is not to be trusted without scrutiny, but have other positions that lead them to still see something beyond causality. Another reason I prefer the functionalist label.

            But I would say the biggest champions of illusionism (Dennett, Frankish, etc) are interested in a causal account, which is what the information processing paradigm is about. Although they’d object to using the word “qualia” in a causal sense, given all the baggage associated with the word. They think a new vocabulary is required to get past all the remnant Cartesian intuitions.

          12. Yes Mike, the reasoning that we use to believe what we do is at least as important as what we believe. Sometimes we use faulty reasoning to support faulty beliefs. Or we can even use faulty reasoning to support valid beliefs, which is to say it’s possible for someone to accidentally be right. Above I alluded to primary motivation however. We should all have this, and beyond any self affirming notions like “I only seek to grasp what’s true”. Bullshit! We’re all biased…you, me, Lee, Matti, and everyone else I’m sure. But some of us should naturally be biased to more effectively display what’s true than others of us. So that’s the real question I think. Who among us have developed more effective forms of bias? For fun I’ll now get into my perception of both mine and yours. Of course this can be a sensitive subject and should unfortunately characterize me too favorably since it’s my story. I think you’ll be relatively fine with this however. You’re of course welcome to amend whatever you like!

            I was probably 15 when I realized what I consider to be my greatest epiphany. The morality crap that my parents and society in general had been feeding me had failed for too long — it always seemed inconsistent with my observations of how things actually worked. My observations of people in general finally made sense to me when I realized that we’re all self interested products of our circumstances. From the most despicable sadistic despot to the most caring and wonderful friend, I realized that each of us are motivated to feel as good as we possibly can given whatever circumstances we happen to be under. I went off to college hoping to expand my premise, though was disappointed to find that it wasn’t yet accepted in mental/behavioral science in general (or obviously philosophy where nothing’s generally accepted). I discovered that the only science that did accept my premise was the reasonably hard science of economics. So I earned a degree in that. This was mostly gratuitous however since my only occupation since that time has been construction. The passion of my youth has always been strong for me in this regard however. I’m now 54 and have been scheming since that time about how I might help general soft science harden up by accepting the premise that struck me so strongly as a kid.

            As for you, as I understand it you were extremely moved by science fiction from a young age. I think you’ve mentioned that part of the fun was imagining that these sorts of things might happen in the future. Then as you got into science you were disappointed to find how much sci-fi was just plain wrong. So here you’d mainly have to enjoy the stories as fictional entertainment rather than also as a preview of technology that science might some day provide humanity, a second type of fun. I believe it was the failure of sci-fi as a model that helped you give up the theism that you’d been indoctrinated into, since that obviously doesn’t make any more causal sense than various outlandish sci-fi scenarios.

            So here’s my perception of where you’d strongly fall into the hands of Dennett. He’s widely thought of as one of the strongest supporters of atheism, and he uses this premise to brand a sci-fi friendly conception of how the brain creates mind. The claim is that causality mandates that this be by means of information processing alone — any other option is instead classified as otherworldly. Since all sorts of sci-fi scenarios thus gain credence, I think you’re naturally inclined to such belief since it makes thinking about those scenarios so much more fun.

            That’s my current assessment of each of us in this regard. While my founding bias is that utility (or sentience, or causal qualia, or whatever) constitutes all that’s valuable to anything, anywhere, your founding bias is that utility (or sentience, or causal qualia, or whatever) exists by means of information processing alone. Mine makes sense to me and yours makes sense to you. Furthermore while they shouldn’t inherently contradict, my own naturalism has me doubting that information processing alone could causally create what we’re talking about here. I believe another step is mandated, or mechanistic instantiation of such processed information. This should be problematic for you however since it should impinge upon all sorts of the sci-fi scenarios that you’ve historically been inclined to hope aren’t false.

            One can’t moderate their biases even potentially if they don’t explicitly grasp them. Thus I’d like to know mine. But I hope you’re okay with the speculation here since your blog adds so much to my life! I’d rather add interest to it while I do my thing rather than pick on you or anyone else here. I’ve often thought of you in terms of “The goose that laid the golden eggs”, and so would be quite troubled to hinder rather than promote that production.

          13. Eric,
            You’ve told me your story before and I know I’ve told you mine about the role science fiction played in my worldview. And I’ll fully admit to being intrigued by the science fiction possibilities that my understanding of the mind implies. But you get the causality backwards. I don’t hold that view because of the science fiction implications, but am drawn to the science fiction that recognizes it.

            Consider that I also cut my teeth on other types of science fiction, such as Dune, Asimov’s Foundation series, A.E. Van Vogt’s stories featuring characters with mind powers, and a lot of other classic stuff along those lines, where a more exotic version of the mind fits right in. And of course, I enjoy the fantasies of J.R.R. Tolkien and George Lucas, where outright magical versions prevail. Also one of my favorite recent sci-fi series, The Expanse, (spoiler alert) features a quantum version of consciousness as a pivotal plot point. (Lamentably, the TV show never got to that point.) So there are plenty of sci-fi scenarios to get excited about with different models of the mind.

            I will admit to a bias favoring explanations over mysteries. I prefer even speculative explanations to just accepting mystery. Honestly, I’m repelled by the impulse many have to just accept, or even maximize, those mysteries. As I’ve admitted before, sometimes it makes me impatient with real mysteries. But it’s a bias I’m aware of and try to guard against. I’m sure many will insist I fail on their favorite mystery, but all I can do is try my best.

            Thanks for your kind words about the blog. As I noted to Matti, it’s a group effort, a collective impulse to try to have real conversations rather than just trolling each other, which I’m grateful for.

          14. “On understanding computation but not understanding the mind, this assumes that because we don’t understand the higher level organization of something that we necessarily don’t understand the lower level principles on which it works.”

            For the sake of your readers I will modify your assessment to reflect the specificity of my point and then let it stand:

            On UNDERSTANDING computation but NOT UNDERSTANDING the mind; this “FACT OF THE MATTER” demonstrates that the reason we don’t understand the higher level organization of MIND is because we DO NOT UNDERSTAND the lower principles on which that mind works.

            However, this does not mean that physics is not involved, it simply means that this physics is not UNDERSTOOD…..
