Chalmers’ theory of consciousness

Ever since I shared Ned Block’s talk on it, phenomenal consciousness has been on my mind.  This week, I decided I needed to go back to the main spokesperson for the issue of subjective experience, David Chalmers, and his seminal paper, Facing Up to the Problem of Consciousness.

I have to admit I’ve skimmed this paper numerous times, but always struggled after the main thesis.  This time I soldiered on in a more focused manner, and was surprised by how much I agreed with him on many points.

Chalmers starts off by acknowledging the scientifically approachable aspects of the problem.

The easy problems of consciousness include those of explaining the following phenomena:

  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

But his main thesis is this point.

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

My usual reaction to this is something like, “You’re holding up two puzzle pieces that fit together.  Everything you need is in what you call the ‘easy problems’!”  In Chalmers’ view, this puts me into a group he labels type-A materialists, a group including people like Daniel Dennett and Patricia and Paul Churchland.

The distinction between the two viewpoints is best exemplified by remarks Chalmers makes in his response paper to the many commentaries on the Facing paper.  Daniel Dennett in particular gets singled out a lot.

Dennett’s argument here, interestingly enough, is an appeal to phenomenology. He examines his own phenomenology, and tells us that he finds nothing other than functions that need explaining. The manifest phenomena that need explaining are his reactions and his abilities; nothing else even presents itself as needing to be explained.

This is daringly close to a simple denial – 

(Note: Dennett’s commentary on Chalmers’ paper is online.)

However, Chalmers later makes this admission:

Dennett might respond that I, equally, do not give arguments for the position that something more than functions needs to be explained. And there would be some justice here: while I do argue at length for my conclusions, all these arguments take the existence of consciousness for granted, where the relevant concept of consciousness is explicitly distinguished from functional concepts such as discrimination, integration, reaction, and report.

Here we have a divide between two camps, one represented by Chalmers, the other by Dennett, staring at each other across a gap of seemingly mutual incomprehension.  One camp sees something inescapably non-functional that needs to be explained, the other sees everything plausibly explainable in functional terms.  Both camps seem convinced that the other is missing something, or maybe even in denial.

Speaking from the functionalist camp, I will readily admit that I do feel the profound nature of subjectivity, of the fact we exist and experience reality with a viewpoint.  I don’t feel like an information processing system, a control center for an animal.  I feel like something more.  The sense that there has to be something in addition to mere functionality is very powerful.

The difference, I think, is that functionalists don’t trust this intuition.  It seems like something an intelligent social animal concerned with its survival and actualization might intuit for adaptive motivational (functional) reasons.  And it seems to resonate with many other intuitions that science has forced us to discard, like the sense that we’re the center of the universe, that we’re separate from and above nature, or that time and space are absolute, among many others.

But are we right to dismiss the intuition?  Maybe the mind is different.  Maybe there is something here that normal scientific investigation won’t be able to resolve.  After all, we only ever have access to our own subjective experience.  Everything beyond that is theory.  Maybe we’re letting those theories cause us to deny the more primal reality.

Perhaps.  In the end, all we can do is build theories about reality and see which ones eventually turn out to be more predictive.

Anyway, as I mentioned above, I’ve always struggled with the paper after this point, generally shifting to skim mode.   This time, determined to grasp Chalmers’ viewpoint, I soldiered on, and got the surprises I mentioned.

First, Chalmers, while being the one who coined the term “hard problem of consciousness”, does not see it as unsolvable.  He’s not one of those who simply say “hard problem”, fold their arms, and stop.  He spends time discussing what he thinks a successful theory might look like.

In his view, experience is unavoidably irreducible.  Therefore, any theory of it would likely be a fundamental one, similar to fundamental scientific theories that involve spin, electric charge, or spacetime, which accept those concepts as brute facts.  In other words, a theory of conscious experience might look more like a theory of physics than a biological, neurological, or computational one.

Such a theory would be built on what he calls psychophysical principles or laws.  This could be viewed as either expanding our ontology into a super-physical realm, or expanding physics to incorporate the principles.

But what most surprised me is that Chalmers took a shot at an outline of a theory, and it’s one that, at an instrumental level, is actually compatible with my own views.

His theory outline has three components (with increasing levels of controversy).

The principle of structural coherence.  This is a recognition that the contents of experience and functionality intimately “cohere” with each other.  In other words, the contents of experience have neural correlates, even if experience in and of itself isn’t entailed by them.  Neuroscience matters.

The principle of organizational invariance.  From the paper:

This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise.

This puts Chalmers on board with artificial intelligence and mind copying.  He’s not a biological exceptionalist.

The double-aspect theory of information.  This is the heart of it, and the part Chalmers feels the least confident about.  From the paper:

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing.

In other words, the contents of conscious experience are built from functional neural processing, functionality which is multi-realizable, and experience itself is rooted in the properties of information.

There are two other aspects of this theory that are worth mentioning.  First, note the “information (or at least some information)” phrase.  This shows Chalmers’ attraction to panpsychism.

Honestly, if I had the conviction that the existence of experience was inherently unexplainable in terms of normal physics, panpsychism would be appealing.  Seeing nascent experience everywhere, but concentrated by certain functionality, frees someone with this view from having to find any special physics or magic in the brain, providing a reconciliation with mainstream neuroscience.  Indeed, under Chalmers’ principles, it’s actually instrumentally equivalent to functionalism.

The other aspect worth mentioning is the danger of epiphenomenalism, the implication that experience is something with no causal power, which would be strange since we’re discussing it.  Chalmers acknowledges this in the response paper.  If the physics is causally closed, where does experience get a chance to make a difference?

Chalmers notes that physics explores things in an extrinsic fashion, in terms of the relations between things, not intrinsically, in terms of the things in and of themselves.  In other words, we don’t know fundamentally what matter and energy are.  Maybe their intrinsic essence includes an incipient experiential aspect that contributes to their causal effects.  If so, it might allow his theory to avoid the epiphenomenalism trap.  (Philip Goff more recently discussed this concept.)

To be clear, Chalmers’ theory outline carries metaphysical commitments a functionalist doesn’t need.  However, that aside, I’m surprised by how close it is to my own views.  I have no problem with his first two principles (at least other than the limitation he puts on the first one).

The main difference is in the third component.  I see phenomenal properties as physical information, and phenomenal experience overall as physical information processes, without any need to explicitly invoke a fundamental experiential aspect of information.  In my mind, experience is delivered by the processing, but again that’s the functionalist perspective.  The thing is, the practical results from both views end up being the same.

So in strictly instrumental terms, my views and Chalmers’ are actually in alignment.  We both turn to neuroscience for the contents of consciousness, and both of us accept the possibility of machine intelligence and mind copying.  And information is central to both views.  The result is that we’re going to make very similar, if not identical, predictions, at least in terms of observations.

Overall then, my impression is that while Chalmers is convinced there is something in addition to the physics going on, at least known physics, he reconciles that view with science.  Indeed, if we interpret the non-physical aspects of his theory in a platonic or abstract manner, the differences between his views and functionalism could be said to collapse into language preferences.  Not that I expect Chalmers or Dennett to see it this way.

What do you think?  Am I being too easy on Chalmers?  Or too skeptical?  Or still not understanding the basic problem of experience?  What should we think about Chalmers’ naturalistic, law-driven dualism?

This entry was posted in Mind and AI.

116 Responses to Chalmers’ theory of consciousness

  1. Lee Roetcisoender says:

    Sorry Mike, but I can’t seem to help myself. There are two suppressive antecedents which stand in the way of resolving the “hard problem”. First, there is the paradox of dualism itself and second, there is the combination problem. Both of these obstacles are huge, nevertheless, both of these paradoxes have been easily overcome by my own models. And when I use the term “easy”, I mean ridiculously easy. Basically here is the method: When one discovers that they have painted themselves into the corner of a room, how does one get out of that predicament? The easy answer: Grab yourself a can of paint with a different color and paint your way back out.

    The leading proponents of panpsychism, the likes of Phillip Goff, Sam Coleman and Chalmers are at an insurmountable impasse and unable to overcome those obstacles because they have painted themselves into a corner. Having resolved those two issues myself, I can say with an unwavering certainty: They will “never” be able to overcome those obstacles without help.


    • No need to apologize Lee. I’m glad you’re here!

      I agree that Chalmers and the rest do paint themselves into a corner. Although I have some sympathy with how they got there. Everything starts from a deep visceral conviction that functional or physical explanations don’t explain the existence of phenomenality.

      When you say “ridiculously easy” for your models to overcome the issue, do you care to elaborate?


      • Lee Roetcisoender says:

        I used to post on Goff’s website in the past. Phillip doesn’t engage much, but he asked me once how my models overcome the combination problem. I’m thinking to myself: “Does he really believe that I’m going to just hand over my trade secret so that he can use it?” So I threw him a bone, hoping he would pick up on it and be intrigued enough to engage with me as someone who just “might” have something to contribute. That was the end of the discourse. Phillip is just like every other skeptic, nobody “really” wants to know. Academics make their living debating these sort of things, they’re not interested in a resolution. But being human themselves, they would be more than eager to jump on that bus when it leaves town.

        You’re too young to remember the hippie generation of the late 60s and early 70s Mike. Hitch hiking was really popular back then. I remember seeing a sign in the back window of a VW bus that read: “Gas, grass or ass, nobody rides for free.” When it comes to the hard problem of consciousness, I’ve got the only mode of transportation in town and nobody rides for free.


        • Fair enough Lee, but if you have insights and never put them out there, no one will ever know about them. Chalmers and others like him are famous because they put their ideas out for everyone to see. Of course, they also have books for purchase, but no one would buy them if they didn’t first advertise their ideas by outlining them.

          In other words, consider hooking people with free samples.

          I am old enough to remember the “Gas, grass, or ass” sticker in Spencer’s Gifts!


          • Lee Roetcisoender says:

            I remember a Paul Harvey segment where he commented on a survey that was conducted, asking people what the two greatest threats facing America were. According to the survey, the correct answer to the question was number one, ignorance and number two, apathy. The overwhelming response to the question of the survey was: number one, “I don’t know” and number two, “I don’t care.” I find it ironic that my models make the same predictions, it wasn’t something I expected when I was working through the progression.


      • Lee Roetcisoender says:

        There’s two reasons why people paint themselves into a corner Mike. Number one, they like to paint and number two, they like the color; say green for this example. So when I say the resolutions to these type of conundrums are ridiculously easy, it works like this: I freely hand whoever it is who painted themselves into a corner a can of red paint so they can paint themselves out.

        Was that solution ridiculously simple? Yes. Will that solution actually work and get them out of the corner that they painted themselves into? Yes. But here’s the real problem Mike. Even though the answer is ridiculously simple, this particular individual in my illustration objects to the resolution because they don’t like red. This is in spite of the “fact” that a can of red paint actually solves their dilemma. In other words, red does not correspond to their own model of what color they “want” the floor to be. In other words, simple solutions in and of themselves will not resolve the issue of a preconceived perception, a perception which is deeply ingrained by an even stronger conviction. We create our own reality Mike, and those intellectual constructions are called delusions.


        • People are definitely locked into their own preconceptions, and they don’t always take the way out when it’s provided. But be careful. It’s easy to see it in others, and very difficult in ourselves. There’s something to be said for putting our reasoning out there to see what flaws others can find.


  2. keithnoback says:

    When you say “psychophysical” you have already made a mistake. All our explanations are phenomenology, and our knowledge, with associated theory, is simply reliant on the web of relata.
    It should not be difficult to anticipate some difficulty in trying to explain a brute fact.


    • The psychophysical term does inherently assume there are physical and non-physical laws. Maybe we should just focus on the scientifically establishable laws. But I’m actually just happy to see that a search for laws or principles is there. The labeling itself could be seen as arising from a limitation of language.

      I agree about the difficulty of brute facts. But I think everyone agrees that brute phenomenal primitives map to neural patterns that have vast underpinnings. The question is whether the existence of phenomenality in and of itself is one of those brute facts.


      • keithnoback says:

        If you flip the map so that it is right-side up, I think the problem is solved.


        • But which way is right-side up? 🙂


          • keithnoback says:

            The other way.
            Aren’t the contents (relevant facts) of our theories representations of the aspect of experience to which the theory refers?
            We frequently have that flipped, thinking that we experience mass, for instance, when mass is a representation of an aspect of experience.


            I agree that considering the contents of experience separate from experience in and of itself is part of the problem. (If that’s what you meant.) For me, experience is the sum of that content, along with the processing that generates and utilizes it. If there is a brute fact here, that’s it. But that’s me being all functionalist again.


          • keithnoback says:

            I still think that you, and Chalmers, have it a little backwards (if I may be so bold). To say “explain[s] the emergence of experience from the physical” is to suggest the emergence of the explanandum from the explanans, which is not our experience at all!
            If that is how one proceeds, then one ends up with an epiphenomenon on his hands, or is stuck trying to functionalize the existence of something (having experiences).


          • Definitely be bold if you think I’m wrong. (Or Chalmers.) But keep in mind that my and Chalmers views aren’t the same, at least metaphysically. I don’t think experience has to emerge from the physical. I think it is physical.

            I agree epiphenomenon is a danger in Chalmers’ model. He addresses it with the discussion about the intrinsic nature of matter and energy possibly being experiential. The problem I see is that matter and energy seem to obey rigid principles without any room for volition. If that’s experiential, then the experiential seems identical to physics and therefore subject to being removed by Occam’s razor.


          • keithnoback says:

            Yes, it always seemed to me that property dualism saved the ‘thusness’ of our experiences from epiphenomenalism on a technicality.
            I agree with you, that’s what physicality is.


  3. Steve Ruis says:

    Re “Speaking from the functionalist camp, I will readily admit that I do feel the profound nature of subjectivity, of the fact we exist and experience reality with a viewpoint. I don’t feel like an information processing system, a control center for an animal. I feel like something more. The sense that there has to be something in addition to mere functionality is very powerful.”

    I suspect that this “feeling” may be a consequence of other powers. I think what we call imagination is pattern extension ability that plays a role in us being able to anticipate and deal with the future. We have just acquired a small dog. Each of us has reported bizarre thoughts concerning the dog’s safety. Me, in having the dog break off his leash and run out into car traffic, with me subsequently carrying his broken body to the hospital. My partner reported that carrying the dog on our 22nd story balcony caused an image of the dog jumping out of her arms and over the rail. I suspect that these are possible futures that we are to prevent but I find the thoughts strange.

    So, what happens if our imaginative power is directed at our own cognition? Are those “feelings” and “thoughts” to be trusted or are they just scenarios for us to consider going forward. Is a feeling of there being “something more,” a real clue or just an imagined possibility?

    This is a long post and is near my tl;dr limit, so I have only read the first half and will read the other soonish. I may have different thoughts by then … (d’ya think?).


    • Some of your remarks fit in with the conversation on the last post about sensory predictions and what affects them.

      I agree that a lot of our feelings about ourselves are predictions about that self, using a model that is adaptive for everyday life but not necessarily accurate. In other words, introspection is not reliable when it comes to what’s actually going on in the mind.

      This post is longer (1778 words) than my usual ~500 words. Sorry. I considered breaking it up, but I think the conversations on the whole might be better than small conversations on the parts.


    • Wyrd Smythe says:

      No doubt such thoughts come from the collective cultural gestalt. You’ve probably seen scenes in a movie where a dog chased a ball out a window. Or scenes where dogs escape their leashes.


  4. paultorek says:

    Somehow, Type B materialism got lost in the shuffle. That’s a shame, because I think it’s the truth. Type B materialism says that for some physical description yet to be discovered, “consciousness is (insert physical description here)” is like the sentence “water is H2O”. It expresses a necessary truth, but not an a priori discoverable truth. It ignores the bulk of Chalmers’s arguments, because those arguments highlight conceptual differences, which are beside the point. Conceptual differences only show what is (not) a priori discoverable. But you can’t do ontology using only conceptual analysis. Sorry. It just doesn’t work.

    Now, an unrelated nitpick. You say “This puts Chalmers on board with artificial intelligence and mind copying,” in response to Chalmers’s “any two systems with the same fine-grained functional organization will have qualitatively identical experiences.” In principle, OK; in practice, maybe not.

    The key words are “fine-grained”. How fine a grain do we need? If everything about the way neurons talk to each other matters, it seems unlikely that silicon devices will follow suit. In practice, AI researchers are going to find new and different ways, probably “better” ways from a purely functional perspective, to do intelligence. Airplanes don’t flap their wings the way birds do, but they can fly faster and carry more weight. Submarines don’t swim. Even at a very coarse grained look, our inventions function differently than the natural systems they vaguely resemble.


    • Chalmers does address type-B materialism at length, but this post was already pretty long. Here’s his assessment of it.

      Type-A materialism offers a clean and consistent way to be a materialist, but the cost is that it seems not to take consciousness seriously. Type-B materialism tries to get the best of both worlds. The type-B materialist accepts that there is a phenomenon that needs to be accounted for, conceptually distinct from the performance of functions, but holds that the phenomenon can still be explained within a materialist framework. This is surely the most attractive position at a first glance. It promises to avoid the extremes of both hard-line reductionism and property dualism, which respectively threaten to deny the phenomenon and to radically expand our ontology.

      I was attracted to type-B materialism for many years myself, until I came to the conclusion that it simply cannot work. The basic reason for this is simple. Physical theories are ultimately specified in terms of structure and dynamics: they are cast in terms of basic physical structures, and principles specifying how these structures change over time. Structure and dynamics at a low level can combine in all sort of interesting ways to explain the structure and function of high-level systems; but still, structure and function only ever adds up to more structure and function. In most domains, this is quite enough, as we have seen, as structure and function are all that need to be explained. But when it comes to consciousness, something other than structure and function needs to be accounted for. To get there, an explanation needs a further ingredient.

      http://consc.net/papers/moving.html

      All in all, I have to agree with Chalmers here. Once you give up on function, you unavoidably slide into his waters. (I do find it annoying that he thinks explaining experience functionally is not taking it seriously.)

      On the fine-grained point, I think we know we’ve reached the proper functional level when the outputs of the new system match those of the original given the same inputs. You might argue that the differences might lead to no experience, but that presupposes it’s possible to produce the outputs without experience, in other words, that p-zombies are possible. I think no matter what, to produce those outputs, the experience will be embedded somewhere in the implementation. (Admittedly this is a functional view. I’m not sure how Chalmers would answer.)

      You might also argue that the new system’s experience might be different, affected by any underlying differences. That is possible. But the new system won’t know the difference. Its memory of being the original system will be solely in terms of its current experience.

      I agree that native AI will be different. There’s no reason to suppose that the idiosyncrasies of human consciousness will necessarily be productive for them. They will be different. Alien. Will those differences mean they’re not conscious? I don’t think there’s a fact of the matter answer, but if they have exteroception, interoception, attention, and imagination, I think so.


      • paultorek says:

        “Once you give up on [structure and] function, you unavoidably slide into his waters.” That just isn’t so, not when you properly disambiguate what is being “given up.” What I’m giving up is any *conceptual* tie between consciousness and structure-and-function. What I’m not giving up is an *ontological* tie. Just as I give up any *conceptual* tie between “water” and “H2O”, without denying that water is H2O.

        “You might argue that the differences might lead to no experience, but that presupposes it’s possible to produce the outputs without experience, in other words, that p-zombies are possible.” That’s not what p-zombies are. p-zombies are not just *functionally* like a regular person, they’re supposed to be *physically* identical as well.

        However, I wouldn’t argue that the differences in (real world, not-so-human-like) AI would lead to no experience, but that it would lead to radically different experiences, ones to which I cannot relate. So I wouldn’t be interested in “mind uploading” except insofar as the resulting AI might be able to see after the welfare of my kid, and so on.


        • I’m not sure if I’m grasping the conceptual tie vs ontological tie division. Water is H2O. That seems like both a conceptual and ontological tie. And it seems like it’s a reduction of the water concept by thinking of it in terms of molecular chemistry. I’m not saying your point isn’t valid, just that I’m not grasping it.

          “That’s not what p-zombies are.”

          Sorry. I should have said b-zombie.

          “So I wouldn’t be interested in “mind uploading” except insofar as the resulting AI might be able to see after the welfare of my kid, and so on.”

          I actually don’t have any problems with that interpretation. Depending on how close it actually is, I do think someone could see the AI as them, but either interpretation is valid.


          • paultorek says:

            Try this sentence on for size:
            “After investigating fraud in science, you ultimately discover that many seemingly solid results were actually fraudulent, including most astonishingly, the equation water = H2O.”
            If your reaction is that’s not gonna happen then water and H2O are actually different concepts for you. If they were the same concept you would have said “huh? that last part didn’t make any sense.” The fact that the sentence makes a comprehensible claim, however firmly you reject it, shows that the concepts are different.


            Thanks for the clarification. I also went back and reread Chalmers’ assessment of type-B. (I really had just skimmed it on the first reading.) I see what you’re saying now, although the distinction between type-A and type-B remains pretty hazy for me since any type-A I know is still very interested in correlations between subjective qualities and objective physical processes.

            Maybe the difference is that a type-A expects all the correlations between phenomenal content and objective processes, summed together, to account for phenomenality, whereas maybe a type-B expects a correlate for phenomenality itself?

            In any case, I’m curious what your thoughts are about Chalmers’ criticisms (particularly this passage)?

            Clark and Hardcastle’s answer is to augment one’s account of physical processes with an “identity hypothesis” (Clark) or an “identity statement” (Hardcastle), asserting that consciousness is identical to some physical or functional state. Now, it is certainly true that if we augment an account of physical processes with an identity statement of this form, the existence of consciousness can be derived; and with a sufficiently detailed and systematic identity statement, detailed facts about consciousness might be derived. But the question is now: what is the relationship between the physical facts and the identity statement itself?

            Neither Clark nor Hardcastle gives us any reason to think that the identity statement follows from the physical facts. When answering the question “why does this physical process give rise to consciousness?”, their answer is always “because consciousness and the physical process are identical”, where the latter statement is something of a primitive. It is inferred to explain the correlation between physical processes and consciousness in the actual world, but no attempt is made to explain or derive it in turn. And without it, one does not come close to explaining the existence of consciousness.

            This identity statement therefore has a very strange status indeed. It is a fact about the world that cannot be derived from the physical facts, and therefore has to be taken as axiomatic. No other “identity statement” above the level of fundamental physics have this status. The fact that DNA is a gene can be straightforwardly derived from the physical facts, as can the fact that H2O is water, given only that one has a grasp of the concepts involved. Papineau (1996) argues that identities are not the sort of thing that one explains; I think this is wrong, but in any case they are certainly the kind of thing that one can derive. Even the fact that Samuel Clemens is Mark Twain, to use Papineau’s example, could be derived in principle from the physical facts by one who possesses the relevant concepts. But even if one possesses the concept of consciousness, the identity involving consciousness is not derivable from the physical facts.

            http://consc.net/papers/moving.html

            Like

          • paultorek says:

            Maybe the difference is that a type-A expects all the correlations between phenomenal content and objective processes, summed together, to account for phenomenality, whereas maybe a type-B expects a correlate for phenomenality itself?

            No, because then, as a mereological universalist (for properties as well as objects), I’d be both types.

            My first response to Chalmers’s argument is to question whether facts are physical things. Physics posits states, processes, objects, and fields, but whether these items compose “facts” is a tricky proposition. In fact, the most common definition of “fact” is a true proposition. But propositions are abstracta – aren’t they? – rather than physical things.

            Thus when Chalmers writes:

            This identity statement therefore has a very strange status indeed. It is a fact about the world that cannot be derived from the physical facts, and therefore has to be taken as axiomatic.

            he severely mis-characterizes the role of derivation in physics and our knowledge of physical facts. Physics is only occasionally about derivation. It’s mainly about inference to the best explanation. Raw hypotheses are all over the place, at many levels of description, and are rationally acceptable because of their role in an overall set of explanations.

            If I see Superman and Clark Kent appearing in all the same places at around the same times, I’ll say: I get it: Clark Kent is Superman! Does this hypothesis have a “very strange status indeed”? No. Nothing could be more ordinary. (Well OK, Superman isn’t very ordinary, but you get what I mean.)

            Like

          • Hmmm. I’m not sure if I’d ever heard of mereology before, or of the controversies between mereological universalism, restrictivism, or nihilism. Interesting. Thanks!

            “But propositions are abstracta – aren’t they? – rather than physical things.”

            I don’t know what Chalmers would say here, but I think that abstract things are ultimately physical. When we talk about something like a proposition, we can either be talking about the reality it refers to, or the representation of that reality (in our brains if nowhere else). In both cases, it seems to me like we’re talking about something physical. What makes abstractions distinct is that the representation refers to another representation, a mental model. Platonists insist such things have an independent existence, but I’ve never been able to see what makes that view necessary.

            Thanks for the response to the Chalmers critique. I think his point would be that, even after a relationship is inferred, the derivation can usually be conceptualized but, he asserts, it can’t between physics and conscious experience. I personally think it can, but then I’m on the other side of this. But if it can’t, then I’m not sure how identity in and of itself rescues it. The correlation does seem like it ends up being a brute fact.

            But it seems that, operationally, there isn’t much difference between type-A and type-B materialists. Both are interested in finding neural correlates. When a physical correlate of an experience is sufficiently isolated, both will consider that mechanism to be the experience. Unless I’m missing something?

            Like

      • Wyrd Smythe says:

        “On the fine-grained point, I think we know we’ve reached the proper functional level when the outputs of the new system match those of the original given the same inputs.”

        This is where, it seems to me, computationalism begs the question. It assumes you can ever reach a point where a machine gives you the same outputs.

        Because I agree with the b-zombie implication. If you had those outputs, it seems the system must be having phenomenal experience.

        My skepticism involves whether it’s possible to get there. Whether a machine can ever produce those outputs.

        Like

  5. Wyrd Smythe says:

    (Long post = long comment!) As I’ve mentioned, I find myself fairly aligned with Chalmers, too. (Much more than with, say, Dennett.)

    “The sense that there has to be something in addition to mere functionality is very powerful.”

    Of course, given my sympathies, I see that as denial. It isn’t that there “has to be” something more. It’s that, as you’ve just admitted, there is something more. 🙂

    “The difference, I think, is that functionalists don’t trust this intuition. […] And it seems to resonate with many other intuitions that science has forced us to discard,…”

    I’ve heard that before, and it’s always seemed like very weak reasoning to me. You don’t trust intuition X because many other intuitions have proven false? What about the ones that have proven true?

    Pointing out the intuitions of those who saw the Earth as central seems to deny the intuitions of those who didn’t. Or consider the story about Friedrich Kekulé’s intuition of the benzene ring. Our reality is filled with intuitions that proved true.

    Isn’t it just wishful thinking to hang a “probably false intuition” on this?

    “Maybe the mind is different.”

    😀 😀 😀

    “He’s not one of those who simply say ‘hard problem’, fold their arms, and stop.”

    You intrigued me. Who are the ones that do?

    “In his view, experience is unavoidably irreducible.”

    Seems so. How do you break down what it is like to see blue or smell onions or taste salt? These things are made of parts, but no part carries the experience. Like a school of fish, or a flock of birds, something collective and holistic emerges.

    “The principle of structural coherence. […] Neuroscience matters.”

    Totally down with this. I’d go further to suggest the physicality of neuroscience matters.

    “The principle of organizational invariance.”

    This is where Chalmers and I start to part ways. I wrote about this extensively recently. I think Chalmers doesn’t make the case that computation does, in fact, preserve the causal patterns of the brain.

    A physical emulation certainly does, and I think we all agree on a Positronic brain, but I still see major unanswered questions when it comes to a numerical simulation preserving the causal patterns.

    “This puts Chalmers on board with artificial intelligence and mind copying.”

    I’m not so sure about the latter if he sees phenomenal properties as not yet physically understood. He’s explicitly said he supports machine cognition, which can cover the AI part, but does it include mind copying?

    “The double-aspect theory of information.”

    Color me skeptical on this one. Smells of panpsychism to me.

    “If the physics are causally closed, where does experience get a chance to make a difference?”

    Indeed. Part of what makes it the Hard Problem.

    “In other words, we don’t know fundamentally what matter and energy are.”

    And we’re even less certain about time and space!

    “What should we think about Chalmers’ naturalistic and law-driven dualism?”

    I think there may be something inescapably dual when it comes to the mind. There are the older forms, spiritual dualism, substance dualism, but I’ve pointed out more than once that computationalists are necessarily “algorithmic” dualists.

    It seems hard to see the brain and mind as entirely one and the same. The mind emerges from the brain (somehow), so how is that not, in some sense, two things?

    Liked by 1 person

    • “Isn’t it just wishful thinking to hang a “probably false intuition” on this?”

      When all we have is the intuition, then I think it’s rational to doubt it. If we get data backing up the intuition, then that obviously changes things. I see a lot of data for functionality, but none for anything else. But as I said in the post, maybe this one thing is different.

      “Who are the ones that do?”

      A lot of people I’ve encountered online over the years. However, it’s possible the pattern of the conversation is the issue. It usually goes something like this:
      Person: Isn’t X mysterious?
      Me: Well, actually the neuroscience explains or provides insight into it by (details)
      Person: That explains nothing because (some description of the hard problem)

      Peter Hankins, who does see the hard problem, once lamented that at least the reductionists are trying: https://www.consciousentities.com/2018/08/mary-and-the-secret-stones/

      I also perceive the lack of progress on non-reductive explanations, and the apparent acceptance of that by its advocates, as essentially stopping. At least in that paper, Chalmers seemed to be trying. But that was 24 years ago! It seems like the other non-reductionists largely repeat his points.

      But maybe they’re making progress I’m not aware of? I have to admit I’m not read in that literature, but if progress were being made, I’d think some of it would make the news.

      “He’s explicitly said he supports machine cognition, which can cover the AI part, but does it include mind copying?”

      He details his thoughts on copying in a paper: http://consc.net/papers/uploading.pdf

      He is open to the possibility that it may not be technically feasible. I am too for that matter. I just find most of the reasons people give against it unconvincing.

      “The mind emerges from the brain (somehow), so how is that not, in some sense, two things?”

      When discussing this, I think we have to clarify whether we’re talking about epistemic or ontological dualism, and if ontological dualism, whether we’re talking about something that already exists or could be technologically constructed.

      There’s no doubt that epistemically, sometimes it’s easier to think in terms of the mind rather than the brain. That’s just finding it productive to look at different layers of abstraction with their own distinctive models. Nothing about that necessarily implies any ontological dualism.

      Then there’s the question of whether the mind is ontologically separate from the brain in natural biology. Everything I know about neurology seems stacked against that.

      But then the question is, can the relevant functional information be copied to another substrate and executed? If so, then that seems like a technologically produced type of dualism.

      Of course, you could argue that if that’s possible, then that sort of dualism was always there. Or you could insist that the copy is not the original. I can’t see that there’s a fact of the matter answer on that. It will depend on your philosophy.

      Like

      • Wyrd Smythe says:

        “I see a lot of data for functionality, but none for anything else.”

        The perception in question is one we all have with great consistency. I should think that gives it some weight. We all perceive the sky to be a color we label “blue” — what makes this universal perception so dubious?

        What data do you mean that accounts for what you wrote in the post: “I will readily admit that I do feel the profound nature of subjectivity, of the fact we exist and experience reality with a viewpoint. I don’t feel like an information processing system, a control center for an animal. I feel like something more.”

        Without making any claim of magic for “something more” it still seems to be “something more” and that perception is universal (modulo some serious philosophical skeptics). I’m not sure it can be hand-waved away as having to be a pre-Copernican point of view. I’m not sure why the philosophical skeptics necessarily have the upper hand on this.

        “When discussing this, I think we have to clarify whether we’re talking about epistemic or ontological dualism,”

        FWIW, I rarely discuss things in epistemic terms. I’m all about the ontology. 🙂

        “Then there’s the question of whether the mind is ontologically separate from the brain in natural biology.”

        Do you consider emergent properties (ones not apparent in the parts) to be ontologically distinct? Backing up a step, do you consider the mind an emergent property of the brain?

        “But then the question is, can the relevant functional information be copied to another substrate and executed? If so, then that seems like a technologically produced type of dualism.”

        This gets down in the weeds a bit, but I’ve been thinking about how there are (at least) three very distinct classes of approach here: (1) model the brain; (2) model the mind; (3) model the functionality. It’s possible 2 and 3 amount to the same thing, but maybe not.

        Modeling the brain seeks to duplicate the biology, figuring a good-enough simulation should capture all aspects of the brain, including mind. There’s nothing really dual here. It just assumes mind is something a brain does.

        Modeling the mind seeks the putative algorithm that underlies the brain. It’s not an attempt to re-create the physicality or structure (unless, of course, that turns out to be crucial). This is, I think, the most dual, since it separates the mind algorithm from any engine that runs it.

        Modeling the functionality might be the same thing, but I see it oriented more at higher-order theories of mind. It’s more an attempt to emulate the functional building blocks. This might be a bit dual in using algorithms where the brain uses physicality.

        I think the third, combined with neural networks, might be most likely to lead to intelligent robots or software agents. I believe the first will simulate living, but unconscious, meat (and the resource requirements are formidable). I’m not sure the second is possible — I’m not sure a mind algorithm exists; that would be truly dual.

        Like

        • “What data do you mean that accounts for what you wrote in the post:”

          The data of subjective experience. I don’t doubt the experience. I do doubt it’s providing accurate information.

          “I’m not sure why the philosophical skeptics necessarily have the upper hand on this.”

          I can only tell you my reasoning. A lot of psychological research shows that introspection is not reliable. I think a reasonable takeaway from that is that we shouldn’t draw conclusions about the mind or consciousness when introspection is the only source.

          That’s not to say that self report isn’t data. It is. But it’s data that has to be used in conjunction with other data. (A lot of good work is happening when that conjunction is used. Most of Dehaene’s results come from it.)

          “Do you consider emergent properties (ones not apparent in the parts) to be ontologically distinct? Backing up a step, do you consider the mind an emergent property of the brain?”

          Only in a weak (epistemic) sense.

          On the modeling strategies, I agree that (1) may never be feasible, and even if it is, the resulting system would be bloated and slow. Although I wouldn’t see any obstacle in principle to it being conscious.

          I suspect success would require a blend of (2) and (3), with possibly some reconstruction thrown in to fix anomalies. The result would be a constructed intelligence with the memories and dispositions of the original. Whether it is the original may always be a matter of personal philosophy.

          Like

          • Wyrd Smythe says:

            “I do doubt it’s providing accurate information.”

            Very likely, but the accuracy isn’t the point. Its existence is the point.

            “Only in a weak (epistemic) sense.”

            So “no” to both questions?

            “Whether it is the original may always be a matter of personal philosophy.”

            Yeah, never a question that meant much to me.

            Like

          • “So “no” to both questions?”

            If you meant them in a strong emergent manner, then my answer is no. (I accept weak emergence, just not the strong variety.)

            Like

          • Wyrd Smythe says:

            What do you consider the difference?

            Like

          • Weak emergence is epistemic, about what we know. It’s a recognition that as we scale up in layers of abstraction, it’s productive to switch models to understand (predict) what is happening. And that we may not currently know how to relate the lower and upper level models.

            Strong emergence is an ontological assertion, that there is something in reality that comes into being that, even in principle, can’t be related to the lower level.

            Like

          • Wyrd Smythe says:

            My next question: Is there anything you consider ontologically emergent? Or does everything ultimately reduce to basic physics?

            Like

          • On ontological emergence, no. It seems more productive to assume that we just don’t understand the relationship between the layers yet. With that attitude, we keep investigating. There might be something else sliding in between those layers, but if so I want to see it studied.

            On everything reducing to physics, the short answer is yes. Ultimately it comes down to which theories of reality are more predictive. Assuming that things reduce down to the fundamental theories we label “physics” seems to have been pretty predictive. Assuming they don’t seems like it’s historically been a dead end.

            Like

          • Wyrd Smythe says:

            That aligns with what I perceive to be your worldview. Also basically mine, although I’ve been pondering reduction a bit. What might be gained or lost moving up and down the layers.

            In terms of what you call weak emergence, it seems useful to discover higher-level physical laws governing system behavior at that level of organization.

            Doing the calculations for quarks in a proton is still beyond us, so calculating a large system at that level is far beyond us. We’d make no progress at all if we stuck to the lowest level of physics.

            And systems do have lawful behaviors that we can discover at higher levels of organization, so it all works out.

            Yet I ponder the degree to which “wetness” is entailed in the low-level properties of hydrogen and oxygen atoms, especially looking at them as quark-electron systems. On some level it must be, but maybe the links are too complex to ever analyze?

            Systems can tangle information in ways that are very difficult to untangle at higher levels. (I don’t mean quantum entanglement. I mean more like how a hash tangles the input.)
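
            A toy illustration of that kind of tangling, using a cryptographic hash (nothing brain-specific, just the general phenomenon — one changed letter in the input, and the output looks unrelated):

```python
import hashlib

def digest(text):
    # SHA-256: deterministic, but it thoroughly "tangles" its input
    return hashlib.sha256(text.encode()).hexdigest()

a = digest("the cat sat on the mat")
b = digest("the cat sat on the hat")

# The two inputs differ by one letter, yet the digests share no visible
# structure, and nothing in either output lets you read the input back off.
print(a[:16])
print(b[:16])
```

            The information is fully entailed by the input (the function is deterministic), but untangling it from the output is, practically speaking, hopeless.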

            Liked by 1 person

          • Sorry Wyrd. I see up above that you used the word “ontologically”. I missed it before. So the qualifications in my “no” weren’t necessary.

            Like

          • Wyrd Smythe says:

            Confused me at first! 🙂

            Liked by 1 person

    • Lee Roetcisoender says:

      I’m in agreement with Wyrd on this one: “The mind emerges from the brain (somehow), so how is that not, in some sense, two things?”

      The mind is a discrete system which is a derivative of another discrete system, the brain. This pattern is repeated all the way down the evolutionary ladder of construction to fundamental particles and into the quantum realm. Likewise, the government of the United States is also a discrete system that is derived from those individual discrete systems who make up the government, the governmental system itself being separate and distinct.

      One has to think in terms of discrete systems when addressing the combination problem, discrete systems that are themselves a form of consciousness simply because discrete systems are the software programs that run the hardware of consciousness. What makes the phenomenal realm work is that consciousness itself is a continuous, linear system, a system that is fully capable of accommodating an unlimited array of discrete systems, systems which all display a feature of the continuous, linear system of consciousness itself, aka panpsychism…….

      Like

  6. Mike, first, thanks for doing all the ground work.

    It seems to me you are taking the approach equivalent to the well known approach in physics: shut up and calculate. It’s one thing to explain all of the causal path from seeing the red apple to announcing “I’m seeing a red apple”. It’s another thing to explain why seeing the red apple feels the way it does. The hard problem is explaining the feeling. Philosophers like me want the explanation.

    Of course I think I have that explanation. I will try and summarize here, and then run it through Chalmers’ criteria as described above.

    The short version is this: an experience is the interpretation of a symbolic sign.

    Here is the slightly longer version:

    Everything that happens can be described in terms of
    Input —> [mechanism] —> Output
    An experience is one of these where the input is a symbolic sign (semantic information) and the mechanism is generated for the purpose of recognizing that sign and producing output which (ideally) creates value relative to the meaning of that sign.
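
    To make the schema concrete, here is a toy sketch in Python. The names and the “mechanisms” are purely illustrative, not part of the model itself; the point is just that the same physical sign vehicle gets different interpretations depending on the causal history of the interpreting mechanism.

```python
# Toy sketch: the same physical sign vehicle yields different "meanings"
# depending on the causal history of the interpreting mechanism.

def make_interpreter(associations):
    """Build a mechanism whose interpretations reflect its (past) training history."""
    def interpret(sign_vehicle):
        return associations.get(sign_vehicle, "unrecognized")
    return interpret

# Two mechanisms with different causal histories:
english_reader = make_interpreter({"GIFT": "a present"})
german_reader = make_interpreter({"GIFT": "poison"})  # 'Gift' means poison in German

# Same physical input, different interpretation — the meaning is not
# determinable from the sign vehicle alone.
print(english_reader("GIFT"))  # a present
print(german_reader("GIFT"))   # poison
```

    The “meaning” lives in the pairing of sign vehicle and mechanism, not in the sign vehicle by itself.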

    This is not yet a full explanation, but I want to run this much through Chalmers’ components.
    1. The principle of structural coherence. I think it is fairly straightforward to match this model up with neural function. Object if you think otherwise.
    2. The principle of organizational invariance. As the nature of [mechanism] is not specified, multiple realizability is inherent in the model.
    3. The double-aspect theory of information. Here is where my model shines. Semantic information is inherently double-aspect. There is necessarily a physical aspect. In semiotics terms that would be the sign vehicle. And there is the “meaning” aspect. (In semiotics terms, the object.) This latter aspect is not determinable from the physical aspect of the sign vehicle, but it is physically determinable from the causal history of the sign vehicle plus the causal history of the mechanism. Again, there must be causal coordination between the generation of the sign vehicle and the mechanism which interprets it, but that coordination happens in the past relative to the experience (making it a kind of prediction :)).

    And this is where subjectivity makes a difference. The system which contains the interpreting mechanism may have the capability to generate a reference to the event in memory, but more importantly it may be able to generate a reference to that kind of event, i.e., an event with approximately the same physical input. Every such event (approximately same physical input) will have one thing in common relative to that mechanism, namely, the “meaning” associated with that kind of event. Furthermore, the system may be able to assign/associate labels to each kind of event. For example, a system could associate “red” with a given kind of event. It could associate “blue” with a different kind of event.

    The point of all this is to say that such a system will have access to a reference to an event, which reference is most directly associated with a non-physical (semiotic) object: “meaning”. That system will not have access to references to the physical causes that generate that “meaning”. Thus, the physical nature of the reference will be ineffable because of the lack of references available to the system. The system will have references to different kinds of events, but will not have references to the nature of the difference. The system will only be able to reference the fact of the difference. The system will not be able to say why “red” is different from “blue”. It will only be able to say “this experience is like a red experience, but that experience was like a blue experience”.

    Bottom line: Such a system will have something it is like to be that system.

    *

    Like

    • Thanks James! Glad you found it useful.

      Shut up and calculate? I suppose that is my attitude to some degree. Maybe it comes down to preferring a path that seems to be making progress. Although to be fair to the hard problem folks, Chalmers predicts that progress won’t be easy.

      “It’s another thing to explain why seeing the red apple feels the way it does.”

      I actually don’t see it that way. To me, the feeling is a mental state whose causality we can discover and study. We know many of the key regions involved. When a visual pattern of the apple forms in the visual cortex, it sends signals throughout the networks, some of which trigger the limbic system, which in turn triggers brainstem networks. All of which get integrated into a unified feeling of seeing the apple, most likely in the fronto-parietal network (and associated thalamus regions).

      Thanks for the write up, and for reconciling it with Chalmers’ theory outline.

      I’m finding it a bit abstract and hard to conceptualize. I wonder if you could take a percept, like seeing the apple, and trace through it with your model. Another might be the goat you mentioned yesterday. Or any other examples you’d be comfortable with.

      Like

  7. I think the thing that I get hung up with again and again — and it’s not just with consciousness — is information. What is it in a physical sense? I’ve read some things here and there, but never felt I really understood it. I guess I need to read some more, because until I really grok this, I’m bound to keep scratching my head.

    Liked by 2 people

    • Information is a tricky thing. A lot of people refer to Shannon information theory. But getting a non-technical explanation of it is challenging.

      Personally, I think of information as patterns that are useful due to their causal history. So tree rings are information if you know how trees grow. But information itself potentially has causal effects. That makes it essentially a causal nexus of sorts, and information processing systems a kind of concentrated causality.
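
      For what it’s worth, the Shannon “reduction of uncertainty” idea can be made concrete in a few lines. This is just an illustrative sketch of the standard entropy formula, not anything specific to the views above:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average uncertainty a message resolves."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A fair coin flip resolves 1 bit of uncertainty.
print(entropy([0.5, 0.5]))    # 1.0

# A heavily biased coin resolves much less: the outcome was mostly known already.
print(entropy([0.99, 0.01]))  # ≈ 0.08

# A certain outcome carries no information at all.
print(entropy([1.0]))         # 0.0
```

      The less probable (more surprising) a message is, the more uncertainty it resolves, which is why the biased coin carries so little.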

      That said, my own views on this aren’t well formed yet. I might give a very different account next year.

      Liked by 2 people

      • Wyrd Smythe says:

        What do you think of the first line on the Wiki page for Information: ‘Information is the resolution of uncertainty; it is that which answers the question of “what an entity is” and thus defines both its essence and nature of its characteristics.’

        Not bad; not horribly technical.

        Like

        • The part before the semicolon sounds like Shannon information, although I’ve more often seen the phrase “reduction of uncertainty” instead of “resolution”. The part after seems weirdly specific. The history shows tinkering with the beginning over the last month. Might be someone injecting some personal philosophy there.

          Like

          • Wyrd Smythe says:

            That second clause is pretty odd, isn’t it! Not wrong, exactly, but kinda weird. How about: “Information is what we can know about something.”

            Like

        • Hmmmm, the problem here is that this definition seems to imply a knower. Information seems to be defined in relation to a knower, but for it to be a purely physical thing, it needs to be independent of a knower. This is especially true if information is considered the substrate of consciousness. To thus define information in terms of a knower is to define it in terms of a consciousness which then makes the definition circular.

          Like

          • BIAR,
            I’m assuming you’re replying to Wyrd’s definition. If so, that’s my reaction too.

            It is hard to escape the need for information to have a user, but that user can be a mechanism, such as cellular machinery using DNA information, or my laptop making use of ARP addresses. Of course, regardless of whether there is a user, a lot of natural information still exists as physical patterns, and retains its potential causal influence.

            Liked by 1 person

          • Wyrd Smythe says:

            “Information seems to be defined in relation to a knower, but for it to be a purely physical thing, it needs to be independent of a knower.”

            Of course. The phrase “what we can know” should be understood to refer to a putative or virtual observer (or as Mike points out, system). The key is that the information content of something is all that we, any system, or reality itself, can potentially know or find out about a system.

            It is not defined in terms of the knower. It’s defined in terms of what can ever be known.

            Like

      • I think I agree with you, Mike, but I also think you can be more rigorous in your definition. How about: Information is a pattern measurable in a physical substrate. Because that substrate has a causal history, that pattern can be associated with some part or aspect of that causal history. Which part of that causal history is associated with the measurement pattern depends on the causal history of the mechanism doing the measurement.

        *

        Like

    • BIAR,
      There are no true or false definitions for “information”, but rather only more and less useful definitions for the term (that is, according to my EP1). But what’s a useful definition for the term? Try this:

      Information is stuff that a machine is able to interpret.

      Thus if I can’t read your words for whatever the reason, then it’s not “information” to me. If I can however then it is. Similarly when I press a key on my computer, this will be informational if it’s interpreted by the machine somehow, though not otherwise. And is a rock that falls to the ground and makes a divot in the earth informational, in the sense that the earth (as a machine) has “interpreted” that information? Well it might be taken that far, though I don’t like to.

      Furthermore as a perfect naturalist I’ll say that information doesn’t exist beyond the causal dynamics by which it’s displayed. Mathematics doesn’t exist except by means of a causal medium, such as a person who counts things. This is yet another language which has evolved in the human, though far more recently and actively pursued than natural languages such as English.

      I wouldn’t be too concerned if you aren’t quite getting Chalmers. It could be that he’s essentially yanking our chains. Just because he’s prestigious and talks in a fancy way, doesn’t mean he’s got important things to actually say. Indeed, I suspect that much of his success is the result of the many different ways that what he says can be interpreted. Until science has generally accepted principles of metaphysics, epistemology, and value from which to work, I believe that our soft sciences shall continue to reward this sort of thing.

      Like

      • Thanks for explaining. It still seems like something is needed to interpret the information, so we’re back to square one. When words like interpret, represent, know, etc… are present, it suggests that the definition is using consciousness, which renders it ineffective as a way to explain consciousness.

        Even if there isn’t anything to what Chalmers is saying, the notion of information has come up so often that I can’t help but think there’s something I’m missing. I see others suggest looking at consciousness through the lens of information, and even physicists suggesting that information might be the fundamental property of reality.

        I just want to make sure I haven’t missed anything. With so much talk about information and me wondering what it’s all about, I can’t help but wonder if it’s just me…

        Like

        • Here is my working understanding of information (repeated from a comment above):

          Information is a pattern measurable in a physical substrate. Because that substrate has a causal history, that pattern can be associated with some part or aspect of that causal history. Which part of that causal history is associated with the measurement pattern depends on the causal history of the mechanism doing the measurement. Different mechanisms can lead to different interpretations, depending upon which part of the causal history of the input is being associated with the pattern. This association is determined by the causal history of the mechanism.

          How’s that?

          Like

    • BAIR,
      There is no truth here to discover, unlike what’s implicitly presumed in general. Either define “information” as you’d like us to understand you, or whoever you’re considering must define it as they’re using it in order for you to understand them. And what definition does Chalmers himself use? Hell if I know! As I’ve implied, he seems slippery to me in general.

      There are more computer geeks around here than I can shake a stick at, so it’s strange to me that you guys, and even Wikipedia, seem to generally interpret the term in reference to a conscious receiver. Aren’t technological computers generally considered “information processors”? If so then I’d have you consider the definition that I presented once again — stuff that a machine is able to interpret. Thus here the machine is the interpreter, human or otherwise. If you don’t think it’s generally useful to consider technological computers as “information processors” however, I can go with that as well.

      According to my theory, the non-conscious brain outputs three varieties of “information” as input to the conscious form of computer. These are “senses” (like sight), “valence” (like pain), and “memory” (or past consciousness which remains). If anyone can think of any other forms of consciously processed information, I’d love to consider adding a new variety of conscious input.

      Liked by 1 person

    • Peter Martin says:

      How about information is ‘a difference that makes a difference’.

      Physics doesn’t really have anything to say about the significance of patterns (particular arrangements) of composite things, which is what the mental engages with. The map from the mental onto physics is that what we consider to be information (eg a particular firing pattern of neurons) is actually characterised by infinitely many slightly different solutions of the equations of physics. It just happens that we choose to deal with the patterns that are maximally predictive of similar infinite bunches of solutions of physics at a future time.

      Not sure what work is going on to relate physics to everyday experience in this way. I guess reduced dimensionality modelling of physical systems is in a similar direction.

      Liked by 1 person

  8. James Cross says:

    Chalmers:

    “The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, …”

    This seems to be the core of the “hard” problem.

    So an interesting side question would be whether your “red” is in any way similar to my “red”?

    There is a good chance it is not and there certainly is no requirement that our mutual perception of “red” be similar.

    “An experiment with monkeys suggests color perception emerges in our brains in response to our experiences of the outside world, but that this process ensues according to no predetermined pattern.

    But the monkey experiment had another profound implication: Even though neurons in the monkeys’ brains were wired to receive signals from green cones, the neurons spontaneously adapted to receiving signals from red cones instead, somehow enabling the monkeys to perceive new colors. Neitz said, “The question is, what did the monkeys think the new colors were?”

    The result shows there are no predetermined perceptions ascribed to each wavelength, said Carroll, who was not involved in the research. “The ability to discriminate certain wavelengths arose out of the blue, so to speak — with the simple introduction of a new gene. Thus, the [brain] circuitry there simply takes in whatever information it has and then confers some sort of perception.”

    When we’re born, our brains most likely do the same thing, the scientists said. Our neurons aren’t configured to respond to color in a default way; instead, we each develop a unique perception of color.”

    https://www.livescience.com/21275-color-red-blue-scientists.html

    This is somewhat related to an experiment from 1896 reported by Donald Hoffman in Visual Intelligence. George Stratton had volunteers wear prisms that made the world appear upside down. After wearing the glasses for several days, the volunteers adapted and everything looked normal. Removing the glasses again took several days of adjustment before everything looked normal.

    Like

    • That’s interesting. The monkeys gaining the ability to perceive new colors is very interesting. What are they seeing? Usually edges of shapes are signaled by lateral inhibition. That implies they would see different colors on either side of the line. But what determines what the new color would be? (What determined the original color?)

      If it’s an arbitrary protocol thing, what prevents it from shifting over time? Or why don’t we have cases where lesions cause people’s colors to get jumbled up? There are lesions that can knock color perception out entirely, but I’m not aware of any that, say, cause red and green to flip, or merge green and blue, or knock out any one color.

      I’m going to have to go back and do some re-reading on the visual system.

      Thanks!

      Liked by 1 person

      • James Cross says:

        My general sense of this is that we have some inbuilt, evolutionary rules for making sense of the world but the external appearance of things – the qualia – are somewhat arbitrary. However, the qualia still need to be there for ease of manipulation. Hoffman uses a computer desktop analogy with icons as arbitrary symbols for taking actions on a computer.

        Liked by 1 person

        • I agree that the individual colors in and of themselves may be arbitrary. But once there, they seem to provide a placeholder for all the affective feelings and intuitive dispositions, both innate and learned, associated with them. As the article indicated, our reds may be different, but they have very similar meanings within us.

          And the nervous system may be more aimed at the contrasts than the colors in and of themselves. I read this article the other day that the shape and color processing happen in the same region.
          https://www.sciencenews.org/article/brain-vision-cells-detect-both-color-and-shape

          Like

      • Wyrd Smythe says:

        “If it’s an arbitrary protocol thing, what prevents it from shifting over time?”

        I think the only thing arbitrary is the semantics associated through training with whatever qualia is engendered by seeing the color red.

        Given visual stimulus causes certain neurons to react. That’s hardwired, so I don’t think that changes. It can be damaged or corrupted, as you say, but you can’t “spin the color wheel” so to speak. Nothing to spin.

        “What are they seeing?”

        I speculated in my reply to James. Speculating in more detail, the original experiment would have to have used red dots that reflected enough green light to not appear dark (or black) to the monkeys.

        With red receptors, they’d get additional signal from those same dots due to reacting to the red photons. So, given the dots must contain green plus red, they must now appear brighter.

        It would be interesting to see if they tried using truly red dots in the second test matched in luminance to the background. If the monkeys were seeing red photons as just more green, they should lose the ability to discriminate. If they don’t, then they may be seeing red as something different.

        Like

        • I’d imagine they kept the different colors at the same brightness to remove that as a factor. So there wouldn’t have been any signal from the rod cells or melanopsin indicating there was anything different in the red area.

          Prior to the change, the monkeys would have seen the green image with nothing decreasing the lateral inhibition of signal from the cones receiving the red photons to indicate that anything had changed. Once the virus was done, the red-sensitive cones and green-sensitive cones on the boundary between the colors would have seen a dip in lateral inhibition from not having another cone of the same color firing, signaling an edge.
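If it helps, the lateral-inhibition idea can be sketched in a few lines of Python. This is a toy model, not physiology: the activation values and the subtract-the-neighbors rule are invented purely to show why a boundary between regions produces a signal while a uniform field does not.

```python
# Minimal lateral-inhibition sketch: each cell's output is its own
# activation minus the mean of its neighbors' activations.
# All numbers are illustrative, not physiological.

def lateral_inhibition(activations):
    out = []
    for i, a in enumerate(activations):
        neighbors = activations[max(0, i - 1):i] + activations[i + 1:i + 2]
        out.append(a - sum(neighbors) / len(neighbors))
    return out

# Uniform field: every cell is inhibited equally -> flat output, no edge.
uniform = lateral_inhibition([1.0] * 6)

# A boundary between two regions: the peak/dip pair marks the edge.
edge = lateral_inhibition([1.0, 1.0, 1.0, 0.4, 0.4, 0.4])
```

In the toy model the uniform field produces all zeros, while the two cells straddling the boundary produce nonzero outputs of opposite sign, which is the edge signal.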

          That much I understand. The question I have, and I probably just need to look up the actual paper, is: did the virus have any effect on the processing layers behind the cone cells, the horizontal, amacrine, bipolar, and ganglion cells that determine what actually gets transmitted to the brain? More generally, how and where did the change of some cone cells from green-sensitive to red-sensitive get registered in the nervous system?

          But the broader question I think no one yet has an answer to is, where does the actual color get determined? I have this suspicion that the contrast thing may be telling us something very counter-intuitive here, but I’m not sure what it is. That dress color thing a while back comes to mind.

          Like

          • Wyrd Smythe says:

            “I’d imagine they kept the different colors at the same brightness to remove that as a factor.”

            They’d have to, and the more I think about it, the more it seems an interesting proposition.

            Assuming monkeys have visual systems similar to ours, they have both rods and cones. So red photons would have luminance, but no color. Red objects that reflect no blue-green must look gray to them.

            Which means it could be more than matching luminance, because why wouldn’t monkeys discriminate a gray object against a color background? It almost seems to require the dots contain the background+red, but then reduced to account for the added red luminance? It still seems there would be enough tone variation to discriminate, so I’m puzzled.

            “Prior to the change, the monkeys would have seen the green image with nothing decreasing the lateral inhibition of signal from the cones receiving the red photons to indicate that anything had changed.”

            But wouldn’t the loss of green signal trigger something?

            (I’m not so certain edges play such a great role here. We can discriminate colors with fuzzy boundaries, so I would assume monkeys also don’t need a hard edge to see two regions as colored differently.)

            “More generally, how and where did the change of some cone cells from green sensitive to red sensitive get registered in the nervous system?”

            I’m curious enough I want to read the paper, too. The impression from the linked article is that new cells in the cones picked up red photons and dumped them into the same visual channels used by the green-detecting cells. Effectively, I think, they allowed the monkeys to see red photons as if they were green ones.

            Like

          • Wyrd Smythe says:

            Ah, it helps to read the article more carefully:

            “Although their brains were not wired for responding to signals from red cones, the monkeys soon made sense of the new information, and were able to find green and red dots in a gray image.”

            In a gray image. This makes much more sense. Before the cone mod, the red dots would have looked gray, but the green dots would stand out. After the mod, the red dots would register (as green, I suspect) because cones (plus rods) are now picking up a color signal.

            Like

          • I dug up the paper. Turns out that in this species of monkey, the males are all color blind, being dichromatic, but the females have all three cone types. For some reason, color blindness, missing the gene for red sensitive cones, propagated in the male population.

            That removes a big source of confusion for me. I was flabbergasted how a new color could be added to a nervous system that wasn’t evolved to handle it. But the males would already have had the neural machinery to interpret red, they just didn’t get the gene that leads to red sensitive cones on the front end. When the virus is introduced, they’re suddenly able to exercise machinery they’ve never been able to use before. (And the fact that it hadn’t atrophied or been preempted for some other function is significant, and noted by the paper.)

            Ok, this study isn’t upsetting my understanding of nervous systems nearly as much as I thought it might.

            The precise nature of how colors are established in the brain remains a mystery though. Is there any necessity that makes red, the red we get? Or is it established in some arbitrary manner during development? (And hence different for everyone.)

            Like

          • Wyrd Smythe says:

            Why would red-sensitive cells in green cones cause anything in the monkey’s mind other than a green signal? When a light-sensitive cell is triggered, it just sends a signal — there’s nothing red in the signal.

            How do signals from formerly green cones get into any putative ancient red visual processing circuitry?

            Like

          • Good question. I wish I knew. Obviously something changes. It’s not just about what the axons connect to. It’s worth noting that a lot of processing happens in the retina. Cones synapse to bipolar cells, which synapse to ganglion cells (with horizontal and amacrine cells in the mix). Ganglion cells are the ones that project toward the brain. By the time the signal gets to them, it’s been processed heavily. The color signal has to make it through that somehow.

            Maybe the neurotransmitters used in the chemical synapse change. Or maybe the virus alters some of the backlayer neurons. I also saw the bipolar neurons are unusual in having “graded potentials”; maybe that’s affected.

            I’m going to have to do some digging. I hope it doesn’t involve organic chemistry.

            Like

          • Wyrd Smythe says:

            “I’m going to have to do some digging.”

            I hope you do, because I think this is actually pretty simple, at least based on how I read this part:

            Neitz and several colleagues injected a virus into the monkeys’ eyes that randomly infected some of their green-sensitive cone cells. The virus inserted a gene into the DNA of the green cones it infected that converted them into red cones. This conferred the monkeys with blue, green and red cones. Although their brains were not wired for responding to signals from red cones, the monkeys soon made sense of the new information, and were able to find green and red dots in a gray image.

            It says the “converted” green cones to red cones, so cones that used to respond to green photons now respond to red ones. It doesn’t mention any other change.

            We have three “color channels” (RGB); the monkeys have two (GB). I think all the experiment did was usurp some of the green detectors, convert them to red detectors, but left them still feeding their signal into the green channel.

            You know how color cameras work, yes? Three filters, red, green, blue, in front of three monochrome cameras? Consider a system with just two channels, blue and green. Now add a red-filtered camera, but feed its signal into the green channel.

            I think that’s all it is. It just means red objects go from being gray or black to being green. And of course the monkeys could then see red dots against a gray background.
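The camera analogy can be made concrete with a toy sketch. Everything here is illustrative: the light values and channel names are invented, and the point is only the structural claim that a red-filtered detector feeding the green channel makes red objects register as green.

```python
# Toy sketch of the camera analogy: a two-channel (green, blue) sensor
# where some formerly green detectors are converted to respond to red
# photons but still feed the green channel. All values are illustrative.

def sense(light, red_detectors_added):
    red, green, blue = light
    green_channel = green
    if red_detectors_added:
        # The usurped detectors dump their red signal into the green channel.
        green_channel += red
    return {"green": green_channel, "blue": blue}

red_object = (1.0, 0.0, 0.0)  # reflects only red light

before = sense(red_object, red_detectors_added=False)  # red is invisible
after = sense(red_object, red_detectors_added=True)    # red registers as green
```

On this toy picture a red object goes from producing no chromatic signal at all to producing a green-channel signal, which is all the monkeys would need to spot red dots against gray.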

            Like

          • Wyrd,
            “I think all the experiment did was usurp some of the green detectors, convert them to red detectors, but left them still feeding their signal into the green channel.”

            If that’s true, then how are the monkeys discriminating red from green? If it’s just the red signal being added to a green channel, then wouldn’t they just look like more green dots?

            There is substantial overlap between which wavelengths excite green vs red cones, so maybe the signal from the red cones, as you proposed, actually looks like a different shade of green to them. (Or yellow, if that’s what they’re seeing.)

            “You know how color cameras work, yes?”

            I’ll defer to your knowledge on that. And I think there may be insights here. But caution is warranted. A camera is designed to preserve and transmit an image. The nervous system is concerned with extracting meaning from the signals, and that extraction begins in the retina.

            Like

          • Wyrd Smythe says:

            “If that’s true, then how are the monkeys discriminating red from green?”

            Did they? The LiveScience article just says that, after modification, they were able to discriminate red and green dots against a gray background.

            The monkeys “were able to find green and red dots in a gray image.”

            Does their paper say they discriminated red dots from green dots? Or just that they were able to see red dots after the mod?

            “There is substantial overlap between which wavelengths excite green vs red cones,”

            Sure, but remember that, per the LiveScience article, all that changed was adding red-sensitive cells to existing green-sensitive cones, thus converting them to red-detecting cones. But those cones were still wired to the original neurons in the visual channel.

            Once a red, green, or blue, photon triggers a cell sensitive to it, there is no “colorness” to the resulting neuron signals. All the color detection happens in those color-sensitive cells. After that, it’s just signals in neurons.

            “A camera is designed to preserve and transmit an image.”

            Don’t take the analogy too far! The only comparison I was making was with the tri-color nature of color cameras and the tri-color nature of human vision.

            I hope you do pursue that paper. The article about it may not have included important details!

            Like

          • From the paper abstract:

            Red–green colour blindness, which results from the absence of either the long- (L) or the middle- (M) wavelength-sensitive visual photopigments, is the most common single locus genetic disorder. Here we explore the possibility of curing colour blindness using gene therapy in experiments on adult monkeys that had been colour blind since birth. A third type of cone pigment was added to dichromatic retinas, providing the receptoral basis for trichromatic colour vision. This opened a new avenue to explore the requirements for establishing the neural circuits for a new dimension of colour sensation. Classic visual deprivation experiments have led to the expectation that neural connections established during development could not appropriately process an input that was not present from birth. Therefore, it was believed that the treatment of congenital vision disorders would be ineffective unless administered to the very young. However, here we show that the addition of a third opsin in adult red–green colour deficient primates was sufficient to produce trichromatic colour vision behaviour. Thus, trichromacy can arise from a single addition of a third cone class and it does not require an early developmental process. This provides a positive outlook for the potential of gene therapy to cure adult vision disorders.

            However, in the paper, they make this admission.

            As an alternative to the idea that the new dimension of colour vision arose by acquisition of a new L versus M pathway, it is possible that it exploited the pre-existing blue-yellow circuitry. For example, if the addition of the third cone class split the formerly S versus M receptive fields into two types with differing spectral sensitivities, this would obviate the need for neural rewiring as part of the process of adopting new colour vision.

            They sum up toward the end with this.

            Treated adult monkeys unquestionably respond to colours that were previously invisible to them. The internal experiences associated with the marked change in discrimination thresholds measured here cannot be determined; therefore, we cannot know whether the animals experience new internal sensations of red and green. Nonetheless, we do know that evolution acts on behaviour, not on internalized experiences, and we suggest that gene therapy recapitulated what occurred during evolution of trichromacy in primates. These experiments demonstrate that a new colour-vision capacity, as defined by new discrimination abilities, can be added by taking advantage of pre-existing neural circuitry and, internal experience aside, full colour vision could have evolved in the absence of any other change in the visual system except the addition of a third cone type.

            Not sure if I entirely buy that these latter assertions obviate the alternative explanation.

            Liked by 1 person

          • Wyrd Smythe says:

            It really only says they had a new discrimination behavior, but doesn’t say whether they could distinguish red from green. That would be an important experiment, I think, for settling “whether the animals experience new internal sensations of red and green.”

            So far I still think the monkeys were seeing the red dots as green, because I can’t see what else adding red-sensitive abilities to existing green cones could do.

            Even so, the gene therapy still could have value for color-blindness in humans since we presumably have the existing visual cortex to process the information, but (as far as I know) color-blind people suffer from issues with their cone cells. The ability to selectively modify those could be very useful.

            Like

          • I tend to agree. It’s a shame they didn’t explicitly test for discrimination between L (red) and M (green or yellow). Although even if they had, as you noted, the monkeys might have been able to do so purely based on different intensities of M. The vivid distinction between them probably requires the additional opponent axis (red-green in addition to blue-yellow). Unless that was somehow already embedded in their bipolar-amacrine-ganglion retinal layers, it’s hard to see how converting some M cones to L ones would add it.

            On the other hand, the paper does imply that it took some time (20 weeks) for the monkeys to develop their additional discrimination abilities, maybe enough time for some cortex rewiring to have taken place? The question then would be, what in the signaling would lead to that rewiring? As I mentioned above, at least there’s reason to assume the species has the necessary machinery since females can be trichromatic.

            Overall, the results seem…underdetermined. 🙂

            Like

          • Wyrd Smythe says:

            “On the other hand, the paper does imply that it took some time (20 weeks) for the monkeys to develop their additional discrimination abilities, maybe enough time for some cortex rewiring to have taken place?”

            What would be the motivation for the cortex to change? What would it even notice, since signals are coming from formerly green-sensitive cones? Nothing in the signal says, “Hey, I’m the result of a red photon!” It just says, “Hey, my light-sensitive cell was triggered.”

            I wonder if the 20 weeks involved the genetic mod kicking in? Did they have a way of tracking the effectiveness of the mod? I mean, were they able to say, “Okay, we know it’s working now,” and then it took 20 weeks?

            Or 20 weeks from application of the virus?

            “Overall, the results seem…underdetermined.”

            They do.

            Like

          • Neurons that fire together wire together. What would motivate any of the layers to rewire would be new firing patterns. Initially, the new signals might look like a different shade of green. But it would be a new differentiation that didn’t exist before. Such a differentiation might show up in new shapes and result in new perceptions, such as noticing a ripe fruit.
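“Fire together, wire together” is just the textbook Hebbian rule, which can be sketched in a couple of lines. The learning rate and the little spike trains below are made up; the sketch only shows that a weight grows between co-active units and stays flat otherwise.

```python
# Minimal Hebbian update: the weight between two units grows in
# proportion to how often pre- and post-synaptic units are co-active.
# Purely illustrative numbers.

def hebbian_update(weight, pre, post, rate=0.1):
    return weight + rate * pre * post

w_correlated = 0.0
w_uncorrelated = 0.0

# Units that often fire together: the weight between them strengthens.
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 1)]:
    w_correlated = hebbian_update(w_correlated, pre, post)

# Units that never fire together: the weight never changes.
for pre, post in [(1, 0), (0, 1), (0, 0), (1, 0)]:
    w_uncorrelated = hebbian_update(w_uncorrelated, pre, post)
```

That’s the sense in which a genuinely new firing differentiation, repeated over weeks, could gradually reshape associations downstream.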

            Over time, this might change the associations at various levels of the nervous system. Maybe initially it’s the wiring of the retinal layers that changes, so that the ganglion cells end up firing in different ways, at different frequencies, etc., which might result in new patterns in the cortex (V1-V4). Which might in turn lead to new higher level associations.

            We have associations for red, different associations for green, and yet different ones for blue. The dichromatic monkey might have associations for its version of M (medium wavelength colors). But as there starts being differentiation between M and L, it might result in the associations getting rewired. As more associations pile up for L (red), maybe dormant L channels get recruited and M channel connections weaken.

            Obviously I’m speculating aggressively. On the other side of this, it’s worth noting that the treated males were never as sensitive to red as the innately trichromatic females. That might be because the differentiation stayed in their M circuits, as we’ve been thinking, but also maybe because the number of L cones just wasn’t brought up to the number the females had, because they needed more time for adaptation, or because they suffered from having to do the adaptation as adults.

            I have an urge to read about the neuroscience of color perception, but the books on it, the authoritative looking ones anyway, appear to be textbook expensive.

            Liked by 1 person

          • Wyrd Smythe says:

            “Obviously I’m speculating aggressively.”

            😀

            “On the other side of this, it’s worth noting that the treated males were never as sensitive to red as the innately trichromatic females.”

            That ties in with what I’ve been wondering about gene mods: to what extent did the virus “convert” a green-sensitive cone to a red-sensitive one? Entirely? If partially, how partially?

            That could make testing red-green discrimination tricky, since you’d need a good model of the new red cones to determine how to make color patches of green and red that have the same value if red is sensed entirely as green, but have different values if red is seen as a new color of some sort.

            If green cones and red cones are sufficiently distinct, just green and red patches of the right luminance should work. The monkeys might then see all the patches as equally green, or see green plus something else.

            But if the new red cones are still also detecting green, it complicates that a lot. You’d have to know exactly what the response curve was. Even so, red and green dots of equal luminance would be an interesting test: can they discriminate between them?

            Mostly it would tell us something by failing. Without knowing exactly how the new red cones behave, it’s almost impossible to account for what mechanism would allow them to successfully discriminate red/green.

            Liked by 1 person

          • Wyrd Smythe says:

            Last night I was poking around a little, seeing what else was published about the Neitz experiment, and found this link to the Nature article and this Science Daily article.

            I’ve learned, for instance, the experiment involved two squirrel monkeys, Dalton and Sam, that were selected for their red-green color blindness (common in male squirrel monkeys; most females have tri-color vision). (I did see a short video clip of Dalton in action.)

            Also learned that it was 20 weeks after the gene mods that behavior changed, apparently essentially overnight: “Nothing happened for the first 20 weeks,” Neitz said. “But we knew right away when it began to work. It was as if they woke up and saw these new colors. The treated animals unquestionably responded to colors that had been invisible to them.”

            They tested Dalton and Sam over a period of 18 months, discovering that the monkeys could discriminate 16 hues against a background of gray dots. (Nothing in anything I read specifically says they tested color discrimination among colors, just compared to gray.)

            So it’s not clear exactly what the monkeys saw (and we can’t very well ask them), but I’m less certain now that they saw red as green. The more I think about it, the more I think it’s possible they saw something different.

            Consider the original population of green-sensitive cones (call it G) — it affects some population of neurons in the visual cortex (call them N[G]).

            After gene mod, G divides into one subset of converted red-sensitive cones (call them G-R) and a remaining subset of unaffected green-sensitive cones (call them G-G).

            This necessarily results in the visual cortex neurons also dividing into N[G-G] and N[G-R]. So when the monkey looks at green, only a subset of the original neurons processing green in the visual cortex fire. And when he looks at red, the other subset of neurons fires.

            So the monkey’s perception of green should also have been affected — “turned down,” so to speak, due to the reduction of green-sensing cones. They’d also be seeing a different “shade of green” when looking at red (because those were originally neurons triggered by green cones).

            The question isn’t just what did the monkeys see for red. It’s also what they saw for green!

            Maybe it does just amount to different perceptions of green. But maybe not. Who knows!
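The split-population argument can be made concrete with a toy sketch. The cone labels and set sizes below are arbitrary; the point is just the structure of the argument: once G divides into G-G and G-R, green and red stimuli drive disjoint subsets, and the green response shrinks relative to the original population.

```python
# Toy model of the population-split argument: the original green cone
# population G divides into unaffected cones (G-G) and converted
# red-sensitive cones (G-R). Even if both still feed the "green" pathway,
# green and red stimuli now activate different subsets downstream.

GG = {"c1", "c2", "c3"}   # still green-sensitive (G-G)
GR = {"c4", "c5"}         # converted to red-sensitive (G-R)

def active_cones(stimulus):
    if stimulus == "green":
        return GG
    if stimulus == "red":
        return GR
    return set()

green_pattern = active_cones("green")
red_pattern = active_cones("red")

# The two patterns are disjoint, so they're discriminable in principle,
# and the green response is "turned down" relative to the original G.
original_G = GG | GR
```

So in this picture, even without any new wiring, the monkey gets distinguishable activation patterns for red and green, plus a weakened response to green.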

            Like

          • Wyrd Smythe says:

            p.s. The main thing is, on the basis of this, I would expect the monkeys could discriminate between green and red.

            Like

          • Based on what I’ve been reading in my used neuroscience textbook, a lot of this may come down to the ganglion cells. Cone cells do not communicate directly with the brain. They communicate with bipolar cells and horizontal cells, which in turn communicate with amacrine and ganglion cells. All communication with the brain comes from ganglion cells.

            There are different types of ganglion cells, but many of them appear to primarily be triggered by differentiation, differentiation in lighting levels and differentiation in wavelength signals. The latter type are often called color-opponent cells. There are two types: blue-yellow and red-green.

            Here’s what I’m currently thinking. The male monkeys were born without red sensitive cones, but probably were born with red-green color-opponent ganglion cells. (Remember that the species generally has trichromacy capabilities. It’s just that the males are color blind due to a missing gene.) That means that much of the adaptation could have happened in the retina, with the color channels deeper in the brain either not needing to be overhauled at all, or needing far less overhaul than I was thinking yesterday.
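The color-opponent idea can be sketched with the textbook channel combinations. The weights and cone inputs below are arbitrary illustrations, not measured values; the sketch just shows why the red-green channel is uninformative without L cones and becomes informative once some cones respond to long wavelengths.

```python
# Textbook color-opponent combinations from cone signals L (long/red),
# M (medium/green), S (short/blue). Weights and inputs are illustrative.

def opponent_channels(L, M, S):
    red_green = L - M               # positive -> toward red, negative -> green
    blue_yellow = S - (L + M) / 2   # positive -> blue, negative -> yellow
    return red_green, blue_yellow

# A dichromat with no L signal can still drive blue-yellow, but its
# red-green channel is pinned to the green side no matter the stimulus.
rg_dichromat, by_dichromat = opponent_channels(L=0.0, M=0.8, S=0.2)

# Once some cones respond to long wavelengths, red-green becomes informative.
rg_trichromat, by_trichromat = opponent_channels(L=0.9, M=0.3, S=0.2)
```

If the males really were born with red-green opponent cells, then on this picture the new L input just starts driving a channel that was previously stuck at one extreme.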

            (Bizarrely, the cones and rods are actually behind the bipolar cells, which are behind the ganglion cells. Light passes through these layers to stimulate the cones and rods, with the light-sensitive parts of these cells at the very back. Also bizarrely, cones are stimulated by darkness and inhibited by light. It makes sense if you think about the importance of shadow detection.)

            Still reading. My thinking might be different later.

            Like

          • Wyrd Smythe says:

            “The male monkeys were born without red sensitive cones, but probably were born with red-green color-opponent ganglion cells.”

            I still have a question about how those (presumably dormant) cells got wired into the cell system that had been processing green all along. It’s still the case that part of the green visual system was converted to reacting to red photons.

            The green visual system was bifurcated. Why would part of it interact with the red-green opponent system? How does the system even know red photons are now involved? These are just former green cones now sensitive to red photons.

            I’m still leaning towards the monkeys “seeing” two shades of green — shades that they could distinguish — rather than adding some new color. But who knows!

            “Bizarrely, the cones and rods are actually behind the bipolar cells which are behind the ganglion.”

            That always seemed weird to me, too, although technology offers some similar situations. I think some of the wiring for LED or OLED panels is on the front or otherwise in the optical path but the wires are so fine they can’t be seen. I think some auto windscreens contain embedded “invisible” FM antenna wires, too.

            Like

          • “I still have a question about how those (presumably dormant) cells got wired into the cell system that had been processing green all along. It’s still the case that part of the green visual system was converted to reacting to red photons.”

            I can’t say I have a good handle on this myself. The opponent cells would not have been dormant; it’s just that their firing patterns would have been different. Every ganglion cell receives signals from more than one cone cell. It’s the combination of signals that influences their firing rate.

            (Both bipolar and ganglion cells have circular receptive fields, with both a center and a surround. In the fovea the center can be a single cone; in the periphery, many. The surround is input from the surrounding cones. How the ganglion cell fires depends on the pattern received from cones in the center and in the surround: https://en.wikipedia.org/wiki/Receptive_field#Retinal_ganglion_cells )
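            That center-surround opponency can be sketched as a toy computation. This is a deliberately simplified rectified baseline-plus-difference model; the numbers are illustrative, not physiological measurements:

```python
# Toy red-green opponent ganglion cell: excited by red-cone drive in its
# center, inhibited by green-cone drive in its surround (an "R+/G-" cell).
# All numbers are illustrative, not physiological.

def opponent_response(center_red, surround_green, baseline=10.0):
    """Firing rate: baseline plus center drive minus surround drive, rectified."""
    return max(baseline + center_red - surround_green, 0.0)  # rates can't go negative

# Uniform light drives center and surround about equally: near-baseline firing.
print(opponent_response(center_red=5.0, surround_green=5.0))  # 10.0

# A red spot on a green background: strong center drive, weak surround.
print(opponent_response(center_red=9.0, surround_green=1.0))  # 18.0
```

            The point of the sketch is just that such a cell signals *differences* between inputs, not the identity of any one cone.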

            But how does the ganglion cell know which is which? Good question. But here’s the thing: how did it learn which was which during development? I suspect differentiation in the signal patterns, differentiation which, if it comes later, might alter things. But I’ll admit I don’t have a clear conception of how this works.

            And it might be that the treated monkeys never perceive the distinction between red and green as vividly as one who had the right circuitry through development.

            Like

          • Wyrd Smythe says:

            “How did it learn which was which during development?”

            Right. The difference I’m pointing at is that the ganglion cells had inputs from those green cones all during development (and none from red cones because there weren’t any). After the gene mod, those same ganglion cells are still receiving inputs from those same cones — but now the firing pattern is bifurcated into seeing green or seeing red.

            We agree they could distinguish red dots from green dots. I continue to lean towards their perception of those red dots as some shade of green, though, rather than as a newly experienced color.

            But that is just my guess. [shrug]

            Liked by 1 person

          • Wyrd,
            Thought I’d share this with you. It’s from a sidebar in the textbook on the genetics of color vision. The last few sentences caught my attention in relation to what we’ve been wrestling with here.

            Recent research has shown that, precisely speaking, there may not be such a thing as normal color vision. In a group of males classified as normal trichromats, it was found that some require slightly more red than others to perceive yellow in a red–green mixture. This difference, which is tiny compared to the deficits discussed above, results from a single alteration of the red pigment gene. The 60% of males who have the amino acid serine at site 180 in the red pigment gene are more sensitive to long-wavelength light than the 40% who have the amino acid alanine at this site. Imagine what would happen if a woman had different red gene varieties on her two X chromosomes. Both red genes should be expressed, leading to different red pigments in two populations of cones. In principle, such women should have a super-normal ability to discriminate colors because of their tetrachromatic color vision, a rarity among all animals.

            Bear, Mark F. Neuroscience: Exploring the Brain (p. 317). Wolters Kluwer. Kindle Edition.

            It implies that the color-opponent ganglion cells aren’t triggering on which axon the signal’s coming from, but on variances in the signal itself. Some of this might be related to the fact that cone and bipolar cells don’t fire normal on-or-off action potentials, but graded ones, although I couldn’t find it stated anywhere that this is how the ganglion cells tell which color is which.

            (This also reminded me that I know someone with tetrachromacy, although I don’t know where in the spectrum his extra receptors are.)

            Liked by 1 person

          • Wyrd Smythe says:

            Cool stuff, thanks for sharing!

            Liked by 1 person

          • A different possibility is that there is no “green channel” and no “red channel”. There are only channels, some of which are hooked up to red, others of which are hooked up to green. When you change a green one to respond to red, there is no residual greenness. That’s simply now a red channel. In theory, you could make some of them respond to ultraviolet. The point is, the brain just gets used to whatever comes in over any given channel. That’s why a person wearing the upside-down glasses gets used to them. And that person has to get used to not wearing them.

            That’s why my red is exactly the same as yours. We’re just talking about my channels attached to red and your channels attached to red. If we surgically invert my red and green channels, after a time I will get used to it. And then again, my red will be exactly the same as yours: just a reference to the channels attached to red input.

            *

            Like

          • James,

            “The point is, the brain just gets used to whatever comes in over any given channel.”
            It seems like you’re burying a lot under “gets used to” here.

            I do think there’s a lot of merit to the idea that red is in the galaxy of associations rather than whatever raw protocol signal comes out of the visual cortex.

            Like

          • Wyrd Smythe says:

            A few points about human vision…

            We have four visual “channels” — red, green, blue, and luminance. The first three come from three different sets of cone cells, the last one from rod cells.

            We only have cone cells in the center of our vision, but rod cells are spread across the retina. While the cone cells respond to color, the rods are more sensitive to light. This is why we don’t see color in low-light situations. It’s also why star-gazers are taught to look at dim stars off-center.

            Our brain paints in the color of our peripheral vision in a way somewhat like how it paints over our blind spots. So a lot of our color “knowledge” is from internal processing.

            Some of it comes from contrast, expectations, and light-level, hence the whole blue-yellow dress thing. Those of us old enough might remember “indoor” versus “outdoor” film. What we easily see as “white” light covers a large range of frequencies. Film and cameras need to have their “white balance” adjusted because they can’t adjust like we can.

            For an example of color contrast that will blow your mind, see this recent blog post by Phil Plait.

            But with regard to the monkeys, from the LiveScience article, it seems all that happened was “converting” some green-sensitive cones to red-sensitive ones. Just the cone cells — existing cone cells that formerly reacted to green photons.

            Now some of those react to red photons. That’s the only change. So cells that used to trigger when receiving green photons now trigger when receiving red photons. Those cells will still send neuron signals down the same neural pathways they used to, and those signals will be processed by the visual cortex exactly as they used to be.

            Unless the LiveScience article got it wrong or left out something important, it sounds like all that happened is the monkeys now see red as green (rather than gray).

            There’s no woo-woo Mary’s Room thing going on here.

            All that said, it is a very interesting question how the visual cortex does process those three color channels plus the luminance channel. When we look at red, and our red-sensitive cone cells are activated, and those neural pathways are activated, what’s going on in our visual cortex compared to looking at, say, blue?

            Like

          • Wyrd, your summary is not quite complete. It’s not that green cone cells absorb only green photons. All cone cells absorb all (visible-range) photons to some extent. It’s just that a green cone cell is more likely to be activated by a green photon than by a red or blue one, and likewise with the others. Here (I hope) is a chart of their respective absorbance spectra:

            The genetic change that turns green to red simply shifts the peak of that cone cell’s absorbance spectrum to the right (longer wavelengths).
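            That peak shift can be sketched numerically with idealized Gaussian absorbance curves. The peak wavelengths below are approximate textbook values; the Gaussian shape and the 60 nm width are simplifications, not measured data:

```python
import math

# Idealized cone absorbance as a Gaussian around a peak wavelength (nm).
# Approximate peaks: S-cones ~420, M ("green") ~530, L ("red") ~560.
# The Gaussian shape and width are illustrative simplifications.
def absorbance(wavelength_nm, peak_nm, width_nm=60.0):
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

# An M cone still responds somewhat to a long (reddish) 600 nm photon...
m_response = absorbance(600, peak_nm=530)
# ...and shifting the peak rightward (the "green to red" change) boosts that response.
l_response = absorbance(600, peak_nm=560)
print(m_response < l_response)  # True: the shifted cone responds more strongly
```

            The sketch makes the overlap point concrete: no cone type is silent for any visible wavelength; only the relative responses differ.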

            As for corroboration of my comment, I just came across this talk by Jeff Hawkins speculating how cortical columns work. Well worth the time if you have it. In the question/answer session he essentially makes the point I made.

            *

            Like

          • Wyrd Smythe says:

            Yes, I know. That’s part of what makes it hard to test whether the monkeys can discriminate red and green. (See the last few comments between Mike and me on this thread.)

            Like

          • [ok, just google “cone cells spectra”]

            Like

          • [sigh]
            [also google “Does the Neocortex Use Grid Cell-Like Mechanisms to Learn the Structure of Objects?” ]

            Like

    • Wyrd Smythe says:

      So they let the monkeys out of Mary’s Room!

      It is an interesting question what the monkeys saw (and possibly a book title). In brains wired to discriminate blue-greens perhaps it was a new shade of blue-green. Or brightness variations. Adding red-detecting cells can only send more signals (when receiving red photons) to the same visual processing system.

      It might amount to a red dot on a green background looking to them like a brighter green dot on a green background. (More likely a darker one, depending on the shades used. What we see as white light is 30% red, 59% green, 11% blue, making yellow a whopping 89%, hence its use in highlighting, warning tape, and some fire engines.)
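      Those percentages are the classic luma weighting (rounded from the 0.299/0.587/0.114 coefficients); a quick check of the arithmetic:

```python
# Luma (perceived brightness) from the classic 30/59/11 weighting,
# with r, g, b each in the range 0..1.
def luma(r, g, b):
    return 0.30 * r + 0.59 * g + 0.11 * b

print(round(luma(1, 1, 1), 2))  # white:  1.0
print(round(luma(1, 1, 0), 2))  # yellow: 0.89 (red + green)
print(round(luma(0, 1, 0), 2))  # green:  0.59
print(round(luma(1, 0, 0), 2))  # red:    0.3
```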

      I quite agree color is a learned response — we’re taught what “red” is; the quale is associated with semantics — so we all agree a red thing is “red” (and on what that might imply: an exit sign, a stop sign, an apple, etc.).

      Color carries information so it’s not surprising evolved systems would make use of it. Handy for spotting ripe fruit from a distance. Useful in mating displays (red monkey asses, peacock plumage). Think what color discrimination mantis shrimp must have!

      I was fascinated to learn about melanopsin. I didn’t realize that, “The reason we feel happy when we see red, orange and yellow light is because we’re stimulating this ancient blue-yellow visual system.”

      So color qualia, initially, can be context-free if we’re picking up the emotional side from a completely different system. Associations are entirely cultural. (It’s long fascinated me that the Chinese associate white with death, whereas in western cultures, it’s black. Both make sense.)

      Aliens might use blue and orange for go and stop (rather than our “universal” green and red).

      Like

  9. Peter Martin says:

    Coincidentally I was today listening to Chalmers on the Sean Carroll Mindscape podcast. I was thinking how boring it must be rolling out the same arguments in response to the same questions over decades.

    Liked by 1 person

    • Thanks for commenting!

      I’m guessing it’s the life of a philosophy professor. Progress is very slow. (Some say there isn’t any.)

      Like

    • I’ve had a listen to that Chalmers and Sean Carroll podcast that Peter Martin just mentioned. https://www.preposterousuniverse.com/podcast/2018/12/03/episode-25-david-chalmers-on-consciousness-the-hard-problem-and-living-in-a-simulation/ He didn’t seem so bad to me in this one, and probably because he was speaking normal English rather than slippery academic speak. (I was far less impressed with that “Meta hard problem” discussion of his that we considered here in April. I know exactly why we think there’s a hard problem of consciousness — because we remain clueless about it. Duh!)

      So instead of a substance dualist, Chalmers says that he’s a “property dualist”. He asserts for example that we tried to reduce electromagnetism back to fundamentals like space, time, and mass, but couldn’t. Thus we were forced to leave electromagnetism as “basic” as well. He says something not unlike this will happen with consciousness.

      With one strong caveat, I can agree with this. (He might even grant me this one.) It’s that things like space, time, mass, and so on are not ontologically fundamental, but rather epistemically so. I wouldn’t expect us idiot humans to reduce them any further, though that doesn’t mean they don’t all reduce back into associated causal forms of function. And given this caveat, I have no reason to suspect that consciousness (by which I mean sentience, value, valence, utility, hedons, subjectivity, agency, qualia, what it’s likeness…) is something we’ll ever grasp as a logical product of other things.

      To be sure, I do believe that this dynamic is a product of other things, though I doubt that it’ll ever be clear to us why the stuff that creates it should indeed create it, even if we are someday able to build it ourselves. While to me this position seems justifiably humble, the converse seems arrogant. Arrogance is something that I’d expect from a “many worlder” like Sean Carroll. Conversely, Chalmers actually calling his own position “property dualism” seems like a humble “own goal”.

      (By the way, I’m still working on some things for you Peter.)

      Like

  10. Michael says:

    Hi Mike,

    Apologies in advance for what is likely to be a long note here. I’m finding this subject matter mildly addictive and have enjoyed reading all of your recent posts as well as the discussion.

    The first thing I want to do is to try to restate your position as I understand it, as succinctly as I can. You made a comment above saying, “I don’t think consciousness has to emerge from the physical. I think it is physical.” You’ve also written, and I don’t have it in front of me, something to the effect that our subjective experience essentially is the information processing. They are not two different things for you. And I think that is similar to the axiomatic identity that you and Paul discussed.

    So in an effort to explain this, my tendency is to turn to analogues. I’ve often thought that a decent analogue for consciousness is energy. Let’s say kinetic energy. It is only matter in a particular type of motion relative to other matter around it that gives rise to what we describe as kinetic energy. So where did this come from? Did it “emerge from matter?” Well, that’s a hard thing to say. Is there some sort of duality between matter and energy? Depends on how you look at it, but let’s say no. So, as an analogy, albeit an imperfect one perhaps, when you say that “consciousness is physical” I can relate to it best as the notion that matter behaving in particular ways has properties we call consciousness.

    If we grab hold of a piece of matter and stop it from moving and fix it to the table, the kinetic energy is gone. But when we tried to grab hold of it we probably felt a little jolt, depending on how much energy it had, so we know matter is energetic from its effects, even though, generally speaking, we can never measure kinetic energy directly. And even though, when we pin matter to the table, we can’t find this kinetic energy stuff it previously possessed anywhere. So kinetic energy is just a name for a quantity we abstract from matter moving relative to other matter, and that we can calculate using measurements of matter itself. Even matter with very high kinetic energy has no kinetic energy relative to its own reference frame, so in a sense it’s in the eye of the beholder. Just like I believe you see consciousness being… although for perhaps different reasons, like the lack of a clear and meaningful working definition.

    But, I’m suggesting that when you say consciousness is physical, it could be something like this kinetic energy thing. And if we try to dig into that, we could say that kinetic energy only reveals itself when two chunks of matter moving relative to one another interact. Otherwise, each brick flying through empty space really has no kinetic energy relative to itself. So, to belabor the point just a bit, the effects of kinetic energy are only made observable through certain types of interactions between two chunks of matter in relative motion.
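    Michael’s frame-relativity point can be made concrete with the nonrelativistic formula KE = ½mv²; a minimal sketch (the brick’s mass and speeds are made-up numbers, purely illustrative):

```python
# Kinetic energy is frame-relative: KE = 0.5 * m * v**2, where v is the
# speed measured in some chosen reference frame.
def kinetic_energy(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

brick_mass = 2.0  # kg

# Ground frame: the brick flies past at 10 m/s.
print(kinetic_energy(brick_mass, 10.0))  # 100.0 joules

# The brick's own rest frame: same brick, zero speed, zero kinetic energy.
print(kinetic_energy(brick_mass, 0.0))   # 0.0 joules
```

    Same brick, two frames, two different kinetic energies: the quantity lives in the relationship, not in the brick alone.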

    But there’s one more piece to this, and that is inertial mass. When we talk about human scale things like bowling balls and bricks and paperbacks, I believe (someone correct me) that if these bulk objects didn’t have inertial mass, there would be no kinetic energy. And to the best of my knowledge, the empirical equivalence between gravitational mass and inertial mass still holds—so we can further say that kinetic energy would not exist without this inertial-gravitational mass.

    And how does this mass come into existence? The present notion is that it achieves expression through the coupling of otherwise massless particles with a field that is purported to exist in every known position in the universe. Is there a duality between the massless particles and the field with which they interact to “take on” mass? If you don’t like to suggest a duality, then you could say the field and the particles are both physical, but the fact of the matter is that the field cannot (to my knowledge) ever be observed, except where it interacts with another particle. It is only this interaction that reveals the field’s existence. But we don’t say the field doesn’t exist, right? The field itself cannot be observed, but certain expressions of it can, and that’s good enough for us.

    So if consciousness is physical, why couldn’t it be something like this? Why couldn’t it be a field that exists everywhere in the universe, and why couldn’t that field achieve expression by coupling with matter in a very specific way, or with matter that exists in a particular type of state? This is about the point where Wyrd finds me to be trading in unicorns, but every aspect of such a theory would be consistent with the best physical theories we have. Is it dualistic? I don’t know. But if gravitational inertia is not the result of a duality, then neither is this.

    But there is one last piece that is interesting. This consciousness field, to keep with the analogy, is nothing like what we call consciousness—just as the Higgs field is nothing like what we call kinetic energy. There is a bootstrapping thing here, where a fundamental coupling of the Higgs field with otherwise massless particles creates the conditions that are necessary for what we then call kinetic energy to achieve expression. Likewise, we could argue that some fundamental field called consciousness couples with particular types of matter, and that matter that has undergone this coupling is then able to produce what we call consciousness, through some sort of interaction with itself.

    If I were to make something akin to a testable hypothesis using this analogy, I’d generally look to what is unique about what we call living matter, because so far we are only aware of the existence of consciousness in this type of stuff. One aspect of living matter that is unique is that it exists in quantum coherent states that extend for macroscopic times and distances within the organism. These quantum coherent states have been observed directly using a particular microscopy technique, and were first hypothesized (to my knowledge) by the solid state physicist Herbert Frohlich. Frohlich felt that the metabolic pumping of all of the dielectric/dipolar molecules in a living organism could very well induce laser-like states, or coherent states over long distances and time scales. Such coherence has been directly observed in the laboratory.

    So to close this out, and again, just exploring the analogy here, and working somewhat in reverse of the order in which I’ve presented information here, we could note that a) one physically observable condition that distinguishes living matter from non-living matter is the presence of persistent quantum-coherent domains in the tissue of the organism; b) consciousness as we tend to describe it, to date, has only ever been observed in living organisms; and we could then hypothesize that i) consciousness is a coupling of a field everywhere present in the universe with particular forms of matter, analogous to the coupling between the Higgs field and gravitationally-interacting particles; ii) that the quantum coherent domains in living organisms are a possible candidate for study in terms of identifying a unique form of matter that couples with this consciousness field, because they are present in living matter and not in non-living matter; and iii) that what we call consciousness is the product of some sort of interaction, relationship or relative movement between those quantum coherent domains, the conditions and properties of which would need to be studied.

    This could suggest that what we call subjectivity is not some inexplicable, non-physical quantity like a soul, but a basic physical property of the universe that achieves expression in living matter, where the conditions are appropriate. The only way this works is to say the underlying conscious field I’ve hypothesized is physical, but I think that is fundamental to your view anyway, Mike.

    My apologies for the long post, once again.

    Michael

    Like

    • Hi Michael,
      I appreciate the detailed model you carefully lay out here. Do you have that as a post anywhere? If not, you should.

      As I noted in the post, if I were convinced that consciousness was irreducible in the way Chalmers and many others are, then I would find panpsychism pretty compelling. And many aspects of your description would strike me as very plausible.

      My issue is that I don’t find it necessary to regard it as irreducible. Certainly we can only reduce phenomenal qualities so far. These qualia are subjectively irreducible, but for me, that’s a matter of a system not having direct access to much of its internal processing, particularly the low-level processing. The brain doesn’t have sensory neurons in it. It evolved to perceive the world, not itself. What model we do have of our mental self is simplified and tuned for adaptive behavior rather than for understanding the architecture of our mind.

      But the nice thing about panpsychism is that a panpsychist and I, at least in the variety that reconciles with science, can live in instrumental harmony. We’ll make similar, if not identical predictions about any observations. It’s just that panpsychists will have metaphysical commitments I don’t share.

      The only danger I’ve long felt might exist with panpsychism is that it might prematurely terminate our curiosity about how mental activity works. But for a panpsychist who accepts Chalmers’ first principle, that danger seems remote. They shouldn’t have any reason to resist or dismiss neuroscience.

      “One aspect of living matter that is unique is that it exists in quantum coherent states that extend for macroscopic times and distances within the organism. These quantum coherent states have been observed directly using a particular microscopy technique, ”

      Do you have links to mainstream scientific studies on this? I haven’t seen anything like it in the mainstream science news. I’ve seen quantum effects hypothesized to play a role in photosynthesis, magnetoreception in birds, and possibly in olfaction, but that’s a long way from what I’m understanding you to be saying here. And the assertion that quantum coherence has been observed directly…doesn’t fit with my understanding of what quantum coherence is.

      I’ve read microbiology, and everything there focused on classical physics, chemistry, electricity, etc. So, I have to tell you my skeptic alarm is ringing pretty loud. But I also know you’ve read a lot about quantum physics, so maybe there are aspects of this I’m missing?

      Liked by 1 person

      • Michael says:

        Hi Mike,

        First off, when you note that qualia are subjectively irreducible, I assume the implication is that while they may seem irreducible to us, they are in fact reducible to the functioning of smaller-scale matter in the organism. If I’m reading you correctly, that’s not inconsistent with what I proposed, as the type of consciousness that you are describing would be analogous to kinetic energy in the analogy, and that would certainly be dependent upon underlying structures. In what I wrote, the field analogous to the Higgs field that permits the development of what we call consciousness would be irreducible, but whatever that field is, it’s probably nothing like what we mean when we describe consciousness. The “brute fact (conjecture)” in what I wrote is that there is something irreducible we haven’t identified yet, and that is related to what we call consciousness. That is true. But I’m not suggesting the consciousness we experience and talk and wonder about isn’t reducible, or that it is not reducible to interactions of physical matter.

        As to the existence of extended domains of coherence, you asked for “mainstream scientific studies” and I don’t entirely know what that means–meaning I don’t know what your criteria may be. But let me give it a go. The Italian physicist Emilio Del Giudice published a paper in 1995 in the International Journal of Modern Physics B, entitled “QED Coherence and the Thermodynamics of Water.” I don’t have access to this paper, but I attended a conference at which Del Giudice spoke about ten years ago, and at the time knew a bit more about when he published what. I think this is the paper in which he derived the specific heat and critical point density of water from the basic principles of QED. At any rate, that is one of his achievements. He showed that because of the dipolar moment of water, there is the propensity of water to form self-resonating coherent domains (like all the rowers on the crew team, rowing in unison) that are assailed by the thermal effects of the environment and “broken down.” Colder water has more coherent domains, and warmer water has fewer. So water consists of two quantum phases all the time: normal-disorganized water, and domains of coherent water. It is the additional energy storage associated with the formation and deformation of the quantum coherent domains that accounts for the anomalous specific heat of water. Without this solid state QED effect, my understanding is that the anomalous thermal properties of water cannot be theoretically explained.

        The pictures I referenced, showing living tissue with extended coherent domains, are best viewed on the cover of a book entitled “The Rainbow and the Worm” by Dr. Mae-Wan Ho. Look up the 3rd edition on Amazon–pink cover with a grid of images. The images are of small living and unicellular organisms, taken using a birefringent microscopy technique that is also used by geologists to identify particular types of atomic alignments associated with different crystalline structures in minerals. The book has a ton of references, and my copy is actually at my office, not at home, so I’ll look up a few more later this week if you wish. Long story short, I don’t know if it’s mainstream or not–but the book was published by World Scientific, which has published Nobel winners. I’ll let you decide as to its validity.

        I should say that I don’t believe the sort of coherence being discussed is the same thing as entanglement—at least I don’t think so. We’re not talking about the formation of pairs that are in an indeterminate state until some measurement is made. We’re talking about a collective resonant condition in which degrees of freedom of individual particles are traded for a collective order—which allows ensembles of molecules to essentially act in unison. You don’t need quantum entanglement to generate a laser beam, but you do need coherence I believe.

        As to doing a post, maybe one day!

        Michael

        Liked by 1 person

        • Hi Michael,
          “I assume the implication is that while they may seem irreducible to us”

          You could put it that way, although it seems to imply that a fast one of some type is maybe being pulled on us. I think phenomenal qualities can only be reduced so far phenomenally. I liken it to the concept of a bit in software. A bit is pretty much the most basic unit of information in software: 1 or 0, true or false, etc. Within software design and operation, you can’t reduce it any further than that. No amount of effort in software will enable reduction (at least not directly). However, in terms of hardware, a bit maps to a transistor (or equivalent hardware). And that hardware can very definitely be reduced further.

          Along the same lines, a phenomenal primitive can’t be further reduced within phenomenal operations. But it maps to a mechanism (neural correlates, etc), which can definitely be further reduced. Indeed, a phenomenal primitive is going to be more like a block in Tetris than a bit, but within Tetris the block is the primitive.

          None of this is to say that software can’t have a model of a transistor, just as we can have a mental model of neural circuits. But software can’t inspect the transistors it’s working on, just as we can’t inspect the mechanisms of our experience. The wiring simply isn’t there.

          “it’s probably nothing like what we mean when we describe consciousness”

          This gets to something I often wonder about panpsychism. Most panpsychists are not, in fact, strictly panpsychists, but what Chalmers calls panprotopsychists. Consciousness isn’t everywhere, but the building blocks of it are. Those fundamental building blocks are nothing like our folk conception of consciousness.

          But here’s what I wonder. How is this different from straight physicalism? A physicalist sees consciousness being composed of matter and energy. Its building blocks are the building blocks of matter and energy. The information processing of an elementary particle is nothing like the information processing of a brain. These relations seem very similar to each other.

          Of course, you’re positing a new thing, the consciousness field, to be added to the ontology, the pandualistic form of panpsychism. But some panpsychists are naturalists and appropriate physical concepts, like quantum spin, as the fundamental building block. The pandualism approach has additional metaphysical commitments, but operationally it seems equivalent to naturalistic panpsychism, both of which seem operationally equivalent to straight out materialism.

          Or am I missing something?

          On the quantum coherence and biology stuff, I hope I won’t offend you with this, but it seems to be pseudoscience. Mae-Wan Ho in particular seems to have a reputation for being pseudoscientific. I wasn’t able to clearly establish that for Emilio Del Giudice in my brief Google searches, but his stuff definitely seems to excite a lot of pseudoscientific speculation.

          Quantum physics seems to be a magnet for people looking to explain mysteries with other mysteries. I advise strong skepticism when it starts getting used to explain things people just have a hard time accepting as physical processes. That’s not to say that quantum mechanics isn’t involved in those things. It’s involved in everything. But its involvement is according to well-known principles. It’s not the magic card many seem to hope it is.

          • Michael says:

            Hi Mike,

            On your first point about bits and transistors, I don’t understand how we disagreed, or even if we did. I follow and acknowledge what you wrote there and would view it as consistent, in a particular way, with what we both said previously.

            When you asked whether naturalistic panpsychism is operationally equivalent to straight-out materialism, I don’t know that you are missing anything, at least as far as the discussion we’re having here. I do think there are functional differences between the two systems of thought, but that’s a story for another day. Recall that at the outset of this thread I expressed my intention to better understand how you see this issue of consciousness being physical, but not being emergent from physical processes. I tried to conceive of a model that was physical, or at least behaving like what we know of matter and energy today, that satisfied this idea. In short, I was working to find ways of achieving some model of reducibility so I could envision how this approach might work.

            When you say phenomenal primitives map to neural mechanisms, you’re implying through your other arguments that those neural mechanisms are consciousness itself, no? Or do I misunderstand? I guess in part I’m asking where in the reducibility of consciousness does consciousness cease to exist, as it must if it is reducible to non-conscious subsystems?

            As to your last point regarding the references I gave you, contesting an observation or idea by suggesting that the individuals involved are impostors, without addressing the observation, idea, or claim itself, is closed-minded. I’ll just address one point: you said that “you advise strong skepticism when [QM] starts getting used to explain things people just have a hard time accepting as physical processes.” I couldn’t agree more. It’s an ironic statement as written, given that QM is a physical process, but I know what you are saying. People use the peculiarities and seeming spookiness of QM all the time to make leaps that are just not obviously justified.

            That said, you asked me to give a decent reference for the idea that biological organisms may contain extended coherent domains. M.W. Ho wrote a whole book on this idea, so it’s hard to summarize in a paragraph, but in her book she describes a particular microscopy technique that she used, which uses polarized microscopy to detect regions in living tissue where there was birefringence, or “coherently aligned anisotropies in the molecular structures of the tissues.” This novel technique that she and her colleague discovered was subsequently published in a paper in the Journal of Microscopy, which she co-authored with another scientist whom she had discovered had also found a closely related use for the polarizing microscope (apparently particular settings are required to obtain the images I referenced above, and this configuration of the instrument was a novel experimental set-up). A separate research paper that she and her graduate student wrote derived a theoretical basis for the relationship between the intensity of colored light observed using this technique and the degree of coherence in the sample, and this was empirically tested, and those results and work were also published.

            I don’t agree with every word she wrote, but as far as I can tell this is how scientists do it. I appreciate that this somehow isn’t your idea of what science actually is, but there’s really no basis for you to be so dismissive without even attempting to offer a factual or theoretical reason for doing so.

            Michael

          • Hi Michael,
            On the bits and transistors, I might have misread your statement above. Sorry. Sounds like we’re on the same page here.

            “where in the reducibility of consciousness does consciousness cease to exist, as it must if it is reducible to non-conscious subsystems?”

            That’s the big question. I actually don’t think there is a fact-of-the-matter answer. It’s a bit like asking: where in the reduction of the program Microsoft Windows does Windows cease to exist? It depends on how we define “Windows”. For a typical end user, if we strip away the user interface, it’s not going to feel like Windows to them, although there actually are versions of Windows without the UI.

            In the case of consciousness, this always brings me back to the hierarchy I often throw up. Each layer requires and builds upon the lower layers.
            1. Reflexes, goal-oriented fixed action patterns
            2. Perception, models of the environment, increasing the scope of what the reflexes react to
            3. Attention, prioritizing what the reflexes react to
            4. Imagination / sentience, simulations and assessment of various action scenarios, planning, whether far into the future or only for the next second. Assessment is driven by how the reflexes react to each scenario. Here, the reflexes become affects (hunger, fear, pain, joy, etc), dispositions for action rather than automatic action
            5. Metacognition, self-reflection, awareness of our mental life

            I think human subjective experience is all five layers from the inside. (At least for healthy mentally complete humans.)

            Which are crucial to subjective experience in general? Define “experience.” What is necessary and sufficient for us to call it “experience”? I think most of us, if we give it thought, would require at least 4. Many would insist on 5. But it’s very easy to see 1 in another system and project all five layers onto what we’re seeing, whether or not they’re there.
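            Since we’ve been trading software analogies (bits, Tetris, Windows), here’s a toy Python sketch of how I picture the layered dependence. Every class and method name here is invented for illustration; the only claim is structural: each layer requires and builds on the layers below it.

```python
# Toy sketch of the five-layer hierarchy. All names are invented for
# illustration; the structure is the point, not the implementations.

class Reflexes:                       # layer 1: fixed action patterns
    def react(self, stimulus):
        return f"reflex response to {stimulus}"

class Perception(Reflexes):           # layer 2: models of the environment
    def perceive(self, raw_input):
        model = f"model of {raw_input}"
        return self.react(model)      # widens what the reflexes react to

class Attention(Perception):          # layer 3: prioritizing inputs
    def attend(self, inputs):
        priority = sorted(inputs)[0]  # toy prioritization: pick one input
        return self.perceive(priority)

class Imagination(Attention):         # layer 4: simulate action scenarios
    def simulate(self, scenarios):
        # assessment is driven by how the reflexes react to each scenario
        return {s: self.react(s) for s in scenarios}

class Metacognition(Imagination):     # layer 5: awareness of mental life
    def reflect(self):
        return "a report about my own processing"

mind = Metacognition()                # all five layers, from the inside
print(mind.attend(["loud noise", "quiet hum"]))
```

            Note that stripping away the top class and instantiating, say, just Attention still runs fine, which is the Windows-without-a-UI point: where the system stops counting as “experience” depends on which layers we insist on.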

            On the quantum biology stuff and my closed-mindedness, all I can say is we all have limited time. And few of us have the relevant expertise to really evaluate any particular novel scientific claim. So the time investment to fairly evaluate it is simply not an option in most cases. My approach is to require that the claimant convince some substantial portion of the relevant field that their claims have merit. If they do, their ideas usually get mentioned in mainstream scientific books, articles, and talks. If they don’t, then they tend to be ignored or, if egregious, accused of pseudoscience.

    • Lee Roetcisoender says:

      Michael,

      Agree or disagree, your ontology is well thought out. Nevertheless, your model is built on a Cartesian architecture which is reflective of substance dualism, an ontology which postulates two kinds of “stuff”, consciousness and matter. I applaud your creativity.

      • Michael says:

        Thanks, Lee. I don’t disagree with the substance dualism note, and this isn’t how I ultimately think things work. To be fair, I have no idea how things work. But here I was trying to begin with the starting point that consciousness is physical, and develop an idea around how that might look using notions analogous to what we already know. The point about dualism is certainly valid, as well as interesting to consider in light of quantum field theory, for which there are precisely these dualities throughout. For instance, particles with mass interact with certain fields, and particles without mass do not interact with those fields. It’s as if the twain never meet. Is that duality? Sure! Dark matter is so-called because it doesn’t interact with the electromagnetic field, so it has no charge per se, and no way to give or receive light, yet it has mass! So there’s a sort of duality there, too, in terms of having different fundamental types of stuff. But maybe not the same as I’ve baked into a mess here…

        Michael

  11. Wyrd Smythe says:

    “Indeed, if we interpret the non-physical aspects of his theory in a platonic or abstract manner, the differences between his views and functionalism could be said to collapse into language preferences. Not that I expect Chalmers or Dennett to see it this way.”

    I think Chalmers would disagree. I’m a little surprised you think he agrees with you when here he’s really gone after type A materialism. He is explicitly saying functionalism can’t provide an answer. It really isn’t a language thing.

    After reading the paper and his reply, I think you’re with Dennett staring across that gap of mutual incomprehension. Chalmers here starts with a strong argument favoring the hard problem and, in his response paper, very strong arguments against type A materialism.

    Put it this way, after a close reading of both papers I agree with every word except his information dual-aspect hypothesis. (He’s dubious about that one, and I’m more dubious. 🙂 )

    FWIW, rather than go long here and abuse your hospitality, I wrote my own post as a response. (I might need to write another to really get into that dual-aspect thing. Not sure I said all I wanted to say there.)

    • Wyrd,
      I tried to make it clear in the post that the agreements between Chalmers and me are at the instrumental level. Definitely he sees something fundamentally unexplainable in functional terms and I don’t. But the way he works it into the scientific ontology, we end up agreeing operationally on things like AI, mind uploading, and similar issues.

      But I don’t deny that I’m much closer to Dennett than Chalmers in how I see all this stuff. I think my view reaches the same operational conclusions and is simpler.

      I’ll check out your post.

      • Wyrd Smythe says:

        I got that you were focusing on the instrumentalist view, although even there I’m not sure the correlation is as strong as you see it. (FWIW, to me an instrumental view is about focusing on the measurements and not caring about what causes them. In this case, what are the “measurements” and doesn’t Chalmers care deeply about underlying causes?)

        “But the way he works it into the scientific ontology, we end up agreeing operationally”

        Okay. I got the (perhaps wrong) impression you were holding him at arm’s length with:

        Such a theory would be built on what he calls psychophysical principles or laws. This could be viewed as either expanding our ontology into a super-physical realm, or expanding physics to incorporate the principles.

        And:
        “To be clear, Chalmers’ theory outline carries metaphysical commitments a functionalist doesn’t need.”
        And:
        “The main difference is in the third component. I see phenomenal properties as physical information, and phenomenal experience overall as physical information processes, without any need to explicitly invoke a fundamental experiential aspect of information.”
        Which is a pretty big disagreement with what Chalmers is saying, isn’t it?

        No matter, really. It’s just that after reading the paper and seeing his strong statements about the hard problem and type A materialism, I was surprised. So much of what he wrote is opposed to your views as I understand them.

        • “I got that you were focusing on the instrumentalist view, although even there I’m not sure the correlation is as strong as you see it.”

          Where, at an instrumental level, do you see the divergence?

          “Okay. I got the (perhaps wrong) impression you were holding him at arm’s length with:”

          That quote was me just respectfully conveying Chalmers’ view. But you’re right that it isn’t my view.

          “Which is a pretty big disagreement with what Chalmers is saying, isn’t it?”

          It is, which is why I mentioned them.

          “So much of what he wrote is opposed to your views as I understand them.”

          We have definite disagreements. I don’t think I was murky about that. I also noted where we agree, and I think I was also clear on the limitations of that.

          Honestly, I get the impression you’re annoyed that I found agreements with him.

          • Wyrd Smythe says:

            “Where, at an instrumental level, do you see the divergence?”

            See the part in parentheses in previous comment. I take it you mean metaphorically.

            “Honestly, I get the impression you’re annoyed that I found agreements with him.”

            Ha! Not at all. I was LOL a couple times reading the papers, though. It’s more like I wondered if I’d finally managed to seduce you to the dark side just a bit. 😛

            No? I guess I’ll just have to keep trying. 😉

          • Wait, I thought type-A materialism was the dark side.

            Dammit, why doesn’t somebody tell me these things. o_O

  12. Pingback: The Hard But Unserious Problem of Consciousness | Broad Speculations

  13. Pingback: The difficulty of subjective experience | SelfAwarePatterns
