What is it about phenomenal consciousness that’s so mysterious?

I learned something new this week about the online magazine The Conversation.  A number of their articles that are shared around don’t show up in their RSS feeds or site navigation.  It appears these articles only come up in searches, although it’s possible they show up in the site’s email newsletter, which I’m not subscribed to.  What seems to be unique about these stories is that they’re contributed by people at The Conversation’s “partner” institutions.  Being a partner appears to be about providing funding, which seems to make these articles advertisements of a sort.

Most of these articles are reasonably competent, although they don’t seem to meet the usual standards of the articles the site makes a stronger claim of ownership to.  One, which I learned of when Sci-News republished it, is an article on consciousness by Steve Taylor of Leeds Beckett University that takes a pretty strong panpsychist stance.  The fact that the article is an introduction to his book only makes the advertisement aspect feel stronger.  (Although admittedly, such intros are fairly common in other magazines.)

I’ve noted before that panpsychism can be divided into two broad camps.  The weaker stance, which I’ve called naturalistic panpsychism, simply defines consciousness in such a deflated manner, such as it only being about interaction with the environment, that everything is conscious, including rocks and subatomic particles.

The stronger stance is pandualism.  Like substance dualism, it posits that consciousness is something above and beyond normal physics, a ghost in the machine, but in the case of pandualism, the ghost pervades the universe.  It exists as a new fundamental force in addition to ones like gravity or electromagnetism, and brains merely channel or “receive” it.

It’s not unusual for individual panpsychists to blur the distinction between these two stances, often using rhetoric evoking pandualism, but retreating to the more conservative naturalistic variety when challenged.  (One prominent proponent retreated to the fundamental force being quantum spin.)

I think naturalistic panpsychism isn’t necessarily wrong, but it isn’t particularly productive either.  But I do think pandualism is wrong, for the same reasons that substance dualism overall is wrong.  It posits an additional fundamental force of some type for which there simply isn’t any evidence.  The proponents often cite consciousness itself as evidence, but that’s begging the question, assuming that only their preferred solution explains subjective experience.

Taylor’s article puts him firmly in the pandualism camp, and somewhat to his credit, his language seems to make clear he has no intention of retreating to the naturalistic camp if challenged.  He uses a very common argument as a launching point for his position:

Scientists have long been trying to understand human consciousness – the subjective “stuff” of thoughts and sensations inside our minds. There used to be an assumption that consciousness is produced by our brains, and that in order to understand it, we just need to figure out how the brain works.

But this assumption raises questions. Apart from the fact that decades of research and theorising have not shed any significant light on the issue, there are some strange mismatches between consciousness and brain activity.

The point of the last sentence is virtually a mantra among people who want to take an expansive view of consciousness and evoke the types of things Taylor does.  In this view, science is utterly helpless before the problem of consciousness and has made zero progress on it.  The thing is, this is simply not true.  Science has made enormous progress in understanding how the brain and mind work, including in the cognitive capabilities that trigger our intuition of consciousness.

I’m currently reading Stanislas Dehaene’s book on consciousness, Consciousness and the Brain, where he discusses one empirical study after another nailing down the neural correlates of conscious perception.  It’s in line with what I’ve read in many other neuroscience books.

Of course, the work of Dehaene and his colleagues is in terms of what Ned Block calls “access consciousness”, which includes David Chalmers’ “easy problems”, the aspects of consciousness, the specific functional capabilities, that are accessible to science, such as content being accessible for verbal report, reasoning, and decision making.

I suspect Taylor and Block would argue that Dehaene isn’t studying “real” consciousness, essentially phenomenal consciousness, the redness of red, painfulness of pain, the “what it is like” aspect of experience.  Dehaene in his book makes clear that he’s in the camp that doesn’t see the distinction between phenomenal consciousness and access consciousness as productive, so the “omission” doesn’t bother him.

While I do think the distinction can be useful in terms of discussing subjective experience, I agree with Dehaene and many others that we shouldn’t see it as a failing of his work that he only addresses phenomenal consciousness in terms of our access to it.  In fact, I wonder what explanation phenomenal consciousness needs that isn’t explained by access consciousness.

It seems to me that phenomenal consciousness only exists with access consciousness.  They are two sides of the same coin.  Without access, phenomenality is simply passive information, inert data.  Access consciousness is what breathes life into the ineffable qualities that phenomenal consciousness provides.

All of which brings me to the reason for this post.  Many people see phenomenal consciousness as somehow an intractable problem, one that science can’t solve, and one they cite as driving them towards various forms of dualism or the expansive types of panpsychism that Taylor advocates.

My question is, what am I missing?  What is it about the raw experience of red, or pain, or any of the other examples commonly cited, that requires explanation beyond our ability to access and utilize it as information for making decisions?

114 thoughts on “What is it about phenomenal consciousness that’s so mysterious?”

  1. Okay. We still do not have a firm grasp on what “consciousness” really is. We are getting somewhere, but where that is is not yet visible. On top of this we are discussing whether the universe is conscious, whether a rock is conscious, etc. Don’t you just love philosophy? I am a philosophy buff (have read a lot, got a minor in it in undergrad school, etc.) and I would really like it if philosophers would be a little more hesitant about the topics they choose to discuss. As a minimum, would we not want some proven premises to work from? But, no, philosophy has indulged in wild speculation forever. To be fair, so has science (science being a former branch of philosophy), but science has an arbiter (hint: nature) to settle disputes. Philosophy does not.

    So, with regards to panpsychism, pandualism, etc. my response is “potentially interesting but … too soon to tell.” I feel the same way about free will. We have been chewing on the topic for millennia and now, maybe we are starting to accumulate a few premises that we can base an argument upon … “potentially interesting but … too soon to tell.”

    1. Right Steve. You’re looking to hear from Mike about this, though I’ll provide my own answer.

      Yes it is too soon, and traditional philosophers seem all too comfortable delaying a time of reckoning perpetually. What’s required will be to develop a respected community of specialists armed with its own generally accepted principles of metaphysics, epistemology, and axiology, from which to better found the institution of science. Here our soft sciences should finally be able to get somewhere.

      Consciousness really is… nothing more than a word. And as such it has the potential to be defined in any way at all. What we need however is a “useful” definition for the term. But without this accepted principle of epistemology from which to grasp the nature of our terms, scientists keep looking for what consciousness “truly is”. And of course in that case “everything” can’t be ruled out.

      So naturalistic panpsychists are right — everything is conscious, when defined that way. But when will a respected community of philosophers armed with such epistemology come along and say, “That definition is simply not useful”? Mike and I do. What’s so hard about making such an admission for philosophers in general? The “ordinary language” camp tried to overcome this problem, and it failed.

      Then regarding the pandualists, this gets into metaphysics. We need a respected group of professionals who state that to the extent that causality fails, nothing exists to discover. If phenomenal experience is not causally manufactured of this world, then fine, the supernaturalists are correct. Good for them. But given this principle of metaphysics they’d be exploring a supernatural variant of science, while we’d be exploring a natural variant. Wrong though we might be, they’d be banished from the natural science club. They might even be termed “pseudo scientists”. Problem solved!

    2. That’s one of the issues with philosophy. We all agree that a substantial portion of it is utter bunk. The problem is we disagree on which portion that is.

      On free will, I actually don’t have much feeling of mystery toward it. It seems clear to me that we don’t have any meaningful contra-causal free will. (It’s possible quantum randomness enters into the picture, but not in a way that provides any meaningful freedom.)

      But at the same time, I’m skeptical of notions that we need to dump social responsibility. We have the ability to foresee the possible consequences of our choices. If one of those consequences is the possibility of punishment, even in an utterly deterministic world, it will affect our actions.

  2. This topic is a bit outside my area of expertise, but I will say this: I am extremely suspicious of any claim that science will never be able to explain something. Historically, there are many examples of things that could not be explained by science that have since been explained by science. How does the Sun keep burning? Science was totally clueless on that before we knew about nuclear fusion.

    I’m not saying science will definitely solve all our questions about consciousness. I’m just saying that just because science has not figured something out yet does not mean science will never be able to figure it out.

    1. I totally agree. One of my favorite examples is knowing what the stars are composed of. A 19th century scientist opined that we’d probably never know, a few years before spectral absorption lines were discovered.

      The only areas where I think science may never solve things are where issues are poorly defined. Although even there, if science solves several specific variations of the definition, eventually most of us will regard it as solved.

      Or sometimes it shows that the problem itself is ill defined. The mystery of life was once a major scientific mystery, with many people positing an elan vital. Today we know there is no elan vital, so science will never “solve” it. Instead we have a vast and complex collection of chemical reactions which we label “life” and “metabolism”.

      Consciousness strikes me as a mix of these two conditions. There are specific definitions that not everyone accepts, which will likely be individually solved. Other definitions will be more like elan vital, poorly defined concepts that don’t exist.

  3. Mike,
    I’m happy that you’ve opened this one up right now, especially given that a recent article on your Twitter feed helped me understand the severity of the problem here. (It was this one: https://www.technologyreview.com/s/613637/brain-signals-can-reveal-how-awake-a-flys-brain-is/ )

    Things seemed hopeful initially — researchers are figuring out how to differentiate between conscious and non-conscious brain activity through differences detected between normal and anesthetized fruit flies. Good deal. Sounds amazing. But then I noticed praise for Giulio Tononi and his integrated information theory. Oh my!

    So essentially they found that anesthetized flies have less “information integration” going on, and therefore that these creatures are less conscious than normal flies. How convenient! And they masqueraded this whole thing as “testable scientific progress”. Thus while they want you to believe that they’ve been able to differentiate between phenomenal brain activity and non-conscious brain activity (and so might be able to quantify how much pain something feels), in truth they have no clue about such matters, though the narrative itself is left to serve their purposes. Brilliant ploy! So maybe plants don’t “integrate” much information, and so aren’t very conscious (even given all that they do in a molecular sense?), while flies integrate lots of information, and the awake human integrates an amazing amount.

    If I hadn’t already heard about IIT then I suspect that I wouldn’t have noticed what was happening here.

    1. Eric,
      I know what you mean about that fly study. The reference to IIT didn’t sit well with me either. But I still shared it because the experimental work seemed solid. It’s just the theorizing which was questionable.

      And actually IIT does have some uses. It could help in determining whether an animal known to have conscious states is currently conscious. It only gets in trouble when it starts asserting that integration in and of itself is consciousness. Integration seems crucial, but not sufficient.
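      For what it’s worth, here’s a toy illustration of what “integration” measures in this context. It’s a minimal sketch using plain mutual information between two subsystems as a crude stand-in; Tononi’s actual phi metric is far more involved, so treat this as the bare idea only, not IIT itself:

      ```python
      import numpy as np

      def entropy(p):
          """Shannon entropy (bits) of a probability distribution."""
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def toy_integration(joint):
          """Mutual information I(A;B) = H(A) + H(B) - H(A,B), used here
          as a crude proxy for 'integration' between subsystems A and B.
          `joint` is a 2-D array of joint probabilities P(A=a, B=b)."""
          p_a = joint.sum(axis=1)  # marginal distribution of subsystem A
          p_b = joint.sum(axis=0)  # marginal distribution of subsystem B
          return entropy(p_a) + entropy(p_b) - entropy(joint.flatten())

      # Two binary subsystems that always agree: integrated (1 bit).
      coupled = np.array([[0.5, 0.0],
                          [0.0, 0.5]])
      # Two independent binary subsystems: no integration (0 bits).
      independent = np.array([[0.25, 0.25],
                              [0.25, 0.25]])
      print(toy_integration(coupled), toy_integration(independent))
      ```

      The point is only that a coupled system carries information as a whole that its parts don’t carry separately. IIT’s grander assertion, that this kind of quantity just is consciousness, is the part we’re both skeptical of.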

      Dehaene in his book, in describing the global neuronal workspace theory, actually references Tononi at one point, because GWT itself depends heavily on integration. Of course, Dehaene’s position is that consciousness is the global workspace. His work may also turn out to be useful even if not ultimately correct.

      So I wouldn’t dismiss the fly study because they referenced IIT. Even if, as we both suspect, IIT is wrong in its grandest assertions, the results of the study might still hold up.

      1. Mike,
        I was thankful that you shared this link. People used to berate Massimo for less than savory links from time to time, and he’d then say something like “These are not my endorsements, but rather things that I’ve provided for my readers to think about and comment on if they choose.” And of course the ones that we were able to bitch out the most were always the most fun for us anyway. (I miss those days. I don’t care about his adopted life strategies. But quoting Stoic thinkers to us in general as if they’re revered prophets, and yes providing “weekly meditations”…)

        I’m not saying that there aren’t notable differences in neuron firing between things that are and aren’t anesthetized. (Should something like a tree or computer ever be referred to as “anesthetized”?) What I’m saying is that using such an obvious circumstance to define the level of consciousness of something generally considered “conscious” (and perhaps flies don’t even harbor phenomena?), in support of the notion that consciousness is productively defined as a gradient of “information integration”, is a shrewd way to help establish a bullshit definition. It’s a way to hijack science given the discipline’s current epistemological and metaphysical vulnerabilities.

        (At some point I will get to your post itself.)

        1. Eric,
          My sharing criteria aren’t nearly as lofty as Massimo’s. I usually share articles if I found them interesting enough to read all the way through. (At least, if it’s a topic I normally share on.) Although if it’s an article I have a lot of disagreement with, I’m more likely to share it via a blog post.

      1. True.

        What I’ve found interesting are the often fierce attacks coming from theologians when presented with the ideas of panpsychism (as loose as they are). It can be quite an irrational reaction, which tells me they fear it not simply as an explanatory model, but as an actual (tangible) alternative to their theology, where Penrose and Tegmark make a far more attractive couple than Augustine and Aquinas.

        1. I can definitely see many Christian theologians having issues with panpsychism. It’s too far from their preferred narrative. If we’re all part of some group soul, what goes to heaven (or hell as the case might be)?

          Although I often think theologians miss opportunities to reconcile their views with various outlooks. Just as I can see ways for them to reconcile with physicalism (maybe we all get uploaded into a higher dimensional computer called “heaven”), I could see ways of reconciling with panpsychism. (Of course, the easiest way to reconcile with everything is to regard the whole theological endeavor as misguided.)

  4. “This soup tastes like chicken,” I said.
    My wife looked at me across the restaurant table. “You’re mistaken – it tastes like turkey.”
    The nerve! I’m talking about my own experience here, and surely that makes me the expert! Only – the memory is fresh in my mind, and wait a minute … I take another sip. Dammit, she’s right. It does taste like turkey.
    Had my wife not spoken up, I would have gone right on thinking, wrongly, that my taste experience had been one of chicken. In other words, the information flitting about my global workspace would have been inconsistent with the data recently in my phenomenal consciousness. Which is only possible if those are two different things.

    This answers the question you asked. But let’s take a step back, and ask a still better question. What is it about the taste of turkey that seems to defy one kind of scientific explanation, in a way that access consciousness does not? Or to quote a famous philosopher, what is the Meta-Hard Problem of consciousness?
    The kind of scientific explanation in question is conceptually transparent linking. Meaning, you read the explanation and you say “Of course!” *Of course* very fast-moving air molecules (very hot air) would start a chain-reaction of oxidation of phosphorus sulfide molecules, thereby causing the match to light! It’s all completely and transparently clear! And along with other similar examples, it thus becomes clear that the heat of a gas is the mean molecular kinetic energy.
    Whereas, looking at grey matter, or at neurons firing and stimulating each other, does nothing whatsoever to bring the taste of turkey to mind.
    Now the really important question is, what does a non-dualistic metaphysics of consciousness predict, about whether looking at grey matter would bring the taste of turkey to mind? Hint: it would predict that it would not bring the taste of turkey to mind. It would predict exactly what we observe. But (sloppy) philosophers typically assume the opposite, overgeneralizing from other successful scientific reductions (like temperature = mean kinetic energy). Thus creating the alleged problem.

    1. Interesting scenario with the soup. But it raises a further question. What makes a particular percept conscious vs unconscious? If your wife hadn’t said anything, and so you never accessed the taste of turkey, would you ever have been conscious of it?

      As it happens, I just read a section of Dehaene’s book where he discusses this. He makes distinctions between different types of unconsciousness. On the one hand are subliminal perceptions, perceptions that will never be conscious because they’re too brief, or masked by later perceptions that block that particular percept from crossing the threshold of consciousness.

      But then there’s preconscious content. These are percepts that you could be conscious of, but aren’t attending to at the moment. If they fade before you attend to them, then they never enter your consciousness, at least not consciousness you can access. Your initial taste of turkey strikes me as falling in that category. For whatever reason, a similar percept made it into your wife’s consciousness, and her telling you that the taste of turkey was there was enough to make you attend to it and bring it into your own consciousness.

      So, is the distinction between access and phenomenal consciousness, or between conscious and preconscious content? Is there a fact of the matter answer on this?

      On the meta-problem, I think you’re definitely right that one of the reasons people struggle with this stuff is the stark difference between their model of the mental and their model of the physical brain. We don’t intuitively see ourselves when we see a brain. We only see a gross looking organic thing. Intuitively, the two models seem irreconcilable. Of course, science has a long history of showing that our intuitions are not to be trusted.

      1. I like the analysis of “preconscious” content, but not the word. Let’s put it this way: the phenomenal experience is “pre-[semantic-conscious]”. Given that Dehaene just *means* the bracketed part by “consciousness”, I completely agree with him (and you) about the substance, but I feel a different wording is more explanatory.

        1. If we use “pre-access-consciousness” we might have a term that both Block and Dehaene could accept. Of course, that still leaves the issue of whether pre-access-consciousness is still phenomenal consciousness.

    2. I’d like to give that one a try as well, Paul. As I see it, you interpret the inputs from tasting the soup, both sense information and valence that feels good/bad, and match them up with a third form of input, a memory of what to call those other inputs when combined. (Memory being a degraded recording of past conscious experience that’s accessed given associated stimuli.) Initially you came up with “chicken”. But then your wife added extra input to the mix by suggesting “turkey”. So you interpreted those inputs yet again given her new input, and this time (whether rightly or not) matched things up with her “turkey” suggestion.

      This sort of thing happens to me all the time. I’ve learned not to put too much credence in what I think things are called when I don’t really care about them, such as the taste of turkey versus chicken. She has yet to convince me that I don’t feel what I do in ways that I do consider important, however.

  5. What are you missing?

    People expect to be able to detect their own state, and having assumed they can, they have built their lives on that assumption as if it were more solid than the earth itself.

    I think if we’re going to do science explaney stuff on consciousness, it needs to be science the kids can do at home, rather than MRI stuff or whatever.

    One of the simplest I’ve heard of is the one where one or two chopsticks are poked on the back and brought closer together until the subject can’t tell whether one or two chopsticks are being poked. That’s pretty understandable in general. Then analogise it to the brain – at a certain point the brain lacks the resolution of sense to detect what is happening with it.
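    A little toy simulation of that resolution point (my own sketch, nothing more; the numbers are arbitrary):

    ```python
    import numpy as np

    def pokes_felt(positions_cm, receptor_spacing_cm=1.0):
        """Map point stimuli onto a coarse grid of 'receptors'.
        Stimuli landing in the same bin become indistinguishable,
        like two chopsticks closer together than the skin can resolve."""
        bins = np.floor(np.array(positions_cm) / receptor_spacing_cm)
        return len(set(bins.tolist()))

    print(pokes_felt([0.0, 3.0]))  # 2: felt as two separate pokes
    print(pokes_felt([0.0, 0.4]))  # 1: merged into a single poke
    ```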

    But in the end, much like Lucy, we got some science ‘splainin’ to do.

    1. Stuff the kids can do at home? Well, there are some, like the one you described. There are also all the visual illusions, and the trick of closing one eye and focusing the open one on a particular point, while moving your finger at arm’s length across your field of vision until it disappears, which shows the blind spot we all have on our retina, but which consciousness edits out.

      But when it comes to finding the neural correlates and pathways of consciousness? I’m afraid that will be in the lab with expensive equipment. It’s unavoidable, in the same manner that testing particle physics now requires particle colliders or astronomy ever bigger and more sophisticated telescopes.

      In the end, I suspect, just like in physics, most people will be convinced by what neuroscience produces for their daily lives, such as medical treatments, or further down the road, enhancements. Although there will always be a camp that refuses to accept anything but their preferred interpretation of reality.

      1. Yeah, I think that’s a problem – it starts falling into faith (what fish and bread neuroscience can provide) and opposing faith. No one knows how it works (or they deny it works). Science worked because people could at least do the fundamentals at home to test it themselves. And the super advanced stuff like detecting a black hole out in space, it became abstract to daily life and kinda didn’t matter. But when we get to the brain it’s critical to daily life but getting into the ultra advanced science – there is a massive explanatory gap involved (with explanation not just being told ‘it works so shut up’, but the ability to test something at home). In the hands of a capitalistic system in charge of how neuroscience is delivered it’ll quickly go to deep madness.

        Never mind ‘enhancements’ – no one suggests enhancing the Mona Lisa because it’s a bit stale. But the brain? It’s almost a symptom of how we can’t detect our inner selves that that seems up for grabs in a kind of dualist way, as if enhancements just affect the ‘brain’ and wouldn’t really change the ‘me’ from before the surgery to something else – like ‘me’ is an eternal soul and the brain is just a workhorse or something.

        1. Enhancements definitely have the potential to change the self. But even if the change results in different selves, the self before the change will look forward to enjoying the post-change enhancements even though it won’t be the one experiencing them, and the post-change self will remember a pre-change existence, even if it isn’t the one who experienced it.

          It’s kind of like how I remember being a boy in 1975, even though nothing of that boy remains anymore (at least outside of some bone structure).

          Of course, a lot of my memories from that period are false, concoctions my brain cobbled together over the decades, perhaps conflating multiple events, or maybe even conflating things I heard about over time. Childhood memories from decades ago are the least reliable ones.

          1. But even if the change results in different selves, the self before the change will look forward to enjoying the post-change enhancements even though it won’t be the one experiencing them, and the post-change self will remember a pre-change existence, even if it isn’t the one who experienced it.

            Well, that’s pretty Lovecraftian.

            If with the 1975 boy thing you’re referring to the idea that the body changes all its cells every 7 years or so, the brain stops growing new cells at about age 21 (or it’d be far easier to recover from brain injuries as an adult), so… one of those statements has to go, and I’m pretty sure it’s the 7 year one (in regards to the brain). Your younger self is absent about as much as the inner rings of a tree are absent – which is to say not at all. Sure, overlapped by later growth and internalised to that growth (showing why early growth is so important to be healthy), but there.

            It makes no sense for a pre-change self to look forward to experiencing something it won’t be there to experience. It’d be like getting a clone who is going to live a happy life while you get shot in the head – as if you’re going to experience the clone’s life. Treating the clone as a sort of weird child of the original and wishing that child the best in future, that’d make some kind of sense. But as is, it’s like those insects whose eggs hatch inside the insect and the baby eats the mother. I.e., something Lovecraft would appreciate.

          2. Lovecraftian? Well, it’s all in how you look at it.

            “the brain stops growing new cells at about age 21”

            There is some possibility of the hippocampus growing new neurons, but you’re right that the brain generally doesn’t replace its neurons. (I’m not sure about glia though.) But each neuron has its own maintenance processes, where all the lipids, proteins, and overall cellular structure are constantly being replaced. The neurons you have today, at an atomic level, aren’t the neurons you had ten years ago, so the 7 year thing still holds. We are waves of information. It’s just that each neuron is its own wave.

            The problem with being shot in the head is you have to experience being shot in the head. But if I go to sleep never to wake up, and my clone with all my memories starts living from then on, I can choose to view it as me dying and someone else taking my place, or just going to sleep and waking up in a new form.

            It does get Lovecraftian if old-me is kept around in some form watching new-me take over my life. We should try to avoid that scenario. 🙂

          3. Neurons are the platform for connections – it’s the connections that are primarily you.

            I can choose to view it as me dying and someone else taking my place, or just going to sleep and waking up in a new form.

            Well I disagree on the latter. It’s not you any more than a fairly faithful molecular recreation of the Mona Lisa IS the Mona Lisa once the original is burned. Or if your partner dies, what is Lovecraftian is treating it as if she never did because of generating a copy. What a story that’d make, where there’s hints of her being killed over and over but the reader only gets hints with no hard confirmation and she just turns up the next day in a cognitive dissonance triggering way, because we’re being let into the mind of the other partner who is setting up the duplicates.

          4. “it’s the connections that are primarily you.”

            I agree (mostly, since neural structure does vary between individuals). But the proteins, lipids, and neurotransmitters and neuromodulators in synapses are also constantly being recycled. There’s just no getting around it; we are information.

            “Or if your partner dies, what is Lovecraftian is treating it as if she never did because of generating a copy.”

            A lot here depends on what you think happens when someone dies. If you think they’re in an afterlife, then it might indeed seem Lovecraftian. Then the question becomes who is actually in the afterlife? Only the original person? The latest copy? All of them?

            But if you don’t believe in an afterlife, then the dead know nothing. They won’t be sitting anywhere bemoaning that a copy is living in their place, since they won’t exist anymore. They have no more existence than the nine year old boy I once was, or even the 20 year old college student I once was.

          5. What voltage and pressure refer to strike me as patterns. And I don’t know any way to express, transmit, record, or know about the scalar value itself other than through patterns. Who knows about gravity in and of itself, but is there any way to receive its information other than its effects on things and their relationships to each other, i.e. patterns?

          6. “What voltage and pressure refer to strike me as patterns.”

            How so? What’s the pattern of a given voltage or pressure?

            (I think how such physical values are expressed, transmitted, or recorded, is a separate matter.)

            “[I]s there any way to receive [gravity’s] information other than its effects on things…”

            What is the pattern of having weight?

          7. Voltage is a difference in electric potential between two points.

            Pressure is a force per unit area, exerted by something like the motions of large numbers of gas particles.

            Weight is the force of attraction between something and the earth.

            Seems like each of these is about the relations between multiple things.
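            In textbook form (standard definitions, restated only to make the relational point explicit):

            ```latex
            V = \frac{\Delta U}{q}, \qquad P = \frac{F}{A}, \qquad W = mg
            ```

            Each one is a ratio or product of at least two other quantities: a relation, not a free-standing scalar.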

          8. The lipids and such are a tablecloth/the platform – at best any change in their recycling might influence further connections in some way. But influence isn’t the same as being integral. You can pull the tablecloth out and the stuff that was on top stays there; it wasn’t intimately attached. A new tablecloth doesn’t mean none of the stuff that sat on it exists/none of it is there anymore.

            On the dead, I don’t know where you’re going – a living person secretly copied won’t be bemoaning that a copy is walking around, just as much as a dead person won’t be. Unless you want to, Dr Manhattan style, draw an equivalency between the living and dead, I don’t see the point. There doesn’t need to be an afterlife here, just an intellectual integrity on the matter – which does (or can) exist while other humans are alive – most pointedly if the person who is alive is someone who was copied without their knowledge or consent. They aren’t bemoaning it, but it’s certainly not some continuation of them. The tree was cloned, and the two trees now grow new branches in subtly different ways to each other.

            And a younger self at age 10, for example, is not somehow as dead in the same organism at 40 as if you’d shot the younger self in the head one second after they became a 10 year old. I could believe trauma could make for a break – amnesia is probably an example of that, especially when the former identity is not rediscovered. But sans severe trauma, no former self has died as much as if the brain was pulped – I think that’s pretty much a scientifically supportable fact.

            Then again, Scott Bakker had briefly insisted on his blog once that his younger self is dead, so he seems convinced of it as well. He’s pretty smart, but I don’t think he’s correct, unless it rests on some very rigid notion of staying exactly the same as you were when 10 for the rest of your life, where any change means part of you is as dead as a gunshot victim. It’s like thinking of character development/character growth as character death.

  6. “there are some strange mismatches between consciousness and brain activity.”

    The next paragraph elaborates.

    “For example, as the neuroscientist Giulio Tononi has pointed out, brain cells fire away almost as much in some states of unconsciousness (such as deep sleep) as they do in the wakeful conscious state. In some parts of the brain, you can identify neurons associated with conscious experience, while other neurons don’t seem to have any effect on it. There are also cases of a very low level of brain activity (such as during some near death experiences and comas) when consciousness may not only continue, but even become more intense.”

    This would seem to be a good example from Tononi himself of why IIT isn’t right.

    It also tells us that the brain may be doing a lot that is unconscious.

    So I don’t see anything “strange” about this.

  7. “It posits an additional fundamental force of some type for which there simply isn’t any evidence.”

    I’m not sure I agree that asking whether consciousness is evidence of a force we don’t yet understand begs the question. Most of the forces science has discovered came from asking what caused an effect we didn’t understand.

    We don’t understand phenomenal experience, so it’s not entirely unreasonable to question whether some undiscovered force is responsible.

    My rejection of the idea is based more on the question of why it only manifests in brains. The other forces we’ve discovered are far more ubiquitous.

    “What is it about the raw experience of red, or pain, or any of the other examples commonly cited, that requires explanation beyond our ability to access and utilize it as information for making decisions?”

    That’s a very instrumentalist take on it. 🙂

    But some want to understand Why?

    It’s possible the program of investigating neural correlates dead ends at some point leaving the question of PE unanswered. Maybe, as I suspect, all our attempts to create machine consciousness fail, and then the question of Why? becomes crucial.

    It could also go the other way, some advance in neuroscience cracks the nut, but given all the competing theories, and given all the philosophy and talk for all these centuries, I do think it really is the Hard Problem.

    I take that view based, in part, on a belief (which I know you don’t share) that human consciousness is the most startling and amazing thing the universe has evolved. But whether it’s good enough to figure itself out is proving to be an open question.

    1. I hope you’ll have patience with these questions Wyrd. I realize after typing it that it might come across as badgering, but it isn’t intended that way. You just have the ability to write-prompt me productively. 🙂

      “I’m not sure I agree that asking whether consciousness is evidence of a force we don’t yet understand begs the question.”

      I think the issue is whether some explanation involving new physics (or that is super-physical) is the only one that can explain what we observe. If we have multiple explanations for a phenomenon, should we prefer the exotic ones that expand physics? Or the more mundane ones that work within what we already know? It seems like we should first eliminate the more cautious explanations, and I don’t perceive that we’ve done that. Actually, many seem promising.

      “We don’t understand phenomenal experience,”

      Right. I think this gets at what I was asking in the post. What don’t we understand about it? How it comes about? Why we have it? The exact neural mechanics that produce it? What are we looking for that we don’t already have at least plausible theories for already?

      “That’s a very instrumentalist take on it. 🙂”

      I am an instrumentalist, but also a functionalist. I can be convinced that I should approach it in some other manner though. But instrumentalism seems epistemically humble, and functionalism seems like the only way to approach actually understanding the mind. Or maybe I should ask, is there another way we can understand it, at least scientifically?

      “But some want to understand Why?”

      I’m totally on board with understanding why. But what are we asking why about? Are we asking why it exists? It seems like there are plenty of evolutionary reasons we can point to. Or are we asking why about certain qualities of it? Or about something I’m missing?

      I think when pondering this, we have to ask whether a system without phenomenal experience, in other words, one that couldn’t perceive colors, feel pain, etc, could nevertheless tell which fruit is ripe or whether a body part is damaged. Are there aspects of phenomenal experience that can have no functional explanation? If so, what are they?

      “But whether it’s good enough to figure itself out is proving to be an open question.”

      My own suspicion on this is that this is difficult because it’s us. It seems to me that the best way to study this is as though we were studying some external system. Once we understand how the system receives information, processes it, and produces output (including self reports), then we’ll have insights into the hazy collection of attributes we label “consciousness”. (Although some will always insist that we still don’t really understand.)

      1. “we have to ask whether a system without phenomenal experience, in other words, one that couldn’t perceive colors, feel pain, etc, could nevertheless tell which fruit is ripe or whether a body part is damaged.”

        Go ahead and ask, but don’t read too much into the answer. To return to my favorite analogy, you can build a car without an internal combustion engine, that accelerates and cruises just fine. But that doesn’t mean you can take the internal combustion out of *my* car and have it work just fine.

        1. That’s an interesting answer. It implies that the internal combustion part is crucial. But if I have a fully electric car, which can perform all the key functions of the gas powered version, are we missing anything vital? Certainly some people may miss the roar and feel of the gas engine, but is someone who only ever drove an electric car, and so has no sentimental attachment to gas powered versions, missing any crucial aspects of the experience?

          Along the same lines, if a robot has mechanisms to detect different colors, mechanisms which are different from ours, but it can still discuss the quality of redness, is it missing anything essential? If so, what? Or would you say it couldn’t have the ability to discuss redness unless it had our type of mechanism?

          1. To me a basic question is: does consciousness require a self? In other words, does there have to be a sense of being something in order to be conscious? Using Nagel’s definition of “something that it is like for the organism to be itself”, it would seem that some sense of self is required. So, if neurons are firing and there is no sense of self involved, it is unconscious. If there is a sense of self, it would be conscious.

            So that leads directly to Cleeremans’ radical plasticity thesis that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world”.

            However, if the brain/information processing system has learned about its own representations of the world presumably this learning would exist in actual neural pathways.

            Perhaps this observation from LSD research might provide a clue about where.

            “Rather, decreased connectivity between the parahippocampus and retrosplenial cortex (RSC) correlated strongly with ratings of “ego-dissolution” and “altered meaning,” implying the importance of this particular circuit for the maintenance of “self” or “ego” and its processing of “meaning.”

            https://www.ncbi.nlm.nih.gov/pubmed/27071089

          2. The question then becomes, what is entailed in a self? Does a system merely having a representation of itself count? How detailed does the representation need to be? Is it sufficient to have a representation of its body? Does it have to be both internal and external? Or does it need to include the information processing in its control center? Does it need to have any preferences about the state of the represented thing?

          3. If you haven’t read Cleeremans’ paper, here’s the link again:

            https://www.researchgate.net/publication/51232873_The_Radical_Plasticity_Thesis_How_the_Brain_Learns_to_be_Conscious

            He makes several points. I will quote a bit to give a flavor.

            …it is difficult to see what experience could mean beyond (1) the emotional value associated with a state of affairs, and (2) the vast, complex, richly structured, experience-dependent network of associations that the system has learned to associate with that state of affairs. “What it feels like” for me to see a patch of red at some point seems to be entirely exhausted by these two points.

            To hint at my forthcoming argument, a camera could, for instance, keep a record of the colors it is exposed to, and come to “like” some colors better than others. Over time, your camera would like different colors than mine, and it would also know that in some non-trivial sense. Appropriating one’s mental contents for oneself is the beginning of individuation, and hence the beginning of a self.”

            …second point about experience that I perceive as crucially important is that it does not make any sense to speak of experience without an experiencer who experiences the experiences. Experience is, almost by definition (“what it feels like”), something that takes place not in any physical entity but rather only in special physical entities, namely cognitive agents.

            Consciousness thus not only requires ability to learn about the geography of one’s own representations, but it also requires that the resulting knowledge reflects the dispositions and preferences of the agent. This is an important point, for it would be easy to program a thermostat that is capable not only of acting based on the current temperature, but also to report on its own states. Such a talking thermostat would constantly report on the current temperature and on its decisions. Would that make the thermostat conscious? Certainly not, for it is clear that the reporting is but a mere additional process tacked on the thermostat’s inherent ability to switch the furnace according to the temperature. What would go some way toward making the thermostat conscious is to set it up so that it cares about certain temperatures more than about others, and that these preferences emerge as a result of learning.
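            (A minimal sketch of that last contrast, my own illustration rather than anything from the paper; the class names and learning rule are just placeholders:)

            ```python
            class TalkingThermostat:
                """Cleeremans' talking thermostat: it switches the furnace
                and reports on its own states, but the report is a mere
                additional process tacked onto the switching behavior."""
                def __init__(self, setpoint):
                    self.setpoint = setpoint

                def step(self, temp):
                    action = "furnace on" if temp < self.setpoint else "furnace off"
                    return action, f"temperature={temp}, decision={action}"


            class PreferringThermostat(TalkingThermostat):
                """The variant that goes 'some way toward' consciousness:
                its preferences over temperatures emerge from its own
                learning history, so its reports reflect acquired
                dispositions rather than just raw state."""
                def __init__(self, setpoint):
                    super().__init__(setpoint)
                    self.liking = {}  # learned value attached to each temperature

                def learn(self, temp, reward, rate=0.1):
                    old = self.liking.get(temp, 0.0)
                    self.liking[temp] = old + rate * (reward - old)

                def step(self, temp):
                    action, report = super().step(temp)
                    care = self.liking.get(temp, 0.0)
                    return action, report + f", how much I care: {care:.2f}"
            ```

            The first device reports without caring; the second one’s reports reflect preferences it acquired through its own history, which is the ingredient Cleeremans says starts to matter.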

          4. Thanks for the paper reference. I actually agree with him on his description of “what it feels like”, at least with a grounded interpretation of that phrase. Put in terms of associations with emotional feelings, it becomes a much more tractable issue: what are the neural structures and processes that trigger those associations?

            I also agree with him that much of what we label as consciousness is about prediction. Indeed, the main thing brains bring to the overall nervous system is prediction. The spinal cord can provide all the straight programmatic reflexive reactions. But brains take in information and predict what the reflexes might need to react to in the future, either with distance sense information or by simulating action scenarios.

            Skimming the paper, his overall thesis seems like a variant of HOT involving the formation of meta-representations. Consciousness, in his view, is the formation of these meta-representations through learning. Perhaps. It seems right, at least to some extent. But I’m a little nervous about the implied blank slate of the meta-representation space. I suspect it’s more of a mix. But maybe I’m wrong.

          5. I agree about the “blank slate” comment, and I am not sure where Cleeremans is on that.

            Personally I tend to think that the experiencer/self is constructed from innate biological attributes combined with learning.

            Quoting Wikipedia:

            “It has also been suggested that retrosplenial cortex may translate between egocentric (self-centred) and allocentric (world-centred) spatial information, based upon its anatomical location between the hippocampus (where there are allocentric place cell representations) and the parietal lobe (which integrates egocentric sensory information).”

            I think the sense of self might begin in part with this spatial differentiation and this is likely innate biological. But key to this idea is that consciousness is learned through the interaction of the innate biological with learned experience with the environment, physical and social.

            However, what might really differentiate biological consciousness from artificial attempts to simulate it is Cleeremans’ item 1 – “emotional value”. I am not sure “emotional” is exactly the right term, but there is a sort of elan vital to biological organisms that provides motivation, intent, or drive that underlies the self. I hope you understand that by using the term “elan vital” I am not subscribing to any unscientific theories about life, but only trying to grasp in a single concept a multitude of biological processes.

          6. I don’t know much about the retrosplenial cortex. Based on Wikipedia, its function is apparently not well understood. Hope someone’s doing research on it.

            On emotional value, if I understand the term correctly, it seems similar to Antonio Damasio’s biological value, although Damasio’s concept is broader, including impulses that predate consciousness, such as single celled organisms reflexively changing direction based on particular stimuli. Or it could just refer to valence, the aspect of value judgment included in affects.

            I understood your use of “elan vital”, but given the term’s history, I think clarity requires the disclaimer you added. Personally, I prefer to just talk about the functional organization of biological systems. It seems like the word “consciousness” in these discussions already causes enough confusion.

      2. “You just have the ability to write-prompt me productively.”

        My solemn duty as an iconoclast! 😀

        “If we have multiple explanations for a phenomenon, should we prefer the exotic ones that expand physics?”

        Absolutely, but the whole point is we don’t have a mundane explanation (at least, yet).

        I quite agree with investigating the mundane first. My only point is that I don’t see it as begging the question to ponder the possibility that consciousness is something new (but entirely physical) we haven’t discovered yet.

        (Perhaps because we insist the explanation must be mundane?)

        Brains are unlike anything else in reality, and consciousness is also unlike anything else. It’s not out of the question something unlike anything else is involved in making it all happen.

        “What don’t we understand about [phenomenal experience]?”

        Why it should occur. Why should there be “something it is like” to be conscious? Why is it such a rich and textured experience?

        Because…

        “I think when pondering this, we have to ask whether a system without phenomenal experience, in other words, one that couldn’t perceive colors, feel pain, etc, could nevertheless tell which fruit is ripe or whether a body part is damaged.”

        Exactly. I believe there already are machines that categorize and sort fruit and eggs and such. Networks are capable of detecting “damage” and routing traffic around the problem.

        They don’t appear to need PE to do these things. Why do we have it? What, if anything, does it mean that we have it? Why aren’t we zombies?

        Perhaps more to the point: Does having self-reflective consciousness create moral obligation? What, if anything, do we owe each other? (I’ve suggested having consciousness as a grounds for morality because it’s an equalizer.)

        “It seems like there are plenty of evolutionary reasons we can point to.”

        As far as evolving intelligence, certainly. But why this rich subjective personal movie? It often seems to cause more problems than it solves, so what’s the real advantage of all our inner mental turmoil?

        “Or are we asking why about certain qualities of it?”

        I think that’s the heart of it. Why is it like it is?

        Mental illness, autism, insanity, compulsive behaviors, obsessions,… Doesn’t it seem more trouble than it’s worth? Is it just that such a powerful tool is bound to be fragile? (Like a race car, maybe?)

        “Or maybe I should ask, is there another way we can understand it, at least scientifically?”

        Every way possible. I wouldn’t discard any means of investigation.

        My question is whether it’s actually too complex for us to ever fully understand. (Or, more likely, too complex to ever emulate with a numerical model. The data requirements alone are staggering.)

        “My own suspicion on this is that this is difficult because it’s us.”

        And because it’s just plain difficult! Most complex object ever evolved!

        1. “Absolutely, but the whole point is we don’t have a mundane explanation (at least, yet).”

          Actually, from what I read, we do. But it’s admittedly all in terms of access consciousness. The question is whether an explanation of access is also an explanation of phenomena. I’m trying to understand why it might not be.

          “Why it should occur. Why should there be “something it is like” to be conscious? Why is it such a rich and textured experience?”

          Again, for me, I don’t see that we need to go much further than evolution. We have these things because they’re adaptive (on balance). A “rich and textured experience” seems necessary for us to make the discriminations of the environment we need to make to survive.

          “They don’t appear to need PE to do these things.”

          So there are already machines that process the data, meaning it’s not the data flow itself that provides PE. Then what does? Might it be the surrounding functionality? The access part of consciousness? If we could put those access mechanisms in the machines, would they have PE? I personally suspect they would. (Access functionality is actually very hard to reproduce, despite Chalmers labeling it the “easy problems”, but even philosophers like Chalmers and Block can see us eventually doing it.)

          “Does having self-reflective consciousness create moral obligation? What, if anything, do we owe each other?”

          I actually think this question goes to the core of our intuition of consciousness. We regard conscious systems as subjects of moral worth. My contention is the converse: we regard subjects we decide are of moral worth as conscious. The two seem inextricably linked. Imagine an AI system that had every aspect of our mental life, the ability to discriminate colors, assess damage, navigate the world, etc, but had no self concern. Would we regard it as conscious?

          “Mental illness, autism, insanity, compulsive behaviors, obsessions,… Doesn’t it seem more trouble than it’s worth?”

          This seems similar to asking whether computers are worth it even though they sometimes blue screen, have bad chips, hard drive crashes, or otherwise malfunction. We have to assess the system in its functional state. If every conscious entity was insane, it seems clear consciousness would quickly get selected out of the gene pool.

          “Every way possible. I wouldn’t discard any means of investigation.”

          Definitely. But what would those be?

          1. “But it’s admittedly all in terms of access consciousness.”

            How well is that understood, and what is really involved? I’m not sure it’s an explanation so much as an avenue of exploration.

            To the extent “access” just means accessing memory (such as a computer might do), it offers no explanation I can see for phenomenal experience. To the extent “access” involves phenomenal experience, we don’t know what’s going on.

            “We have these things because they’re adaptive (on balance).”

            But what is adaptive about the profound awe I felt when I looked through a telescope and photons from Saturn entered my eye? What is adaptive about the joy I feel over a good joke or great piece of music or the words of Shakespeare or other great writers?

            Or, while it’s a great evolutionary adaptation, how exactly did we develop the ability to improve tools — a trait that escapes the rest of the animal kingdom? Why did only one species evolve so far in terms of intelligence and consciousness?

            “Then what does [provide PE if not data flow]?”

            Well that’s the $64,000 question, isn’t it. I think it arises from the collective operation of the whole physical brain.

            “If we could put those access mechanisms in the machines, would they have PE?”

            It depends on exactly what you mean. Some machines, even some programming languages, can access themselves and their processes and report on them. So that much also doesn’t seem to provide PE.

            At a higher level, it seems to involve phenomenal experience, at least as recalled memory. (I tend to agree with you that AC and PC are two aspects of one thing.)

            “My contention is that we regard subjects we decide as being of moral worth as conscious.”

            I’m not sure I agree. That suggests only conscious beings can have moral worth (not acts or laws or ideas). My gut sense is that ⟨moral worth⟩ is a larger category than ⟨conscious being⟩.

            I do regard the latter as a subset of the former, though.

            “Imagine an AI system that had every aspect of our mental life, the ability to discriminate colors, assess damage, navigate the world, etc, but had no self concern.”

            I’m not sure such a thing is possible. We discussed this before in terms of terminal and instrumental goals. The belief is that a system smart enough to be highly intelligent — let alone possessing an inner life — would necessarily see self-survival as an instrumental goal.

            “This seems similar to asking whether computers are worth it even though they sometimes blue screen, have bad chips, hard drive crashes, or otherwise malfunction.”

            As far as the more severe issues, true. There is also the fact that pretty much everyone is bad at math, especially bad at statistics, and prone to all sorts of personal filters and twitches.

            It’s as if every computer was cranky, moody, and prone to fits of emotional fancy. Given how badly our brains work, it’s a testament to their advanced capabilities that we’ve survived this long. 😀

            (Which, in point of fact, isn’t all that long, and our future very much remains to be seen.)

            “But what would those be?”

            I don’t know how much whole-brain approaches are in favor, but I’d like to see research into what the brain’s EMF environment does (if anything at all).

            I’d guess almost any approach I can imagine is being pursued by someone. I doubt I can come up with anything original. It’s a matter of something finally paying off, or perhaps hitting some limits to what we can discern.

          2. “How well is that understood and what is really involved?”

            That’s a broad question. What I can say is that we have insights into how our visual systems recognize lines, shapes, shadows, colors, and textures. There are specialized neurons for each that only get excited when their particular attribute is present. We can also map where faces are registered, and even to the point of particular faces. We can see activations in the parietal regions when test subjects recognize a particular object or concept in a multi-modal fashion.

            Of course, there isn’t yet any kind of complete accounting. That’s a long way off. The picture is blurry. There are a lot of details to be learned. But the idea that we have to go exotic to understand how the mind works seems increasingly unwarranted with each passing year.

            “To the extent “access” involves phenomenal experience, we don’t know what’s going on.”

            But again, what specifically about phenomenal experience do we not understand?

            On the adaptiveness of awe, funny jokes, or art appreciation, I can’t really say that they are adaptive (although they might be). The modern environment has a lot of stuff in it that wasn’t in our ancestral environments. As a result, it seems clear we have a lot of behavioral spandrels, reactions to stimuli that happen because they resembled something in our past where that reaction was adaptive, but may or may not be anymore.

            “Or, while it’s a great evolutionary adaptation, how exactly did we develop the ability to improve tools — a trait that escapes the animal kingdom? Why did only one species evolve so far in terms of intelligence and consciousness?”

            I don’t know that anyone can answer these questions authoritatively, but there are numerous plausible theories. How do you see this relating to phenomenal consciousness?

            “I think it arises from the collective operation of the whole physical brain.”

            Do you mean the word “arises” here figuratively or literally? If figuratively, then I think we all agree that’s true, but like the “consciousness is emergent” explanation, it isn’t satisfying. What are the mechanics? How does it arise? And what is being missed by the investigation into access consciousness?

            “Some machines, even some programming languages, can access themselves and their processes and report on them. ”

            Remember, access consciousness is not equivalent to only self reflection. It enables self reflection but is broader than that. It includes any use of sensory data for imagination and decision making, as well as for self report. Certainly self reflection of a system that doesn’t have all the other functionality isn’t going to trigger our intuition of a conscious system.

            “The belief is that a system smart enough to be highly intelligent — let alone possessing an inner life — would necessarily see self-survival as an instrumental goal.”

            The phrase “inner life” here seems a bit loaded. The “life” part heavily implies self concern since that’s a central aspect of life. If it has internal maps of its environment and itself, but without self concern, is that a conscious system? How would this relate to something like a self driving car?

            “but I’d like to see research into what the brain’s EMF environment does (if anything at all).”

            I think that environment has been studied. It’s how we have technologies like EEG and MEG, which read the brain’s electric and magnetic fields, and TMS, which perturbs them. But I suspect you might be interested in things like ephaptic coupling, the ability of neurons to communicate other than through synapses. There is a contingent of neuroscientists investigating that. I highlighted one experiment a few months ago. But the findings have always been limited to artificial environments with the myelin and other insulating glia stripped away. And I’m not sure it would enable the kind of action at a distance aspects you might be looking for.

            If the brain did make use of EMF for cognition, it seems like being around any moderate electromagnetic field would be mind altering. Being in a strong one, like an MRI machine, seems like it would destroy cognition. But it takes something like a TMS pulse right by the head to cause that kind of disruption.

          3. “Of course, there isn’t yet any kind of complete accounting. That’s a long way off.”

            Indeed, and the things you listed have also been accomplished by deep learning neural networks. They fall under the “lookup” functionality I mentioned. (Which is why DL NNs are sometimes called glorified search engines.)
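
            (For illustration only — a toy stand-in for such a sorter, nothing like a real deep network:)

            # Toy "ripeness sorter": maps raw color readings to labels.
            # It discriminates ripe from unripe fruit, but it's pure
            # threshold/lookup -- nothing suggests any experience of color.

            def classify_ripeness(rgb):
                r, g, b = rgb
                return "ripe" if r > g and r > b else "unripe"

            print(classify_ripeness((200, 80, 40)))  # ripe
            print(classify_ripeness((90, 180, 60)))  # unripe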

            But it doesn’t say why there is something it is like to recognize lines, shapes, shadows, colors, textures, faces, or familiar objects.

            “But the idea that we have to go exotic…”

            I don’t feel I’m thinking in terms of anything exotic. Strictly within the bounds of physicalism (which for me includes many emergent systems). At the same time, I don’t think brains, or minds, are just any old thing — I think they’re the most interesting and special thing evolution has created.

            I wouldn’t at all doubt new physical principles might be involved.

            “But again, what specifically about phenomenal experience do we not understand?”

            Why is there something it is like to be a brain?

            As far as we know, other than brains, nothing else that processes information, or has network complexity, has phenomenal experience. So why does our information processing complex network have it?

            We don’t know.

            “The modern environment has a lot of stuff in it that wasn’t in our ancestral environments.”

            Certainly photons from Saturn, but a key trait in early intelligence is decoration. It has no survival value, so it reflects something from a creature’s “idle reflective mind” (so to speak). Not only does decoration itself have no survival value, it takes time and materials, so has a cost.

            Humor, likewise, is thought to be pretty ancient, and some animals do seem to have some rough sense of humor (often in the form of playing tricks on others). I read a theory once that suggested language evolved from a desire to tell jokes and stories.

            The point is, these depend on phenomenal experience. Without the sensations associated with art or humor or stories, why would a system bother? That we develop these speaks to the richness of our phenomenal experience.

            “Certainly self reflection of a system that doesn’t have all the other functionality isn’t going to trigger our intuition of a conscious system.”

            LOL! Shoe’s on the other foot here. Normally you’d be pointing out to me all the ways a computer system actually really is like a brain system, what with the I/O and data processing.

            And I would have to agree with you that, yes, some computer systems do have such capabilities (but, of course, no sign of consciousness). OTOH, all our computer systems are pretty crude currently.

            “The phrase ‘inner life’ here seems a bit loaded. The ‘life’ part heavily implies self concern since that’s a central aspect of life.”

            I was replying to your line, “Imagine an AI system that had every aspect of our mental life,…” I meant the same thing by “inner” — and the word “every” is pretty important there, hence my answer.

            If the question is the much lesser:

            “If it has internal maps of its environment and itself, but without self concern, is that a conscious system?”

            (As in a self-driving car.) No, I would not call such a system conscious.

            “And I’m not sure it would enable the kind of action at a distance aspects you might be looking for.”

            The fact that, from outside the skull, we can pick up EEG and other types of signals from the brain demonstrates there is a clear signal to be detected. And if we can detect it outside the skull, the signal must be stronger inside the skull.

            I’ve mentioned the possibility of standing waves formed inside the skull cavity. I’m not saying it’s fundamental. I’m suggesting that, given the brain evolved in its own EMF soup, I don’t see it as unreasonable that it leverages that somehow. Maybe in something as simple as timing.

            I’ve also mentioned that frequency would have a lot to do with coupling. High RF wouldn’t couple very well to brain tissue, so would have little effect. Low frequency, in the region the brain uses, might be another story, and there are those who complain about living near high-voltage power lines.

            Further, how do we know living in the modern bath of EMF hasn’t affected us in some way? Is there something perhaps literal to the sense that people are “crazy” these days? Does the sense of peace one gets in the deep wilderness have a bit more to it?

            I wonder if we could get some people, in the interest of science, to spend a few years isolated from EMF…

          4. I wonder if you’d be willing to elaborate on what you see the phrase “something it is like” meaning. It seems to me that, since Nagel, this phrase gets thrown around far too much, and everyone nods that it’s saying something concrete and specific, but I have a suspicion people mean different things by it.

            My own, perhaps overly deflated, interpretation is that it means all the emotions associated with a particular set of sensory input. If so, then it seems like we’re in association territory. The sensory stimuli trigger associations which in turn trigger certain feelings. The resulting melange of sensory and emotive feelings forms the “something it is like”. A lot of search engine activity 🙂
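
            (A toy sketch of that deflationary reading; every association and affect below is invented for illustration:)

            # Deflationary toy model: a stimulus triggers stored associations,
            # which trigger affects; the melange is the "something it is like".

            ASSOCIATIONS = {
                "yellow": ["banana", "summer", "caution sign"],
                "banana": ["breakfast", "childhood"],
            }
            AFFECTS = {
                "summer": "mild pleasure",
                "childhood": "nostalgia",
                "caution sign": "slight wariness",
            }

            def melange(stimulus):
                # Gather affects reachable within two association hops.
                felt = []
                for assoc in ASSOCIATIONS.get(stimulus, []):
                    if assoc in AFFECTS:
                        felt.append(AFFECTS[assoc])
                    for deeper in ASSOCIATIONS.get(assoc, []):
                        if deeper in AFFECTS:
                            felt.append(AFFECTS[deeper])
                return felt

            print(melange("yellow"))  # ['nostalgia', 'mild pleasure', 'slight wariness']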

            Or do you mean something different by the phrase?

            “LOL! Shoe’s on the other foot here.”

            Hmmm, I might have totally missed your point here, or perhaps you missed mine. Nothing in my comment about conscious self reflection resting on many other capabilities was meant to preclude any of those capabilities from being implemented in a computer system. Really, I was just referring to the hierarchy I sometimes discuss. (1-reflexes, 2-perception, 3-attention, 4-imagination / sentience, 5-self reflection). My point was that 5 without 1-4 doesn’t provide a system we’d think of as conscious.

            “I wonder if we could get some people, in the interest of science, to spend a few years isolated from EMF…”

            Ha! We’d need to find those people who live in Faraday cages or walk around in tin foil hats. Although truly avoiding electromagnetic fields of any kind is extremely difficult. Ask the people trying to build quantum computers.

          5. “I wonder if you’d be willing to elaborate on what you see the phrase ‘something it is like’ meaning.”

            I mean what I take most people to mean by it: in a word, qualia. It has nothing to do with emotions, but what it is like to see a color, to taste sugar, to touch silk, to hear a chord, or to smell peppermint.

            It’s not about looking anything up. It’s about our response to inputs.

            “Hmmm, I might have totally missed your point here, or perhaps you missed mine.”

            Always possible. Let’s rewind…

            You1: “[W]e have to ask whether a system without phenomenal experience, in other words, one that couldn’t perceive colors, feel pain, etc, could nevertheless tell which fruit is ripe or whether a body part is damaged.”

            Me1: “Exactly. I believe there already are machines that categorize and sort fruit and eggs and such. […] They don’t appear to need PE to do these things.”

            You2: “So there are already machines that process the data, meaning it’s not the data flow itself that provides PE. Then what does? Might it be the surrounding functionality? The access part of consciousness? If we could put those access mechanisms in the machines, would they have PE? I personally suspect they would.”

            Me2: “It depends on exactly what you mean. Some machines, even some programming languages, can access themselves and their processes and report on them. So that much also doesn’t seem to provide PE. At a higher level, it seems to involve phenomenal experience, at least as recalled memory.”

            You3: “Remember, access consciousness is not equivalent to only self reflection. It enables self reflection but is broader than that. It includes any use of sensory data for imagination and decision making, as well as for self report. Certainly self reflection of a system that doesn’t have all the other functionality isn’t going to trigger our intuition of a conscious system.”

            Me3: “Normally you’d be pointing out to me all the ways a computer system actually really is like a brain system, what with the I/O and data processing.”

            You4: “My point was that 5 without 1-4 doesn’t provide a system we’d think of as conscious.”

            Seems like the conversation wandered from your original question, my answer to which is: yes, a system without phenomenal experience absolutely can tell if fruit is ripe or if a body part is damaged.

            “We’d need to find those people who live in Faraday cages or walk around in tin foil hats.”

            Or maybe just live in a very isolated region for a few years. It might even be productive to compare brains of people who’ve lived in isolated regions with those of city dwellers.

          6. “I mean what I take most people to mean by it: in a word, qualia. It has nothing to do with emotions, but what it is like to see a color, to taste sugar, to touch silk, to hear a chord, or to smell peppermint.”

            Let’s maybe approach this in a slightly different way. We’ve already established that the raw sensory input itself doesn’t confer phenomenality, qualia, what-it-is-likeness, etc. And you’ve ruled out our feelings about that input. So, in order to keep this as theory neutral as possible, purely in terms of phenomenal language, what is left to account for? If qualia isn’t accounted for by the raw information and how we feel about it, what is missing? (I actually don’t see that anything’s missing, but maybe I’m the one missing it.)

            “Or maybe just live in a very isolated region for a few years.”

            The issue I think is there are too many natural electromagnetic fields for that to be all that instructive. We’re always in the Earth’s magnetosphere, the Sun’s, and the galaxy’s, if nothing else. And any thunderstorm, volcano, or many other natural events can affect the electromagnetic environment. That, and it’s not like the world was particularly sane and rational before we had artificial sources of it 🙂

          7. “If qualia isn’t accounted for by the raw information and how we feel about it, what is missing?”

            You keep asking the same question, and I keep giving you the same answer: What it is like to experience qualia.

            Nothing we know of — other than brains — does this. (I know you don’t want brains to be special, but I think it’s self-evident they are. On this count alone, I’d say, not to mention the whole intelligence thing on top of it. And then the imagination thing…)

            Consider the photocell behind a red filter and wired with a circuit that moves an indicator saying how many photons it detects. Absent panpsychism, this circuit has no experience of redness or even of photons. There is nothing it is like to be this circuit.
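
            (Sketched as code — assuming nothing beyond a counter — the circuit’s entire “mental life” fits in a few lines:)

            # Minimal model of the red-filtered photocell circuit: it counts
            # photons and moves an indicator. Nothing here plausibly
            # experiences redness, or photons, or anything else.

            class RedFilteredPhotocell:
                def __init__(self):
                    self.count = 0

                def detect(self, photons):
                    self.count += photons
                    return f"indicator: {self.count} photons"

            cell = RedFilteredPhotocell()
            print(cell.detect(42))  # indicator: 42 photons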

            But there is something it is like for a brain to look at red.

            In part, because we attach meaning to it (that’s the AC aspect), but also because we experience it (that’s the PC aspect). And because we can remember it, and because we can imagine it. All of these factor into the experience.

            Ultimately, the point is there is something it is like to have that combined experience.

            Perhaps that’s just what it’s like to be something with that level of complexity and real-time processing. But it still raises the Hard Question: Why should it feel like anything at all? There is no precedent in anything we know, other than advanced brains.

            As you’ve pointed out several times here: There seems nothing in the physics to account for it.

            “The issue I think is there are too many natural electromagnetic fields for that to be all that instructive.”

            It would offer some data regarding the modern environment. We evolved in the ancient one, so whatever interference it causes would presumably be compensated for. The modern environment is considerably brighter in RF photons. That some people do seem to experience effects from low frequency EMF fields suggests it’s worth taking a careful look at.

            It would be enough, I should think, to just compare the brain function of people raised in isolated areas with that of people raised in modern cities. (The trickier part is that the cultural signal is likely much stronger. People in isolated areas just lead different lives.)

            “That, and it’s not like the world was particularly sane and rational before we had artificial sources of it”

            True, but when I compare the literature of the past with that of today, there seems something different about the collective gestalt. Probably accounted for by general cultural shift, but again, because some do seem sensitive to EMF, it seems worth investigation.

            As a comparison, we used to disdain people who suffered from eating wheat, but fairly recently have come to recognize gluten issues as a legitimate problem. (I believe fibromyalgia was also once disdained as unlikely to be real.)

          8. “You keep asking the same question, and I keep giving you the same answer: What it is like to experience qualia.”

            Here’s my issue. I’m not satisfied with words like “experience”, “qualia”, or phrases like “what it’s like”. I’m looking to have an account of these terms that doesn’t reference the term itself, or just use synonyms. The reason I’m pressing on this is that I suspect part of the issue here is people’s unwillingness to dissect these terms, to reduce them. But maybe I’m wrong. I’m trying to see if there’s something there besides my own understanding of the terms, which I know many would consider deflationary.

            But my question is, why is it deflationary? What’s missing? Emphasizing the terms, bolding them, repeating them, etc, doesn’t really help. I know the words and phrases very thoroughly by now. What I’m trying to understand is the full concept you and others see behind them. Maybe there’s some aspect that’s glaringly obvious to you and others that I’m missing?

            “The trickier part is that the cultural signal is likely much stronger. People in isolated areas just lead different lives.”

            Yeah, that’s the main difficulty. We can look at natives from the African savanna, interior of Papua New Guinea, or South American rain forests, but their lives are so radically different, teasing out what differences result from culture, diet, other environmental factors, etc, from electromagnetic ones, seems like a lost cause. Maybe we could sneak some hidden radio towers into their habitat and see how it affects them. I’m sure the IRB will be fine with it 🙂

          9. “I suspect part of the issue here is people’s unwillingness to dissect these terms, to reduce them.”

            I don’t know that it’s unwillingness so much as inability. We are talking about something irreducible, so it can’t be defined in terms of simpler concepts. It can only be described.

            There does seem a divide between those who find the phrase “something it is like to be X” entirely evocative and those who find no real meaning in it. You join Sabine Hossenfelder in the latter group, so you’re certainly in good company.

            “What I’m trying to understand is the full concept you and others see behind them. Maybe there’s some aspect that’s glaringly obvious to you and others that I’m missing?”

            Perhaps the unwillingness is on your side? You don’t seem happy with the idea the brain or consciousness is special (yet still within physicalism). In a recent discussion you denied consciousness was an objective property of reality.

            You and Sabine also share stances on Free Will and Theism, and even though she’s a theoretical physicist, I think she shares your instrumentalist stance as well.

            I think (and may well be completely wrong) you and Sabine see reality as a machine that, ultimately, can be taken apart and fully understood in terms of its pieces. Sabine, I know, fully believes in reductionism. In her view, consciousness will be fully understood (to the extent there is anything to understand — she dismisses the Hard Problem) in terms of the physics.

            You’ve lived in your own head all your life. You surely aren’t denying there is something it is like to be you. You’re having the experience right now of reading these words. The life-long personal movie starring you. Your autobiographical narrative.

            You know what that is. The issue seems more you want it to be something else. In particular, something ordinary and not special. I think that desire is doomed — that’s what I think you and Sabine are missing.

            Brains, and consciousness,… something special is going on there. 🙂

            “I’m sure the IRB will be fine with it”

            Heh! That’s the problem with experimenting on humans, isn’t it. Other humans object because of some silly “moral” rules. Almost as if humans were special, or something…

          10. “I don’t know that it’s unwillingness so much as inability.”

            Certainly sensory experiences are ineffable. The experience of yellow can’t be described, only labeled and referred to. And when I see a yellow thing, such as a banana, it triggers a lot of memories (most of which aren’t conscious) that lead to affects, many mild or nuanced, a melange of feelings about seeing the banana.

            Both the sensory impression of yellow and any associated affects are ineffable. They can’t be described because they’re in the layer that forms the foundation of all descriptions. It’s like trying to spell the letter ‘A’.

            How should we respond to that ineffability? It seems like we could account for the various ineffable sensations and how they are accessed and used, which is where people like Dehaene are. The question is whether there is anything else about the ineffable items that needs explanation.

            “Perhaps the unwillingness is on your side?”

            Perhaps. Maybe people like me subconsciously don’t want to see it. But then, perhaps you and the others subconsciously do want to see it. I’d say maybe we could look at which position is more emotionally comforting to get at who is engaging in motivated reasoning, but I suspect we’d just end up disagreeing on which one that is. If there’s a way to close this gap, I’m not seeing it.

            “In a recent discussion you denied consciousness was an objective property of reality.”

            I did. A position I elaborated on in a past post: https://selfawarepatterns.com/2019/01/27/consciousness-lies-in-the-eye-of-the-beholder/

            “Almost as if humans were special, or something…”

            Humans are definitely special…to humans. 🙂

          11. “It seems like we could account for the various ineffable sensations and how they are accessed and used,… The question is whether there is anything else about the ineffable items that needs explanation.”

            I’m wondering about your introduction of the word “ineffable” — a word I take to mean “beyond description.” I do think they’re “irreducible” and “very difficult to define,” but a great deal has been said describing these things.

            If you just mean mysterious and too complex to fully understand, then totes!

            I’d also like to ask exactly what you mean by “could account for” — account for in what way? I have the impression the Big Problem is accounting for the why of “something it is like.”

            Those two questions aside, you seem to describe the application of PC and AC (or just C). The Hard Problem, as I understand it, is the why — specifically why it happens at all when (A) it arguably doesn’t need to and (B) it doesn’t seem to happen in any complex systems other than brains.

            “Maybe people like me subconsciously don’t want to see it. But then, perhaps you and the others subconsciously do want to see it.”

            Absolutely!

            “I’d say maybe we could look at which position is more emotionally comforting to get at who is engaging in motivated reasoning, but I suspect we’d just end up disagreeing on which one that is.”

            No doubt. 😉

            I can say that, from my side, there seems something missing. I suppose, from your side, I’m full of it. 😀

            “If there’s a way to close this gap, I’m not seeing it.”

            No, there never really has been.

            “Humans are definitely special…to humans.”

            What is speciesist about noting that only one group of beings has: (1) discovered and made use of the electron; (2) discovered and made use of digital information; (3) discovered and made use of computing; (4) discovered and invented powered flight, including spaceflight?

            These are all highly unnatural — and therefore very special — things.

          12. I’m just using “ineffable” in the sense of being indescribable. While a great deal has been written about experience (although much of it maddeningly vague), I’ve never seen a description of the raw sensation of something like yellow, or a toothache. If you know of any, I’d be very interested to read it.

            By “could account for”, I just mean to understand the role of sensory perception and affects, why they’re adaptive, how they’re used in overall cognition, their role in the overall causal framework. I know you disagree, but to me this covers the “Why?” question. It’s simply a question of why it evolved and how it’s biologically implemented. There remains mystery in the implementation details, but that doesn’t strike me as anything we can’t learn about. I just can’t see the deep intractable problem.

            On humans, there’s no doubt we’ve gone far beyond our original ecological niche. But it seems to all have been enabled by differences in extent rather than sharp distinctions. The closest I’ve seen to a distinction, as we’ve discussed before, is symbolic thought, including language. The human brain is an evolutionary continuation of primate, mammalian, and vertebrate brains. And we shouldn’t forget the role of dexterity, our ability to manipulate the environment; the hand is as important as the word.

          13. “I just can’t see the deep intractable problem.”

            As I said earlier, it seems to be another of those questions that bifurcate people. There was a similar debate on Sabine’s blog when she wrote about “the Hard Problem.”

            [shrug] It’s certainly the hard problem for now.

            It may boil down to this: those of us who already apprehend the ineffable mysteries of physics may be more disposed to imagine that the brain (the most complex thing of all) retains some ineffability.

            “The closest I’ve seen to a distinction, as we’ve discussed before, is symbolic thought, including language.”

            Again: Electrons, digital, computation, powered flight, space traveling robots.

            The best anything else has managed is sticks. Not even the wheel.

          14. “It may boil down to this: those of us who already apprehend the ineffable mysteries of physics may be more disposed to imagine that the brain (the most complex thing of all) retains some ineffability.”

            Maybe, although my take is that it’s the very counter intuitive results of physics that make me suspicious of any problem that only exists because of our intuitions. Maybe we should be exploring why we have those intuitions instead of trying to explain what they intuit.

            On physics and mysteries, I’m much more impressed with the mysteries of quantum mechanics than I am with the mysteries of consciousness. The former is forced on us by empirical data.

          15. Maybe, although I’ve never given much weight to, nor seen much value in, intuitions. They can offer starting points, but as you’ve said often, they can’t be entirely trusted.

            QM has odd aspects, but it’s pretty well understood in terms of how to use it. I don’t think we have anything near the understanding of consciousness that really allows us to judge how mysterious it might be. (It may, after all, turn out to use some of those QM effects. On some level, all physical things do.)

  8. “What is it about the raw experience of red, or pain, or any of the other examples commonly cited, that requires explanation beyond our ability to access and utilize it as information for making decisions?”

    In a word, what requires explanation is meaning. We can understand and explain accessing and utilizing information by computers because we understand the source of meaning embedded in that information. But we don’t yet understand the source of meaning in our own information access and utilization mechanisms, and because the only things we can use are those selfsame mechanisms, the only things we can access are the meanings, via semiotic symbols. So we get the meanings, which some call “raw feels”, without explanation, which feels, um, magical.

    *
    [of course, some of us think we do understand the source of meaning in our own information processes. Okay, a really small number of us. Possibly just one of us.]

      Actually, while I don’t know if we know meanings with certainty, I personally don’t find that aspect of it all that mysterious. It seems to me that meaning is provided by access, by how the raw information is used, by what it enables.

      So there are at least two of us who think we understand. Maybe the ones who think we’re mistaken will explain why?

        So you are describing functionalism, and while I agree that meaning and experience are provided by certain functions, I humbly suggest they are not explained by those functions. This is Searle’s argument about the wall performing any given function. Any function can be mapped to the wall.

        So to put it into my terms, input —>[mechanism]—>output, a complete understanding of the mechanism and the output will not explain how the input “means” red. The explanation will have to refer to the mechanism(s) that created the input and the mechanism(s) which created the “perceiving” mechanism, and the explanation will involve a purpose (teleological or teleonomic). These are what differentiate us from Searle’s wall.

        *
        [still with me?]

        1. I am describing functionality, but it seems to me that you are too. It seems like you’re saying that the causal history of the various components is what provides meaning. If so, I agree. But ultimately the significance of seeing red is what it enables us to do (see ripe fruit, etc).

          I’m not particularly worried about Searle’s wall. I think he’s right about the wall, but what he fails to understand is that it applies to brains too. There’s nothing about a cluster of neurons that gives it meaning, except its interfaces to its environment, ultimately to the body and world. In other words, evolution provides an interpretation of the nervous system, but it’s still an interpretation.

          So I think I’m still with you, but do you perceive I’m not?

          1. No, it seems you’re with me, but have you then answered your question? That question being: What is it about the raw experience of red, […], that requires explanation beyond our ability to access and utilize it as information for making decisions?

            *

          2. I can’t speak to Taylor, but I’ve read Chalmers enough to know he’s aware of it. He’s written responses to Searle, Putnam, and Bishop on accounts of computation. From the citation section of the Stanford article on computation in physical systems:
            Chalmers, D. J., 1994, “On Implementing a Computation,” Minds and Machines, 4: 391–402.
            –––, 1996, “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 309–333.
            –––, 2011, “A Computational Foundation for the Study of Cognition,” Journal of Cognitive Science, 12(4): 323–57.
            https://plato.stanford.edu/entries/computation-physicalsystems/

            But maybe I’m looking at this as being too obvious and it’s not obvious to him? Maybe he has all the pieces and just hasn’t put it together? Seems hard to imagine, but then it seems like something is being missed. (I’m just trying not to assume it isn’t me.)

          3. You’ve apparently read more of Chalmers’ papers than I. Do these papers address raw experience, “what it’s like”, as semantics (meaning)? Because that would be a solution to the hard problem, and I don’t think he has acknowledged any solution to said problem.

            *

          4. Not that I can recall, but it wasn’t my chief concern when I went through them. And I’m pretty sure if he saw it as a solution to the hard problem, we would have heard about it 🙂

          5. Man, I’m bummed I haven’t seen that paper before. This is now my go-to paper for anybody who says a simulated brain is not conscious for the same reason that simulated rain is not wet. (Hi Wyrd 🙂

            And for the purpose of our current discussion, here is my takeaway line: “Of course, there is still a substantial question about how an implementation comes to possess semantic content, just as there is a substantial question about how a brain comes to possess semantic content.” This is the question I have been pondering and I think I have some things to say about it. But the next step, which is a step Chalmers hasn’t taken, is to say that this semantics is sufficient to provide the “raw experience”, the “what it’s like”, the qualia. I think I may shoot him an email and ask about it.

            *

          6. Good luck. I’d be curious to know what he says. In particular, what about semantics (which seems like the “problem of intentionality”, another thing philosophers wring their hands over) is so mysterious, that isn’t explained via causal histories and convergences?

          7. “This is now my go-to paper for anybody who says a simulated brain is not conscious for the same reason that simulated rain is not wet. Hi Wyrd”

            Hi James. FWIW, I think Chalmers does a very poor job when it comes to the objection, “But it’s just a simulation!” [Section 3.4] In fact, after reading this paper again, I think he shot himself in the foot.

            He writes: “The property of being a hurricane is obviously not an organizational invariant, for instance, as it is essential to the very notion of hurricanehood that wind and air be involved. […] There is no such obvious objection to the organizational invariance of cognition …”

            That last assertion, in my view, is poorly grounded. I think his assertions about “organizational invariance” need some analysis. It may very well be the case that consciousness depends on physical mechanisms just as hurricanes do. That seems, to me, a pretty obvious objection.

            Note that he admits a simulated hurricane is not at all actually a hurricane. His assertion is that consciousness — alone of all physical things — doesn’t supervene on its physical substrate. I increasingly find that assertion suspect. (And dualist.)

            The deeper problem to me is that his analysis is “based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation.” [emphasis mine]

            I agree completely! This is why he can say Searle’s wall doesn’t compute, nor do Putnam’s Pixies. The putative algorithm behind those computations is not, in any way, reflected in the causal structure of the supposed computation implementation.

            (I’ve asserted much the same in saying a putative computation needs to be the lowest energy account of the supposed implementation. Slightly different approach, but it amounts to largely the same thing.)

            But when it comes to computation, there is a disconnect. The causal topology of a computer reflects its operation as a computer. Aside from the abstraction represented by certain bit patterns (i.e. a program), there is no causal topology in a computation other than the topology of the computer being a computer.

            The computer doesn’t know, or care, what computation it’s running. OTOH, the “computation” performed by our brains does have a causal topology that matches the “algorithm” (as does any physical mechanism).

            This is precisely the difference I try to explain when Mike asks why I think analog and digital are so different. In a phrase: the causal topology.

            The causal topology of a physical process maps directly to its “program.” The causal topology of a numeric process only involves the causality of the numeric process itself. In a digital computation, there is no representation of the causality of the simulated process. (After all, it is only the interpretation of the output bits that gives any meaning at all to the generated data.)

            Which means a Positronic brain should work — it has the right causal topology — but a simulation may be doomed to fail.

          8. I think Chalmers’ paper is a solid defense of computationalism, but I would since I agree with him on this 🙂

            On causal topology, I don’t see the argument against the simple fact that any actual implementation of a computation is a physical system, with a causal topology, albeit a transitory one when implemented in a general purpose computer.

            On analog vs digital, did you see Chalmers’ separate discussion on discreteness and continuity in section 3.4? He shares my and DM’s conclusion that a sufficiently high resolution discrete system can implement everything that matters in a continuous system, but he also mentions continuous models of computation, citing this paper:
            MacLennan, B. 1990. Field computation: A theoretical framework for massively parallel analog computation, Parts I – IV. Technical Report CS-90-100. Computer Science Department, University of Tennessee.

          9. “I think Chalmers’ paper is a solid defense of computationalism,”

            Because?

            “On causal topology, I don’t see the argument against the simple fact that any actual implementation of a computation is a physical system,…”

            Agreed.

            “…with a causal topology, albeit a transitory one when implemented in a general purpose computer.”

            If you mean the program the computer is running, there is no causal topology for it in the computer. The computer itself has a definite causal topology in how it operates, but the bits it manipulates mean nothing to it.

            The data fetch, data store, and bit manipulations are identical to the CPU regardless of what program is running. Chalmers even says computation is strictly syntactical.

            Data always requires interpretation. (As I think you’ve asserted yourself.)

            “On analog vs digital, did you see Chalmers’ separate discussion on discreteness and continuity in section 3.4?”

            Yes, and this is not about precision. It’s about the causal topology of a physical system versus the lack of causal topology in a numerical system.

            Digital always disconnects us from physical reality. Just compare the pits on a CD with the grooves on a record.

          10. “Chalmers even says computation is strictly syntactical.”

            From the paper (section 3.4, Syntax and semantics):

            While programs themselves are syntactic objects, implementations are not: they are real physical systems with complex causal organization, with real physical causation going on inside. In an electronic computer, for instance, circuits and voltages push each other around in a manner analogous to that in which neurons and activations push each other around. It is precisely in virtue of this causation that implementations may have cognitive and therefore semantic properties.

            It is the notion of implementation that does all the work here. A program and its physical implementation should not be regarded as equivalent – they lie on entirely different levels, and have entirely different properties. It is the program that is syntactic; it is the implementation that has semantic content.

            “Because?”

            I think he gives a good account and addresses the major objections, at least as of the early ’90s when it was written. (It does give short shrift to Gödel, but then I don’t think that objection merits much shrift.)

            Not that I expected you, or any other passionate anti-computationalist to be convinced. If you had, I would have had to ask who are you and what had you done with the real Wyrd? 🙂

          11. “‘It is the program that is syntactic; it is the implementation that has semantic content.'”

            Agreed. The implementation is, indeed, crucial. I spent time thinking about it last night and realized Chalmers really does shoot himself in the foot (although I like a lot of what he says; always have). It’s gonna take a whole blog post to explain, though, so stay tuned.

            “Not that I expected you, or any other passionate anti-computationalist to be convinced.”

            I’m only a “passionate anti-computationalist” to the extent you’re a ‘passionate computationalist.’ Can we please stop assigning this to our emotions and let it turn on our rational analysis?

            My gut sense is that computationalism is utterly wrong, but I set that aside because I acknowledge the potential bias there. My rational analytical view, based on reading, discussion, and thought, is (as I’ve long said) that computationalism is an extraordinary claim that requires extraordinary proof. And I’ve explained, in detail, why I think that. (I’ve always acknowledged that it may well prove to be right, so I can’t be as passionately anti- as you suppose. 😉 )

          12. Chalmers’ kind response:

            hi — your view sounds like what philosophers call reductive
            representationalism, explaining consciousness wholly in terms of
            content. i gave arguments against that view in “the representational
            character of experience” (which you can find online), though i like a
            nonreductive version of the view (roughly where qualia have semantics
            but the semantics doesn’t wholy explain the qualia). see also e.g.
            the SEP article on phenomenal intentionality.

            *
            [doing some reading now]

          13. Wyrd, the following exchange suggests to me that you are missing something:

            Mike: “…with a causal topology, albeit a transitory one when implemented in a general purpose computer.”

            Wyrd: “If you mean the program the computer is running, there is no causal topology for it in the computer. The computer itself has a definite causal topology in how it operates, but the bits it manipulates mean nothing to it.”

            Chalmers, and Mike and I, are saying that the causal topology is a combination of the hardware and software. A computer running program A has a causal topology. That same computer running program B has a different causal topology. Computer number 2 running program A has the same causal topology as Computer number 1 running program A.

            If you, Wyrd, are saying that a computer has a causal topology separate and unrelated to the program it is running, then you are using a different concept of “causal topology”.

            *

          14. “Chalmers, and Mike and I, are saying that the causal topology is a combination of the hardware and software.”

            Yes, I know. I think the view is incorrect. It’ll take a whole blog post to explain why, so stay tuned. (Maybe by Monday.)

            The short form is problems with the definition of “causal topology.”

            “If you, Wyrd, are saying that a computer has a causal topology separate and unrelated to the program it is running, then you are using a different concept of ‘causal topology’.”

            I think the concept is being misapplied when it comes to the hardware+software combination. A running computer (running any program) has the very strong causal topology of being a computer. (Bits of it, for instance, are full-adders, which have a very strong and clear causal topology.)
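
            (For example, here’s a full-adder sketched in Python; its causal structure is fixed by the wiring, whatever bits flow through:)

            # A full-adder's gates fire in the same causal pattern for any input bits.
            def full_adder(a, b, carry_in):
                s1 = a ^ b                             # first XOR gate
                total = s1 ^ carry_in                  # second XOR gate
                carry_out = (a & b) | (s1 & carry_in)  # AND gates feeding an OR gate
                return total, carry_out

            print(full_adder(1, 1, 0))  # (0, 1)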

            Chalmers’ claim of causal topology for the software is a much weaker claim, and I think it clearly violates his own principle of “organizational invariance” — to wit, the organization and causal topology of the putative “mental states” in the computer is nothing at all like that of the putative mental states of the brain.

          15. Wyrd, here is a diagram of a causal topology:

            Input1->[mech1]->output1->[mech2]->output2->[mech3]->output3

            Each mech(anism) here is performing a specific function. Each mech could be replaced by a different mech, and as long as the new mech performed the same function, the causal topology would be unchanged. The following, however, would be a different causal topology:

            Input1->[ubermech]->output3

            A system is instantiating a causal topology if you can identify each individual part of the topology. So if you look at a given system and can identify input1, mech1, mech2, mech3, output1, output2, output3, and they have the proper causal relations, then that system instantiates that causal topology.

            Now when I say “each individual part”, that does not necessarily mean the parts are physically separate. It may be that a single physical component acts first as mech1, then as mech2, then as mech3. Ditto with inputs/outputs. That’s okay as long as those parts match in causal order and the functions performed match as described. If it appears that the component acts as mech2, then mech3, then mech1, that would not match the causal topology in question.
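
            (A rough Python sketch of the idea — the mechs are arbitrary placeholders:)

            # The same causal topology, instantiated with separate mechs:
            def mech1(x): return x + 1
            def mech2(x): return x * 2
            def mech3(x): return x - 3

            def pipeline(input1):
                # Instantiates input1 -> [mech1] -> [mech2] -> [mech3] -> output3.
                output1 = mech1(input1)
                output2 = mech2(output1)
                output3 = mech3(output2)
                return output3

            # Same input/output mapping, but a different causal topology:
            # no identifiable output1/output2 stages exist inside.
            def ubermech(input1):
                return (input1 + 1) * 2 - 3

            assert pipeline(5) == ubermech(5)  # same function, different topology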

            Does that change anything?

            *

          16. I agree with what you say about causal chains. My view is that the causal topology of the brain (or mind) is necessarily significantly (if not completely) different from that of any algorithm.

            The key difference being the brain reifies its “algorithm” directly and physically. But the computation is indirect and informational — its primary causality is the mechanisms of the computation. What is reified is the computational mechanism. It’s only in the abstract that the algorithm emerges.

            As you said, the semantics lie in the combination of the program, the machine it executes on, and the design of that machine. As you’ve also said, there is a direct semantics in the neural flow of our vision wiring (let alone at higher levels).

            I find the difference crucial. Brains have semantics; computers don’t (though computers plus brains do); therefore it seems computers alone can’t do what brains do.

          17. Wyrd, a causal topology is substrate independent, by definition. It doesn’t make sense to say the causal topology of the brain is different any more than it makes sense to say the number of cats in two cats is different from the number of nails in two nails.

            There are (infinitely?) many causal topologies that can be mapped to a brain, but there is one which is fairly straightforward and intuitive. That’s the one where each mech represents a neuron or other cell. In an actual brain, each mech is instantiated by a neuron, but (in theory) any given mech could be performed by some computer controlled robot. If such a thing were instantiated, it would have the exact same causal topology. Similarly if every mech were replaced by such a robot. Similarly if each robot was controlled by a single computer. Similarly if every input and output was replaced by its equivalent in said computer. Again, the causal topology is everywhere the same.

            *

          18. “Again, the causal topology is everywhere the same.”

            Only at the highest level, which computationalism assumes is okay. I don’t.

            The moment you remove the physicality of the network and neurons, the moment you replace that with software, the moment you simulate physicality, you have an entirely different principal causal topology in play.

            Computationalism assumes that’s okay. I don’t.

  9. Hi Mike,

    I’ve recently been toying with (for me) an interesting idea of panpsychism you may enjoy punching holes in. I’m not far enough along to argue for the idea’s validity per se; I’m just intrigued by the notion for the moment.

    The notion is that if there is anything to panpsychism and the idea that consciousness pervades all of nature, then perhaps our conception of this idea at present is upside down. Perhaps, anthropomorphically, we place human consciousness at the apex of known forms of awareness, and the idea is that we are mistaken in this assumption. Perhaps the simplest forms in nature actually possess the most expanded, or highest, forms of consciousness, and what emerges with biological complexity is actually a narrowing and specialization of this broader field. The intriguing thing for me is that this could be so without detracting in any way from the scientific research that has discovered so many correlations between neurological states and conditions of awareness; nor does it necessarily posit a duality as far as I can tell. It simply suggests that what emerges in biological life is a narrowing and specialization of what already exists in a more or less spacious or unbridled form.

    This supports the notion that a “self” is actually associated with this narrowing/specialization, as increasing biological complexity simply conditions the primordial consciousness into increasingly bound states. Thus a subatomic particle may possess nearly infinite consciousness. This may seem absurd on the surface, as from our vantage they don’t do all that much… but we do know that Richard Feynman showed the paths they take can be described as the sum of all possible paths through the universe, for instance. So who is to say?

    The primary obstacles to adopting such a view I think are the notion of a “self” and the notion of “free will,” and if we relinquish our attachment to these ideas what is there but our attachment to human forms of awareness to say that biological complexity might not stand in some sort of inverse proportion to the maximal forms of consciousness itself? That is, if panpsychism has any real merit.

    It’s a thought. 🙂

    Michael

    1. Hi Michael,
      One benefit I do at times see with panpsychism is it removes the privileging of how human brains process information, the notion that something special at a fundamental level is going on. I sometimes wonder if it shouldn’t be accepted just to have people stop looking for some magical vitalism in the brain.

      But on the concept you’re describing, my initial reaction is to wonder if we’re still talking about what is commonly referred to as “consciousness.” Granted, the word is hopelessly protean, but if we’re talking about something an electron has an infinite amount of, I’m not sure we’re still talking about subjective experience, or even proto-experience.

      I think we’ve discussed before that I see consciousness as information processing, but not all information processing is conscious. Reading your description, I wondered if we might be talking about information. Although I don’t see that a subatomic particle would have an infinite amount of it.

      Speaking in terms of quantum mechanics, a subatomic particle does have infinite possibilities on where it might be in a wave function, albeit with some locations far more probable than others. And as we scale up to macro-levels, all the interactions collapse all those quantum possibilities into the specific reality we deal with. But that seems broader than just consciousness. It seems to apply to the whole macro level of existence, and I’m not sure how it would relate to the consciousness we perceive.

      Sorry. I know you did a post on this, which I read, but meant to re-read before commenting. Unfortunately I let it slip through the cracks. Just bookmarked it so I’ll remember to get back to it.

      1. No worries on my post, Mike. It’s a very different conversation there than here–both of which I enjoy.

        I think your answer above reflects the difficulty in even considering the notion that I described, which would be that the simplest structures in nature have the greatest awareness. The primary reason to dismiss such a notion is that consciousness is presumed to exist in proportion to the information-processing capacity of organisms (or AI systems), and electrons and photons and the like possess virtually no information-processing capacity the way we view them. So, when you say, …if we’re talking about something an electron has an infinite amount of, I’m not sure we’re still talking about subjective experience, or even proto-experience, this is no more than a statement of a different position than I am asking you to consider.

        What I am asking you to consider, for a moment, if you’re willing, and just for the fun of it, is that consciousness might not be a phenomenon built up by and operating upon biological complexity, but that the forms of consciousness associated with biological organisms are actually reductions of a primordial consciousness that are conditioned by and operating upon biological complexity, such that increasing biological complexity more completely conditions, or bounds, or gives attributes to what is in its natural state wholly unconditioned and perhaps boundless–like a field capable of coupling with matter at any/every point.

        One argument for pantheism, as I understand it, and which you and Wyrd have been debating, is that the awareness of what we do is not necessarily to our evolutionary advantage. At minimum, it’s a debate that could be had with reasonable disagreement between educated people, I think. Because it is not our awareness of what is happening that matters one whit—it is what happens that matters. Our awareness of what we’re doing can reasonably be viewed as extraneous. So pantheism is something I think people consider as a way of saying that our awareness of what is happening isn’t unique to certain systems, but is always latent in every particle of the universe, in some degree.

        Duality emerges when we decouple consciousness from matter, and say that consciousness exists on the one hand, and matter and energy on the other, and somehow they overlay upon one another. Pantheism based on information processing attempts to do away with this duality, right? It posits that awareness is part of what matter and energy are, and then suggests that the forms of awareness, or consciousness, that emerge are simply the product of complexity as all these little motes of conscious potential congeal into some greater whole, at least with regards to information processing. But the dominant perspective of pantheism places matter and energy at the lead, and suggests that the simplest aggregations of matter and energy have the least awareness. As we get to the Planck scale of energetic and material phenomena, we have close to zero consciousness, right?

        I am saying, what is a good reason it could not be exactly the reverse? What is a good reason why the simplest structures of matter and energy could not be the most aware? The answer that they don’t process much information is not (in my opinion) an answer, because it presumes a different starting point at the outset, so it is sort of circular to both start and end with that reply. What I’m asking you to consider is nothing more than the notion that some form of consciousness, which may seem very strange to you and I, but which approximates the most unlimited form that consciousness can take, (as opposed to the dimmest possible form consciousness could take), is what the fundamental elements of matter and energy are. Is there a reason not to consider this conjecture? Probably all sorts of reasons. But I’m suggesting it be taken as a given for a moment, so that what is logically inconsistent about it can be identified.

        What I’m also saying is that what we call human consciousness, or dog consciousness, or octopus consciousness, is still specifically related to the biological structure of those organisms. But rather than being built up from nothing, it is being distilled from everything by the lens of the physical structure of the organisms themselves. There is nothing about this idea that would preclude specific forms of consciousness, like human consciousness, from existing in a one-to-one relationship to states of the human body, and so nothing to really detract from the science we have. It just flips the spectrum of consciousness and says at the bottom of the universe, (to speak in directional terms which don’t make a lot of sense, I know), is the greatest luminosity of awareness, instead of the dimmest.

        I’ve gone on for a while here because your response seemed like it came from the vantage I know is your home base—that consciousness is information processing built upon biological complexity. So in essence, you didn’t say why this idea does or doesn’t make sense to you, except to say that you think differently in general. Which I already knew.
        What would be the evidence you would present for this being a flawed perspective?

        Michael

        1. Thanks Michael.

          You use the word “pantheism” rather than “panpsychism”. I wonder if this was intentional, because it seems to be what’s on your mind. What you describe differs from typical panpsychism, it seems to me, in much the same way that pantheism does. Infinite consciousness that we are all a small constrained part of seems similar to the traditional pantheism narrative.

          What evidence would I present for it being flawed? I don’t know that there is any, at least unless it makes some sort of predictions we could then compare to reality. Or perhaps I should ask, if it is true, then what do you see as the implications?

          Do you see this infinite consciousness as unified? In other words, does it have thoughts, plans, emotions, attention, etc? Or is it more a diffuse sort of force?

          If unified, how would you see it being reconciled with the expanding universe? 97% of the observable universe is already forever unreachable, and matter is constantly moving beyond our cosmological horizon. Would a unified consciousness reach beyond this horizon? If so, how? Or what about black holes and their event horizons?
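
          (For a rough sense of that figure, using back-of-envelope numbers of my own rather than anything precise: the comoving distance to the cosmic event horizon is about 16–17 billion light-years, while the edge of the observable universe lies about 46 billion light-years away, so the still-reachable fraction by volume is roughly

            \left(\frac{16.5}{46.5}\right)^{3} \approx 0.045,

          i.e., only a few percent of the observable universe remains reachable even in principle.)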

          (We discussed quantum wave functions above. Do wave functions spread beyond these horizons?)

          You mentioned that this “is what the fundamental elements of matter and energy are”. This seems like a form of idealism. Although if we define this form of consciousness you’re talking about as being equivalent to existence, then the quantum wave function stuff might become relevant again, since what constrains an elementary particle’s existence seems to be its interaction with the environment.

          On discussing this in terms of how I usually think, I’m not sure how to break out of that, at least while remaining authentic. If you ask me to think about a proposition, you’re going to get something in scientific or reductionist philosophy. My trying to relate what you’re saying into the way I see things is just an attempt to find an interpretation I might find plausible.

          1. Hi Mike,

            Pantheism was a Freudian slip. I read about that term once a number of years ago, but don’t recall exactly how it differed from panpsychism. My intent here was to avoid invoking any historical concept of a deity, and to focus instead on a simple conceptual position: what if consciousness exists in relationship to matter/energy as I suggested in the note above? It may well be similar to pantheism, but in any event, I was trying to start with a very simple idea.

            As to your final paragraph, when you note that you are not sure how to get out of your usual patterns of thinking while remaining authentic, this is of course a problem for dialogue between any two parties with diversity of viewpoints. I’m not asking you to be inauthentic in the sense of compromising your ethical positions or beliefs–unless you think that temporarily testing a logical position or attempting to see the world through another person’s eyes is, in fact, crossing some line. The inability or unwillingness to entertain positions different than one’s own is really just fundamentalism, is it not? I don’t actually think of you as being unwilling to explore other positions, so I’m not sure I’m understanding your comment as you intended it. But it is clear that you have some difficulty in speaking with me “as if” the idea I proposed were feasible.

            Were this notion correct, there could be many implications to our concepts of reality, or very few in fact, depending on the details of how this starting point were developed. But it’s not going to be a fruitful discussion to suggest implications if the vantage from which they will all be viewed is the vantage you have difficulty stepping outside of. We’ve done that before.

            If there was one idea I would ask you to consider meaningfully, it would not be that your view of scientific reductionism is wrong, but that it is a logically consistent system of thought with a starting point that is itself outside of the system. The starting point is a choice, and one that cannot be proven correct. Granted there is a great deal of evidence that comes after that, which is not logically in conflict with the starting point, but the point I would ask you to consider is that the reason the evidence is not logically in conflict with the starting point is that the starting point defines the evidence.

            Let me say it one other way. There is, at face value, as you’ve agreed above, nothing logically flawed per se with a different starting point. Once you accept that alternate position as your starting point, a logically consistent thought system can also be developed using the same evidence a scientist would use; only because the starting point is different, the evidence stands in a different light. But there is no way for me to argue one starting point or the other is “true.” Nor do I care to do so.

            What I care about most, actually, is not who is “right”, but acknowledgment of the validity of diversity in starting points, which would enable a richer, more expanded conversation without the rigidity and defensiveness we see between differing philosophical positions. As things stand now, the way I see it is that we are generally at loggerheads in our society about things that are simply preferences, but we rarely if ever get to the real conversation, which isn’t about what any given fact or bit of evidence means, but about what our starting points are.

            Because our starting point determines which way the light of our thinking shines upon the topography of facts in which we navigate, which in turn defines the direction in which the shadows of our attentions fall. And arguing, discussing, debating about the evidence we see in the shadows is getting us nowhere, really. It is actually preventing us from navigating the spaces we must share in order to bring about the improvements we would all wish to see in our world.

            I should say that generally speaking, the theists of the world are desperately in need of this same medicine.

            Michael

          2. Hi Michael,
            I’m sorry you perceive that I didn’t try to engage with your idea. I did, to the extent I could. That’s what the questions, and my trying to relate it to the world as I understand it (including the evidence I’m aware of), were about. It wasn’t simply to supply a counter-narrative. But it seems like it wasn’t what you were looking for. Sorry. If I knew a way to do it better, I would. Some of it might be that my interest in spiritual matters is pretty limited, which probably hobbles my ability to give you the type of discussion you wanted on this.

            On the rigidity and defensiveness we see between different positions, if you can find a way to avoid that, I’d be very interested. People seem to have powerful emotional feelings with a lot of this stuff. I stopped debating theists years ago, because I perceived that the debate was painful for many of them. (That and it didn’t seem like there had been new arguments for a long time.) But many people seem to have similarly powerful feelings about consciousness and the mind. Sometimes I wonder if I shouldn’t just shut up about the subject, but I’m too interested in it to leave it alone.

            On starting points, that’s a proposition I often hear from spiritualists and similarly minded people. The implication is that science is making all kinds of assumptions. Why not try different ones? Why not evaluate evidence under the assumption that certain spiritual beliefs are true? Such as God exists? Or that the paranormal phenomena are real?

            But I prefer to start with as few assumptions as possible, and force every proposition to be justified. Undoubtedly I’m currently making unwarranted assumptions, perhaps for psychological reasons, or due to cultural indoctrination. But I think we should have a goal to try to discover these and scrutinize them. Of course, you can get carried away with this and bury yourself in solipsism. In the end, for me, it comes down to what enhances our ability to predict future experiences and what doesn’t.

          3. Hi Mike,

            First of all, there is no need to apologize. Your latest note, and the direction this exchange has taken, feel more like talking to one another than solely debating ideas, which is truly enjoyable for me. And I love this subject too, or I wouldn’t be here. If my response led you to feel the need to apologize, then it may not have come off as intended. I am sorry about that, too. There was no intent on my part to chide you. There was instead an effort on my part to try and explain in a clear-headed way where I experienced a gap in your willingness to entertain a foreign idea, and to suggest that this type of gap (which occurs everywhere, all the time, between any two people with different vantages) is directly related to our inability to communicate on issues that people get emotional about, myself included. As a partner in conversation, I hope you might find this information useful.

            You may think that I’m asking you to engage on some spiritual footing that is unfamiliar to you, but I am not. I was really just asking you to consider a version of panpsychism in which consciousness is most expanded when coupled to physical form in the most limited way, as opposed to vice versa. And I appreciate that you gave it your best. My motivation was both to explore a notion that was new and intriguing to me, and to suggest that there are multiple ways of viewing our experience of reality that are potentially consistent. The second aspect of my motivation is a response to the idea sometimes presented here, and elsewhere, that a person like me who keeps the door open for possibilities that differ from a strictly materialist viewpoint is somehow not willing to think or consider evidence, or insists on thinking as they do strictly for emotional solace, or is well-intentioned, but naïve and delusional.

            The way to avoid rigidity and defensiveness with people of various backgrounds and thought systems and orientations in belief is to consider the potential validity of every position you encounter, and not presume that the reason they are unwilling to adopt your own position is that they are needy. I have complete respect for your preference to begin with as few assumptions as possible, and I comprehend the striking power of a materialist perspective. It exerts a certain sway over me, too, and I’m uncomfortable proposing ideas that are in obvious conflict with the evidence we as a species have developed to date.
            But, at the same time, I’m uncomfortable telling other people just as smart as I am that they have devoted their lives to delusion. When I say this, you might (I don’t know) conjure the image of a fundamentalist, book-oriented, text-thumping theist, but those are not the persons I am speaking about. It is my preference to seek to understand as many sides and positions as I can. In doing so, I have discovered something very important, I think, and that is that the starting point of a thought system is everything. There is really no such thing as evidence that stands apart from, or outside of, the orientation in which the evidence was cultivated.

            Does this mean I don’t believe the facts derived from the double-slit experiments, or the eclipse of the sun? No. I think those facts are obvious. But in my opinion, the interpretation of those facts can never make the entry point into the thought system objectively valid. And so what happens, and what causes rigidity and defensiveness, is that people have an extremely hard time engaging when the entry point into their entire thought system is threatened. We all do.

            The honest answer in my opinion, is that we simply don’t know what the correct starting point is. But we kind of have to pick one to get going, so we’re all groping in the dark here. Each starting point has its own array of outcomes, and territories to explore, and the dialogue that interests me most is the one where we each say, you went left when I went right, and what did you find there? That, to me, is how to avoid rigidity and defensiveness. And if you have two people willing to engage in the temporary discomfort of not knowing for a little while, they can each describe to the other what they have found, and compare notes.

            But it requires moving the conversation from a debate about what is correct, to a discussion about what obtains from various points of beginning. And a corollary to this, is that it is not simply a debate of ideas, but a getting to know one another.

            I am a tribeless person, Mike. Or a multi-tribal person perhaps. I like to think I have a willingness to consider different starting points and their consequences. The topics you bring up are truly interesting to me, and I think they could be very intriguing to explore from a variety of vantages. When you note that spiritual people imply that science is making all kinds of assumptions, it sounds like you disagree with this being a reasonable assertion on their part. I don’t personally think science is making all that many assumptions, but I do think it’s reasonable to accept that it’s making at least one. That one assumption has very good reason for coming into being, and it also has very powerful and fruitful outcomes. But it also has limitations. And the outcomes—at least in my opinion—do not inherently justify nullifying all outcomes that derive from other starting points.

            The real crazy thing, for me, is that we simply can’t attack the problem without an initial stake in the ground. Knowing that, I’m interested in multiple stakes and what they reveal. My only point of difficulty, in truth, is the elevation of one particular starting point above others. Having said that, I can appreciate that you might wish to have the conversation here in terms of only one starting point. If that is so, I can certainly respect that, and should probably move on and cease engaging in what might feel like badgering. It is certainly not my intent to do so.

            My neediness, in a sense, is my interest in both sides of the coin. So to return to your note, I have no interest in something as banal and ill-defined as asking you to presume God is real. We don’t even know what the word God means. Or to say it another way, I can almost guarantee you that it means something different to the two of us. But if we were to take that one question, I would perhaps rephrase it as follows to provide a more interesting discussion: if you were to posit momentarily that some sort of God existed, what type of God would give rise to the universe that exists as you have encountered it? What type of God would be consistent with your understanding of this world?

            Or we could ask: if it is correct that the starting point of our various lines of inquiry has such a bearing on our interpretations of evidence, then how are we to make sense of anything whatsoever? How could we possibly derive a firm footing on which to stand?

            I don’t know if any of these questions are interesting to you. They may not be. I’m just trying to suggest questions that don’t put us into positions of defensiveness.

            Michael

          4. Hi Michael,
            I appreciate the clarifications. Sounds like I did misinterpret your sentiment. I’m grateful for your gracious comments.

            On considering the potential validity of every position you encounter, I can see that. The problem is that we all have limited time. I’ll consider just about any proposition, to an extent, and do my best to interpret what I’m being told in the most plausible manner possible. But I will ask questions, particularly about points I see could be an issue. Sometimes the person has answers and I do end up accepting a proposition I was initially skeptical of. (I credit Eric Schwitzgebel with often selling me on points I’m initially leery of.)

            But I’ve had people, when they couldn’t answer questions about their position, want me to watch long videos, read books or long articles, or generally sink a lot of time into investigating their position, and then get offended and nasty when I won’t. And people have stomped off the blog when I or others didn’t ultimately accept their position, no matter how much discussion preceded that outcome.

            I’m not comfortable telling people they’re devoted to a delusion either. I might think they are, but I’m not going to go out of my way to shove my views on them. On the other hand, I believe what I believe (or more accurately, don’t believe), and if they show up and force the issue, they’ll hear my views.

            On different starting points, I see what you’re saying. I guess my conception of the truth is that it’s usually forced on us by experience, regardless of our starting point. Indeed, the most reliable knowledge comes from conclusions we converge on from multiple starting points. I’m generally suspicious of a conclusion that depends on a particular starting point. Such a conclusion feels fragile. I want my conclusions to have an inescapable necessity. Consider that quantum mechanics is forced on us by the data, no matter where we start from.

            On what God means, I actually hold numerous conceptions in my mind. There is the angry fundamentalist god, the loving all-father figure, the cosmic architect that either intervenes in its creation or doesn’t, the impersonal ground of being, the universe as a whole, or the fundamental laws of nature. My stance toward God depends on which version we’re considering. If God is the laws of nature (and there are people who worship that conception), then to paraphrase Sagan, there are very few atheists, since it’s madness to deny gravity.

            Questions of epistemology are always interesting to me. Remember, I’m an instrumentalist, and probably far less committed to materialism than you might think. My commitment is to truth, which I take to be equivalent to that which enhances our ability to predict future experiences. I gave William James’ radical empiricism more credit than you might have expected. But even under that empiricism, there are concepts that aid our predictions and those that don’t.

            Anyway, thanks Michael. I always enjoy our discussions!

        2. I hope you don’t mind my jumping in. I noticed my name, and then I noticed your question:

          “What is a good reason why the simplest structures of matter and energy could not be the most aware? The answer that they don’t process much information is not (in my opinion) an answer,…”

          For me the reason is there isn’t enough structure or complexity at that level to support awareness.

          Awareness surely involves content — information with structure. To have such awareness requires a place for such awareness to reside. There has to be a “buffer” (so to speak) to contain the content of awareness.

          There is something called the holographic limit. It’s the surprising finding that the amount of information a volume can contain is constrained by its surface boundary.
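
          In its standard (Bekenstein–Hawking) form, which is a textbook statement rather than anything specific to this thread, the bound says the maximum entropy of a region scales with its boundary area measured in Planck units:

            S_{\max} = \frac{k_B A}{4\,\ell_P^{2}}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}

          A Planck-sized region can therefore hold on the order of a single bit of information.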

          Down in the Planck regime there are two problems. The first is the holographic limit, which constrains how much information can be in such a tiny volume.

          The second involves energy. Size is inversely related to the energy needed to extract information. So is time, which is why slo-mo cameras need more light (more energy) to extract image information from tiny time slices.

          Likewise, looking at smaller and smaller things requires more and more energy. Hence CERN — a giant honking accelerator using many tera-electron-volts to study quarks and gluons.

          More to the point, we think the amount of energy required to extract information from something Planck sized is so great, in such a small area, it would collapse into a micro-black hole. Thus the Planck limit isn’t just a size limit, it’s an energy limit. (The Strings in String Theory are under incredible tension, which is the energy they use to be that small.)
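
          As a back-of-envelope check on that claim: resolving a length scale \Delta x takes energy E \sim \hbar c / \Delta x, and packing energy E into a region creates a horizon at its Schwarzschild radius r_s = 2GE/c^{4}. Setting r_s \approx \Delta x gives

            \Delta x \sim \sqrt{\frac{2\hbar G}{c^{3}}} \approx \ell_P

          so at the Planck length the probe itself would collapse into a black hole, which is exactly the joint size-and-energy limit described here.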

          All of this, structure, size, and energy combined, makes it very unlikely (nearly impossible) that something like awareness, especially the “most aware,” could exist at that level. (At least based on physics as we understand it.)

          1. Hello Wyrd,

            I certainly don’t mind. It is always helpful when someone saves me from myself. Ha!

            Your points are all perfectly reasonable within the context from which you are offering them, but I was speaking from a different context and so I don’t see them as applicable in the same way that you might. I’m not saying that you’re incorrect about the vast quantities of energy required to mine ever smaller bits of space and time, or that there are physical limits to energetic information storage, but in the context from which I was day-dreaming, the most extensive form of consciousness would require the least physical hardware on which to operate. That is nothing but sheer assumption—the starting point for a game of what if…

            Your answer is predicated on that sheer assumption being untenable, which eliminates the game completely, and that is fine. I can see how difficult it is to develop a thought system with a different starting point, when it seems so completely insane from the perspective of one’s own. But that is the question nonetheless.

            Let me offer some potentially very bad ideas for consideration. Let’s hypothesize that primordial awareness is much more expansive than human consciousness, and that it is not the product of energetic information processing, but the reverse is so: matter and energy are the product of primordial thought. With regards to the bounds you offered, I could be mistaken but I think they are the bounds for observable energetic imbalances—meaning, if you want to measure a photon, you need a perturbation of the EM field to produce such a photon. Everything we can possibly measure or experience is the result of a perturbation, including all of matter and energy.

            So, let’s hypothesize that all known forms of matter and energy—including gravity, strong and weak nuclear forces, light, inertial and gravitational mass, the electromagnetic field, etc.—are the result of a temporary imbalance to a singular “field” (I’m out of words, here) that when unperturbed is completely massless, completely motionless, completely unobservable, and yet profoundly aware. Now how many perfectly cancelling waveforms can one amass on the head of a pin before you exceed the bounds of observable energy? I don’t know. Why could I not hypothesize a nearly infinite quantity of perfectly cancelling waves or fields? While they are all perfectly self-cancelling, there is nothing to observe, no mass or energy, no stuff or push or pull or density or any physical parameter to be measured, and so there is no limit on this, is there? And if there is no limit to the complexity that cannot be observed, then why could we not hypothesize that it has something like awareness on a scale or with a quality we cannot really imagine?

            And to complete this experiment, let’s suggest that this perfectly neutral field (for lack of a better word) is not only a sublime form of awareness, but that the act of disturbing its own perfect balance produces the matter and energy we observe. This is the equating of a type of thought we cannot possibly comprehend, with the behavior of matter and energy. But it is a type of awareness—a type of thought—nonetheless. All I have done is proposed that awareness does not follow from matter and energy, but that matter and energy follow from awareness. These two alternatives are virtually impossible to tease apart. I’m not sure we can or ever will, which is basically my point.

            And now we’re off to the races. Imbalances (quarks and other primordial particles let’s say) collide, expand, annihilate one another, appear from the vacuum, and do all the things we’ve observed little ingots of imbalance doing for as long as we’ve been capable of observing them. And over time they join into more complex structures, and what I’m suggesting is that as the complexity of daisy-chained imbalances increases, the type of awareness is proportionally refined. Until you reach the point where, yes, if you want to have human consciousness, you need the human organism. And evolution stands. And quantum mechanics stands. And relativity theory stands. At least until science discovers something more.

            But meanwhile, a universe with this structure may have potentialities at rest within it that a universe in which awareness follows from matter and energy does not. I’ll stop there for now. But that is the idea, Wyrd.

            Michael

          2. What you’re trading in is poetry and science fiction. I wish you a delightful journey, but it’s not a bus I’m personally interested in riding. Do enjoy your trip!

  10. Well let’s see if I can help sort some of this business out, Mike. One issue is that for some of us, I think you might be going a bit too ontological rather than epistemic with your use of terms like “mysterious” and “special”. I consider these terms effective descriptions of phenomenal experience, and mean this in the vein of “I don’t understand how nature creates this sort of thing, and find it strange in general”. Surely you’re not going to tell me that I do understand how nature creates this, and that I don’t consider it quite special? Thus I wouldn’t think that you’d have any issue with me classifying phenomenal experience under such headings epistemically. Yes, these terms could be meant in an ontological sense, though for me that’s simply not the case.

    All of which brings me to the reason for this post. Many people see phenomenal consciousness as somehow an intractable problem, one that science can’t solve, one that many people cite as driving them towards various forms of dualism or the expansive types of panpsychism that Taylor advocates.

    Right. Of course that’s not me. But I worry that you might go too far the other way and so not give phenomenal experience its due credit in nature. It could be that valence effectively serves as “fuel” which powers the conscious form of function. Here access consciousness would ride along as a crucial informational form of input, though without phenomena it would cease to have any “personal” relevance to a conscious entity — conscious function would stop.

    1. Eric,
      I don’t think I’ve ever argued that people don’t genuinely find this mysterious. Admittedly, I have gotten into the psychology of it before, although I’ve been trying to avoid that in this discussion, but people get into my psychology all the time, so I don’t feel too guilty about it.

      And I’m asking what is it about phenomenal consciousness that makes people find it so mysterious, that convinces them that there’s a problem here that needs addressing, possibly with super-physical or exotic-physics explanations? If even asking that is seen as denying their personal experience, well, how then should I address this question?

      I know valence as fuel is your usual selling point. I do see affects in general as input into our reasoning faculties, just as sensory input is. For me, the sum total of the sensory input and affects adds up to our “something it is like” experience, or more accurately the foundation which feeds that experience. Given that this is our most primal experience of reality, I can see why people attach a lot of significance to it. I’m with everyone up to that point.

      It’s the next step, the one that says that there is something about this above and beyond the flow of data that must be explained, that I don’t see, and would like to see elaborated. So far, most of what I’ve gotten (aside from James’ point about semantics) seems like restatements of the basic intuition.

      1. Okay Mike, well defended. You weren’t asking if people truly do consider this question mysterious, as I implied, but rather wondered why they do.

        Anyway your post itself seems pretty safe to me. I haven’t objected. But then you didn’t highlight one of your more speculative associated beliefs. From the metaphysics of naturalism you can defend the post itself pretty easily, though there is an issue that many of your readers are aware of. It’s that you believe there isn’t much more to valence than one part of the brain signaling another part of the brain.

        This is of course possible, though I believe it may not be quite that simple. But what are the implications of our respective positions? Which of them makes more sense given standard evidence? And regardless of how it’s created, what are the implications of an entity having valence in itself? Beyond the “how” of it (which to me isn’t the critical question), wouldn’t it be helpful if science were to develop an effective definition for the “consciousness” term itself? An effective “What?” answer?

        You’ve mentioned seeing valence as “input into our reasoning faculties, just as sensory input is”. (To these “value” and “information” forms of input, my model adds a “memory” form of input, by the way.) So if these are inputs to consciousness, then what is it that outputs them? The brain of course! So from here could it not be said that a non-conscious form of computer (or brain) outputs inputs to a conscious form of computer (like yourself)? And notice that I haven’t even challenged your belief that valence exists simply enough as one part of the brain signaling another. Let’s set that question aside for now. Do you see anything wrong with this sort of architecture so far?

        1. Eric,
          “It’s that you believe there isn’t much more to valence than one part of the brain signaling another part of the brain.”

          This is true, but I’m wondering what you see as the alternative to this. For a non-dualist, what else can any aspect of consciousness or cognition be other than parts of the brain signaling other parts (at least aside from sensory input and motor output)? What part of your own model would transcend this constraint?

          “Beyond the “how” of it (which to me isn’t the critical question) wouldn’t it be helpful if science were to develop an effective definition for the “consciousness” term itself?”

          That assumes that there is a single definition that will capture all our intuitions about consciousness. Maybe there is, but if so, I can’t see it, at least other than vague ones like “subjective experience” or “something it is like”. But one that tries to get into the details? I suspect it’s like trying to define “life”.

          “So from here could it not be said that a non-conscious form of computer (or brain), outputs inputs to a conscious form of computer (like yourself)?”

          I see what you’re trying to do with defining consciousness as a computer. I just don’t know if I find it a productive outlook. Maybe if the idea was fleshed out more. But you’ve been resistant to delving into those details. The problem with staying at too high a level is you may not be wrong, but you may not be productively right either. There’s just not enough yet to make it into either category.

      2. “For a non-dualist, what else can any aspect of consciousness or cognition be other than parts of the brain signaling other parts (at least aside from sensory input and motor output)?”

        Well try this Mike. If the brain produces valence for something other than the brain to experience (which is the conscious entity), then this entity should be useful to consider as an “output” of brain function. It’s like the screen on your phone, or an output which is animated by computation and so is not in itself computation. This is to say that it’s not just one part of the computer signaling another part. Just as your phone has self contained output mechanisms like a screen, couldn’t a brain be a machine that doesn’t only compute, but also has various self contained output mechanisms? Valence producers? Sense producers? Memory producers?

        I very much like this phone-screen-to-valence analogy. None are perfect, but just as electricity lights up pixels on your screen, I consider valence to be what drives the function of the conscious entity which is animated by the brain. Without valence your own experience is extinguished as well, as I see it, even though the non-conscious brain might still be doing all sorts of other things. Regardless, there should be nothing it is like to be you under a perfect void of valence. This is primal consciousness itself as defined by me.

        Still I don’t want to get too semantic here and so be entirely closed off to your “one part of the brain signaling another” account. I can go that way as well if need be. It just doesn’t seem very clean to me. Wouldn’t this mean that there’s a conscious part of the brain? I’ve never heard you imply anything like that. But I can still take my model in this direction if need be. As I’ve said, it’s the “What?” of consciousness that interests me rather than the “How?”

        Regarding the “What?”, I don’t think that I need to account for everyone’s intuitions. All that I should need to do is develop a model which mental and behavioral scientists in general find effective for their work — something with details from which to predict what’s observed. And yes I do believe that the model which I’ve developed has such potential. Helping others understand in a “practical” rather than just “lecture level” capacity has been challenging though. To the extent that you can tell me what my models suggest about various practical matters, you’ll have it. I very much desire this.

        I can see why you’d analogize the “consciousness” term with “life”. Couldn’t mental and behavioral scientists explore our nature effectively without a useful term for “consciousness”, just as biologists have been able to do without a nailed-down definition for “life”? Apparently not. Note that biologists have been able to observe physiology progressively better as technology has improved. They’ve also gained the support of chemistry as this field has developed. It’s not like the nature of viruses couldn’t be grasped because biologists weren’t sure if it was effective to call this stuff “life”. Biologists have had solid positions from which to explore both weird and non-weird dynamics in their field.

        An effective definition for consciousness, conversely, is more akin to functional brain architecture. Just as chemistry would remain a soft science if it didn’t have a functional model of the atom, our mental and behavioral sciences remain soft today given that they do not yet have functional models of brain architecture (which should include an effective definition for the consciousness term).

        And why has this sort of thing been so vexing? I suspect because macro human function is so important to us, that we permit standard human biases to trip us up. Thus the need for effective principles of metaphysics, epistemology, and axiology, which is to say for philosophy to finally enter the realm of science.

        On defining consciousness as a form of computer, I only phrased it this way because I thought you’d be fine with it. In my discussions with Wyrd for example, who has adopted tight technological specifications for “computation”, I try not to associate this term with the function of life. Instead I reference two forms of “brain function”. Some should find the computer analogy helpful, though others should not.

        Of course you do find this helpful regarding the brain itself. But is consciousness not useful to speak of as a second form of computer produced by the first? Apparently when computer science began, which was before technological computers even existed, there was a profession of people who functioned as “computers” themselves. They would do their work by means of “consciousness”. Furthermore I’ve heard Wyrd repeat a saying in the field which goes something like “Computer science is no more about computers, than astronomy is about telescopes”.

        I don’t think I’ve resisted delving into the details of my model with you, though since you’re so interested in neuroscience it might feel that way. My models simply don’t get into that sort of thing. But that doesn’t mean there aren’t associated details to check by means of experimental evidence. My ideas are essentially psychological in nature.

        1. Eric,
          I think the difficulty here is that if you’re going to talk about the brain and its outputs, you need to be careful with loose language. Talking about it outputting information to something else (other than the rest of the body) smacks of dualism. I know you don’t mean that, but that’s what that type of language implies. Yes, talking about portions of the brain talking with other portions is very messy. Unfortunately, that’s the reality. Biology is messy.

          You’re right that I generally don’t talk about any one part of the brain being conscious. It’s a mistaken view, akin to Descartes’ attempt to localize it in the pineal gland. But different cognitive functions operate in various networks in the brain, and those definitely involve specific locations. When we talk about conscious access, the frontoparietal network appears to be a major player, although portions of that network can be damaged and the rest function in a reduced capacity. So talking about valences involving communication between the prefrontal cortex (part of that network), limbic system, and brainstem, is productive.

          Of course, if you don’t want to get into all those messy details, you could completely eschew any talk of brains and neuroscience and strictly stick to psychology. I was surprised how fruitful this approach can be when reading Steven Pinker’s ‘How the Mind Works’. But I understand your opinion of psychology to be pretty low. (Admittedly some of the nutty stuff that comes from some psychologists does hurt the credibility of the field, although I do think there is a solid scientific core there.)

          On viewing consciousness as a second form of computer generated by the brain, I think I can see what you’re trying to get at, but describing it that way leads to all sorts of questions. For example, we have a general understanding of how information processing happens in a biological neural network, but how does it happen in this second form of computer? What are its components? What does the architecture of the second computer look like?

          On finding a productive definition of consciousness, I actually think focusing on the word “consciousness” by itself is a boondoggle. I just finished reading Stanislas Dehaene’s book on consciousness, where all of his research is focused specifically on “conscious access”, the utilization of conscious content for decision making. By narrowing his focus, he’s got something objective he can aim empirical work at. I think that’s a productive way to go. Trying to get everyone to use the word “consciousness”, a word that’s been around for centuries and has never been used consistently, is probably a lost cause.

      3. Mike,
        I have no problem with parts of the brain talking with other parts of the brain. I refer to all such function as “non-conscious”. This concerns the neuroscience of brain function, which is to say “engineering”. I do enjoy reading about this stuff casually a bit here and there, though it’s not truly my cup of tea. I’m interested in brain function at the macro level, which is to say its “architecture”.

        It’s true that I’m speaking of the brain producing something that the brain does not itself experience. This does not inherently mandate dualism however. It merely means that the mechanics of the brain are able to “output” various things. For example one output of my brain is to regulate how my heart beats. Such function is not “brain” in itself, but rather a product of brain function. Similarly it can be said that the brain outputs valence for you to experience. Surely you consider yourself to exist as a product of your brain, though not exactly “brain”? If it feels less dualistic for you to consider yourself as “brain”, and thus (as brain) you produce valence that you yourself experience, then okay we will go with that. I think most people prefer to instead say that they exist as products of brain. Regardless I see no inherent dualism in any of this.

        It’s only the softness of psychology that causes my opinion of it to be low, not the importance of the subject matter itself. We don’t speak of physics ever being a “soft science”, since it was part of the tide that invented science. In the past, however, it was no less soft than modern psychology. (And given that it currently tries to substitute “beauty” for “evidence”, in some areas physics remains soft today as well.) I doubt that psychology will remain soft forever though. This is to say that it should develop broad general theory regarding our function which is generally accepted.

        You say that we have a general understanding of how information processing happens in a biological neural network (and I take you to mean this quite roughly). So how does information processing occur in the second form of computer that’s produced by the first? Great question!

        According to my model, a functional conscious entity harbors three forms of input, one form of processor, and one form of non-processing output. The valence input is essentially the motivation which drives the system’s function. This would be pure value, and I even define it as “primal consciousness”. There is something it is like to have valence, even when not “functionally” conscious.

        Then next would be a pure information form of input which I call “senses”. Of course “smell” is considered a sense, though under my system the valence component must be stripped out and classified under the first heading. Similarly, only the part of toe pain which provides location is “sense”, though the pain itself is classified under “valence”.

        Next is “memory”, or “past consciousness which remains”. This effectively bonds the present entity with past entities, as well as provides all sorts of useful information and valences. (Come to think of it, technically memory could be segregated into the two other forms of input, though it does seem helpful to distinguish this on its own given that it concerns the past rather than present.)

        These three forms of input are (1) interpreted and (2) used to construct scenarios, (3) for the purpose of promoting instant valence. I call such processing “thought”, such as what you’re doing right now.

        Though I presume non-conscious function harbors countless forms of such output, I know of but one form of conscious output. This is “muscle operation”. Furthermore just as all elements of consciousness exist as output of non-conscious brain function, this conscious output also does not stand alone. Essentially a decision is made, and the non-conscious computer then takes care of working the associated muscles. When I decide to wiggle my own finger for example, you know better than I how the non-conscious computer makes this happen.
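
        In rough Python, the architecture might be sketched like this (a toy illustration only: the names and the trivial scenario handling are placeholders, not part of the model itself):

            # Toy sketch: three inputs (valence, senses, memory),
            # one processor ("thought"), and one non-processing output
            # (a muscle decision handed to the non-conscious computer).
            from dataclasses import dataclass, field

            @dataclass
            class ConsciousEntity:
                valence: float      # pure value; the motivation driving the system
                senses: dict        # pure information, valence stripped out
                memory: list = field(default_factory=list)  # past consciousness which remains

                def thought(self) -> str:
                    # (1) interpret inputs, (2) construct scenarios,
                    # (3) keep one, for the purpose of promoting valence
                    scenarios = [f"respond to {k}: {v}" for k, v in self.senses.items()]
                    decision = scenarios[0] if scenarios and self.valence != 0 else "do nothing"
                    self.memory.append(decision)  # bonds the present entity to past ones
                    return decision  # the non-conscious computer works the muscles

            entity = ConsciousEntity(valence=-0.8, senses={"toe": "pain location"})
            print(entity.thought())  # -> respond to toe: pain location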

        Regarding Stanislas Dehaene’s book on consciousness, I think we’ve privately established that he’s referring to how the non-conscious computer (as I call it) creates information. And apparently he doesn’t get into the creation of valences, though surely presumes that this occurs in a lower way that his models don’t explicitly address. That’s all fine with me, though not really my cup of tea.

        On the “consciousness” term itself, yes it has been, and today remains, a bit of a joke. But I see no reason to believe that it must always be that way. Effective definitions for personal human existence are needed — not for us masses, but rather for the institution of science itself. This could involve my ideas or others’, though where science goes you can bet the masses won’t be far behind. I realize that you’re bullish on science. Regarding our soft sciences however, am I more bullish than you? Will humanity never be able to effectively model its own nature? If it ever does, you can bet that the “consciousness” term will finally gain a generally useful definition.

  11. Hi Mike, very thought-provoking as usual. Perhaps we are overloading ourselves with too much personal experience, clouding our view of the ‘big picture’. Neuroscientists studying human consciousness are diving in at the deep end, starting at the top as it were.

    The evolutionary biology approach starts at the bottom and works its way up. I find the data compelling and easy to understand: extremely primitive (compared to us) little creatures have the ability to innovate when facing a problem; they also have the ability to copy behaviors that appear to be successful. Cultural behaviors can thus be adapted to a new situation. Fruit flies and stickleback fish are well-studied examples. In order to copy, some primitive sense of SELF, OTHER, and the ability to mirror is required. These are highly complex challenges and require the primordia of phenomenal consciousness – indeed, some in the field believe that all animals are aware of phenomena. For example, the mating dance of a 1mm C. elegans would require the awareness of an appropriate other.

    We humans are so impressed with our comparatively immense abilities of memory, language and culture, that we have intuitively assumed that those ‘simple’ little creatures in no way can be compared with us. I think that is a mistake – all multicellular creatures share a large number of genes. It seems likely that all animals have the intelligence necessary for awareness of self in the environment. ‘Naturalistic’ panpsychism does not define consciousness in a deflated manner. This better fits the historical data: life has existed for about 4,000 million years; human consciousness has only been around for 0.3 million years. We just happen to be the cherry on top in a very long history of evolution of consciousness.

    1. Thanks Liam!

      I could quibble with some of the details, but I definitely agree with the sentiment. There is a definite continuity between all life forms and the minds we have today. Antonio Damasio discusses the concept of biological value, which goes all the way back to the simplest unicellular organisms. The sense of self is rooted in that biological value, and what we call consciousness is a continuation of it.

      And Gerhard Roth points out that changes in behavior based on sensory input go back to unicellular organisms. The earliest sensoriums and motoriums predate what we call consciousness, which seems more about the interaction between the two.

      There was never a first conscious creature, just increasing capabilities that gradually added up until we get to an organism we might be tempted to apply the “conscious” label to, possibly something living in the Cambrian.

      And I agree we probably overload ourselves, or more accurately, psych ourselves out with overly inflated and mysterious views of how we work. There is a lot to learn, but probably not the deep metaphysical barrier to overcome that many erect in their minds.
