The problems with the Chinese room argument

In 1950, Alan Turing published a seminal paper on machine intelligence (which is available online).  Turing ponders whether machines can think.  However, he pretty much immediately abandons this initial question as hopelessly metaphysical and replaces it with another question that can be approached scientifically: can a machine ever convince us that it’s thinking?

Turing posits a test, a variation of something called the Imitation Game.  The idea is that people interact with a system through a chat interface.  (Teletypes in Turing’s day; chat windows in modern systems.)  If people can’t tell whether they are talking with a machine or another person, then that machine passes the test.

Turing doesn’t stipulate a time limit for the test or any qualifications for the people participating in the conversation, although in a throwaway remark he predicts that by the year 2000 there would be systems able to fool 30% of interrogators after five minutes of conversation, a standard many have fixated on.  This is a pretty weak version of the test, yet no system has convincingly passed it.

(There was a claim a few years ago that a chatbot had passed, but it turned out to depend on a clever framing of the person supposedly on the other end as a foreign teenager with a shaky grasp of English, which most people think invalidated the claim.)

We’re nowhere near being able to build a system that can pass a robust version of the test: say, at least an hour of conversation that fools at least 50% of a large sample of human participants.

I think Turing’s overall point concerns the philosophical problem of other minds.  We only ever have access to our own consciousness.  Although physical or systematic similarities may give us clues, we can ultimately only infer the existence of other minds from the behavior of the systems in question.  The Turing test is essentially a recognition of this fact.

The argument most commonly cited in opposition to the idea of the Turing test is a philosophical thought experiment put forth by John Searle in 1980: the Chinese room argument.  (The original paper is also available online).

Searle imagines himself sealed in a room with a slit through which questions written in Chinese are submitted on paper.  Searle, who does not know Chinese, has a set of instructions for taking the symbols he receives and, using pencil and paper, producing answers in Chinese.  He can follow the instructions to the letter and produce the answers, which he slides back out the slit.

Searle’s point is that the Chinese room may pass the Turing test.  It appears to understand Chinese questions and can respond to them.  But Searle himself doesn’t understand a word of Chinese, and, he argues, neither does anything else in the room.  The result appears to be an entity that can pass the Turing test, but which doesn’t have any real understanding.

The takeaway from this argument is supposed to be that Searle is doing the same thing a computer processor does, receiving and manipulating symbols, but with no understanding of what is happening.  Therefore the Turing test is not valid, and computationalism overall is wrong.

There are a number of common criticisms of this argument, which Searle responds to in the paper, one of which I’ll get to in a bit.  But what I consider the most damaging criticism is rarely discussed: that the scenario described is, if not impossible in principle, utterly infeasible.

We’re asked to suppose that Searle will do everything a computational system that can pass the Turing test would do.  But no one really imagines him doing that.  Generally we end up imagining some procedure with maybe a few dozen, or perhaps even a few hundred, steps.  It might take Searle a while to respond, but there’s nothing too out of bounds about it.

Except that we need to consider what a system that can pass a robust Turing test needs to be able to do.  A brain has billions of neurons that can spike dozens to hundreds of times per second, and communication throughout the brain tends to be recurrent and ongoing.  Which is to say that a brain receiving a question, parsing it, considering it, and responding to it will engage in at least hundreds of billions of events, that is, hundreds of billions of instructions.  A machine passing the Turing test may not do it exactly this way, but we should expect similar sophistication.

And Searle is going to do this by hand?  Let’s suppose that he’s particularly productive and can manually perform one instruction per second.  If he takes no bathroom, meal, or sleep breaks, he should have his first billion instructions performed in around 30 years.  Responding to any kind of reasonably complex question would take centuries, if not millennia.
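
For anyone who wants to check the arithmetic, here’s a minimal back-of-the-envelope sketch in Python (the instruction count and rate are the illustrative assumptions from above, not measurements):

    instructions = 100e9                  # "hundreds of billions" of instructions
    rate = 1.0                            # one instruction per second, performed by hand, nonstop
    seconds_per_year = 60 * 60 * 24 * 365

    print(1e9 / rate / seconds_per_year)           # ~31.7 years for the first billion
    print(instructions / rate / seconds_per_year)  # ~3,171 years for a single response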

Maybe, since Searle is a fairly complex system in his own right, we can provide higher-level instructions?  Doing so, we might be able to reduce the number of steps by a factor of 10 or maybe even 100.  But even with that move, the response will be decades, if not centuries, in coming.  And making this move increases the amount of human cognition involved, which I think compromises the intuition of the thought experiment.

We can make the thought experiment more practical by the expedient of giving Searle…a computer.  Even a mobile phone today operates at tens of thousands of MIPS, that is, tens of billions of instructions per second.  But of course, then we’re right back to where we started, and the intuitive appeal of the thought experiment is gone.

Okay, you might be thinking, but by introducing all this practicality, am I not failing to take this philosophical thought experiment seriously?  I’d argue that I am taking it seriously, more seriously in fact than its proponents.  But, in the spirit of philosophical argument, I’ll bracket those practicalities for a moment.

The other response to the argument I think remains strong is the first one Searle addresses in the paper, the systems reply.  The idea is that while Searle may not understand Chinese, the overall system of the room, including him and the instructions, does.  If the room can respond intelligently in Chinese, including to unplanned questions about the house and village in China where it grew up, which sports teams it was a fan of, which schools it went to, which restaurants it ate at, etc., then at some point we should consider that buried in that system is an entity that actually thinks it did grow up in China, or at least one that can conceptualize itself doing so.

Searle’s response (delivered with disdain, but then the whole paper has a polemical feel to it) is simply to posit that he memorizes all the instructions and performs them mentally.  With this modification of the scenario, Searle still doesn’t understand Chinese and, he argues, the systems reply is invalidated.

Okay, I know I said I would bracket the practicalities, but if the initial scenario was infeasible, this one is simply ridiculous enough that it should be self-refuting.  Searle’s going to memorize the hundreds of billions of instructions necessary to provide convincing answers?

But, bracketing that issue again, nothing has changed.  The system still understands Chinese even if the part of Searle following the instructions doesn’t.  If Searle is somehow superhuman enough to memorize and mentally follow the code of the Chinese system, then he’s arguably superhuman enough to hold another thinking entity in his head.

Another counter to Searle’s move here is to consider how I, as a native English speaker, understand English.  If someone were to sufficiently damage Wernicke’s area in my brain, it would destroy my ability to comprehend English (or any other language).  In other words, the rest of my brain doesn’t understand English any more than the non-instruction part of Searle understands Chinese.  It’s only with the whole system, with all the necessary functional components, that I can understand English.  What’s true for me is also true for the room, and for Searle’s memorized version of it.

Searle addresses a number of other responses, which I’m not going to get into, because this post is already too long, and I think the points above are sufficient to dismiss the argument.

If Searle had restricted himself to addressing the possibility of a simple system passing the weak version of the Turing test, pointing out that such a system would be missing the mental representations necessary for true understanding, and the utter inability of computer technology c. 1980 to hold those representations, he might have been on somewhat firmer ground.  But he pushes a much stronger thesis, that mental content in computation is impossible, even in principle.  How does he know this?  He feels it’s the obvious takeaway from the thought experiment.

Any meaning in the computer system, Searle argues, comes from human users and designers.  What he either doesn’t understand or can’t accept is that the same thing is true for a brain.  There’s nothing meaningful, in and of itself, in the firing of individual neurons, or even in subsystems like the amygdala or visual cortex.  The signals in these systems only get their meaning from evolution and from the relation of that content to the environment, which for a brain includes its body.

A computer system gets its meaning from its designers and environment, including its human users, but the principle is the same, particularly if we set that computer up as the control system of a robotic body.  Yes, human brains include representations of themselves, but so do most computer systems.  All you have to do is pull up Task Manager on a Windows system to see representations in the system about itself.
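
As a toy illustration of self-representation, here’s a minimal sketch of a program inspecting its own process, assuming the third-party psutil library is installed (the particular queries are just examples):

    import os
    import psutil  # third-party: pip install psutil

    me = psutil.Process(os.getpid())  # a handle to this very process
    print("name:   ", me.name())
    print("memory: ", me.memory_info().rss, "bytes resident")
    print("cpu:    ", me.cpu_percent(interval=0.1), "% over a 0.1 s sample")
    print("threads:", me.num_threads())

Nothing mystical is going on; the system simply has access to information about its own state, which is all the point requires.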

So I don’t think the Chinese room makes its case.  It attempts to demonstrate the infeasibility of computationalism with a contrived example that is itself far more obviously infeasible, and responds to one of the strongest criticisms against it by ramping that infeasibility up to absurd levels.  The best that might be said for it is that it clarifies the intuitions of some anti-computationalists.  The worst is that, by demonstrating the need to resort to such absurd counterexamples, it arguably strengthens what it attacks.

Unless of course I’m missing something?  Are there weaknesses in Turing’s argument or strengths in Searle’s that I’m missing?

80 thoughts on “The problems with the Chinese room argument”

  1. I’ve got no love for Searle’s Chinese Room. Intentionality is an important concept, but Searle doesn’t actually have his own theory of it, as far as I can see, just some negative remarks on other theories. But it might help to dispute the starting assumptions:

    We only ever have access to our own consciousness.

    That’s an overstatement. We have fuzzy fallible access to our own consciousness, and even more fuzzy access to other people’s consciousness.

    Although physical or systematic similarities may give us clues, we can ultimately only infer the existence of other minds by the behavior of the systems in question.

    Physical concepts are essential to our description of our own consciousness. “Happy” is a concept we learned at our mothers’ knee while smiling and being smiled upon. “Red” is a concept applicable to both subjective experience and objective fire trucks. And so on for other emotions and sensations. Other minds aren’t just an inference, they’re a presupposition of our self-description as conscious feeling beings. That presupposition can be questioned, but only at the cost of a radical restructuring of one’s self-description.

    1. “We have fuzzy fallible access to our own consciousness, and even more fuzzy access to other people’s consciousness.”

      I totally agree that our access to our own consciousness is fuzzy and unreliable. I actually think that’s the most important thing to understand in any exploration of the mind. A lot of the mysteries of consciousness amount to trying to understand the oversimplified picture presented by that access.

      On more fuzzy access to other people’s consciousness, I can see where you’re coming from, but I do think there’s a difference. My knowledge of someone else’s consciousness is wholly dependent on my sensory perceptions of their behavior and extrapolation from that and what access I do have to my own experience.

      “Other minds aren’t just an inference, they’re a presupposition of our self-description as conscious feeling beings.”

      We are social creatures and innately predisposed to seeing agency and empathizing with it to at least some degree. But that agency detection can be hijacked and misdirected. It’s why people worshiped rivers, volcanoes, and the sun for so long.

      1. “My knowledge of someone else’s consciousness is wholly dependent on my sensory perceptions of their behavior and extrapolation from that…”

        You also have what they report about their consciousness.

        One of the great things about literature (and songs) is that it shares the thoughts of others and demonstrates that others think as we do. We see ourselves in stories or characters, and this tells us how much our minds have in common.

        “It’s why people worshiped rivers, volcanoes, and the sun for so long.”

        Agency and the fact that those things were sources of power, resource, or fear. (As opposed to seeing agency in your dinner fork or trousers. 😀 )

        1. Technically their reports fall under behavior, but I agree language is a special case. It requires recursive metacognition to both express and understand, allowing us to, in essence, communicate our experience to each other, something no other species can do. Of course, when I communicate my experience to you, you interpret it in terms of your own experiences, which can sometimes cause confusion.

          Good point about power. That and variation in how it acts, which could be interpreted as volition. If it is volition, maybe whatever it is can be appeased, hence the worshiping.

          1. “Technically their reports fall under behavior,”

            The act of reporting does, but the content of that report contains information about their internal mental states.

            As you say, subjective bias applies, but it applies to self-reporting our own mental states, too.

            “If it is volition, maybe whatever it is can be appeased, hence the worshiping.”

            Make the gods happy and they will reward us (or at least not smite us)!

            But whoever came up with the idea of sacrificing virgins really needed their head examined.

          2. What’s bizarre is that human sacrifice actually appears to have been common in pre-literate agricultural societies. It seems like maybe a gradual escalation in the appeasing thing, until they get to the point of giving up the thing they hold most dear, their children, including virgin daughters.

            What’s also interesting is that human sacrifice generally doesn’t appear to last long once writing is introduced, indicating that something about record keeping undercuts it.

          3. 😀 You mean like the fact that sacrifice turns out to not even be correlated with Good Times, let alone causing them?

            Did you ever see (either version of) The Wicker Man (although, for your sake, if you’ve seen only one, I hope it wasn’t the Nick Cage one)?

      2. “but I do think there’s a difference. My knowledge of someone else’s consciousness is wholly dependent on my sensory perceptions of their behavior and extrapolation from that and what access I do have to my own experience.”

        Not just behavior, but also biology. But more important, our understanding of our own consciousness depends on memory. And interpretation can be tricky both with self and others. I see a lot of moderns casually following in the wrongheaded footsteps of Locke, Descartes, Hume and others of that period/intellectual climate. I’m trying to steer you away.

          1. I’m pretty familiar with the errors of Descartes, and the fact that Cartesian dualism permeates philosophy of mind discussion, even among people who’ve sworn off dualism. But I’m less familiar with any issues from Locke and Hume. How would you describe them?

          1. Locke and Hume are called “Empiricists” because they wanted to build our knowledge of the external world (or lack of knowledge, for Hume) on top of our knowledge of our own sensations. Behaviorists sometimes wanted to turn that around and construct our mental properties, to the extent that any such things were countenanced at all, on top of our objective behavior. I favor holism. All our knowledge, inside and out, is built together in a single inseparable process, in one gigantic long-running fell swoop.

            Philosophically, the Empiricist project and that variety of Behaviorism both seem unworkable. Translated into accounts of developmental psychology, they’d be laughable. Our knowledge of our own sensations as sensations and our actions as our intentions comes only after an enormous amount of cognitive skills have already been achieved. And when it comes, the knowledge of other people as feelers of feelings comes at the same time. Research by Alison Gopnik is a good place to turn for this stuff. She attributes these developmental breakthroughs to children’s achievement of “the concept of a person.” A person in this sense meaning a being who is a physical entity with an inner life.

          2. Thanks for the description and explanation!

            I have to admit I’m mostly in the empiricist camp. Although nothing is absolute. Our senses (both inner and outer) are fallible, and can’t be accepted unquestioningly. They provide data, but data that has to be corroborated, repeatable, or otherwise verifiable, and fit into rational theories. In the end, our only real measure of success is whether those resulting theories increase the accuracy of our predictions of future sensations.

            I’ve seen Gopnik’s TED talk, but it’s been a while. But definitely, we tend to underestimate just how much knowledge is contained in the “common sense” we acquire in the first few years of life.

  2. Thanks for this post. I’ve never understood how anybody could take the Chinese room argument seriously. You explained perfectly well why it’s a bad thought experiment.

  3. First, I’m entirely with you here. I think the systems response is correct and Searle’s response to it doesn’t work, for the reasons you describe.

    Second, I don’t think your argument for the infeasibility of the original setup is philosophically valid, as in fact Searle is putting himself in the place of the “computer” (although, as many people remark, in actuality he is the CPU) and you have to give him the benefit of computer speed. That said, another argument for the actual infeasibility that doesn’t concern speed is that the original setup as written constitutes a Library of Babel. In order for this system to pass the Turing test, the notebooks would have to contain a complete copy of every possible conversation. So it would go like this:
    Q: What did you have for breakfast?
    A: look up “whatdidyouhaveforbreakfast” —> “Eggs and toast”
    Q: Did you make them yourself?
    A: look up “whatdidyouhaveforbreakfastDidyoumakethemyourself“ —> “yes”
    Q:What kind of eggs?
    A: look up “whatdidyouhaveforbreakfastDidyoumakethemyourselfWhatkindofeggs” —> “scrambled”
    Q: Do you like dogs?
    A: look up “whatdidyouhaveforbreakfastDidyoumakethemyourselfWhatkindofeggsDoyoulikedogs” —> “Yes, I have a dachshund”
    Q: Did he eat the same breakfast?
    A: look up “whatdidyouhaveforbreakfastDidyoumakethemyourselfWhatkindofeggsDoyoulikedogsDidheeatthesamebreakfast” —> “No, he doesn’t like eggs”

    I forget what the math is, but to have any reasonable Turing test capability you quickly run out of resources, as in, not enough particles in the universe for the notebooks.
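
    Here’s a rough sketch of that math with toy numbers (the vocabulary size and conversation length are made-up assumptions, not estimates of real language):

        vocab = 10_000.0          # toy vocabulary size
        words = 10 * 5            # a five-question conversation, ten words per question
        entries = vocab ** words  # ~1e200 distinct lookup keys for the notebooks
        atoms_in_universe = 1e80  # a common rough estimate

        print(f"~{entries:.0e} entries vs ~{atoms_in_universe:.0e} atoms")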

    So to actually make a system that can respond correctly in real time you have to take shortcuts. You have to have working memory, and you have to have a system of concepts that can be referred to, and you will have to generate representations of these concepts in memory. And these are the pieces that make up Consciousness.

    *

    1. The reason I’m not willing to just grant Searle the speed of a computer is that the whole thought experiment is based on intuition, and speed seems like a crucial component. It’s a lot easier to see a computer mind as obviously false if it’s equivalent to just following written procedures, but being equivalent to written procedures that take centuries to complete changes the intuition, at least it does for me.

      But I totally agree on all the rest. At some point, the amount of processing necessary to fake it becomes exponentially more than just taking the same shortcuts that evolution took, which as you note, results in consciousness, or at least something equivalent to it.

      1. “The reason I’m not willing to just grant Searle the speed of a computer is that the whole thought experiment is based on intuition, and speed seems like a crucial component.”

        I’m afraid I agree with James here; the speed at which Searle operates can’t be an objection (nor do I think it’s crucial to the thought experiment).

        We can simply grant Searle Flash-like powers of speed!

        As James suggests, a greater problem is how lookup works in terms of virtual real-life experience. The room obviously didn’t have anything for breakfast, nor does it own a dog.

        That said, if those experiences were fed into the library appropriately, James’ examples might overstate the case:

        Q: What did you have for breakfast [this morning]?

        Easily looked up and answered. In fact, the full answer is: I made and ate two eggs, sunnyside up, plus two strips of apple-smoked bacon I fried and a cup of Earl Gray tea I brewed.

        Q: Did you make them yourself?

        If we assume questions can’t be related, the room could, at this point, ask: “Make what myself?” Then the question is rephrased as a single request:

        Q: Did you make eggs for yourself for breakfast [this morning]?

        Easily looked up and answered. (Part of the original answer!)

        Q: What kind of eggs [did you have for breakfast this morning]?

        Likewise. You get the idea.

        That said, the programming requirements are still utterly ridiculous. The one we carry around in our head took millions of years to develop and takes decades to train to any usefulness.

  4. “All you have to do is pull up Task Manager on a Windows system to see representations in the system about itself.”

    The brain also has representations about the external world, the relationship of the body to the external world, and how to take actions in the external world, which requires understanding the context of those actions. Task Manager is purely internally focused. So I don’t see that analogy working. Actually one of the things that makes the brain really different is how little insight it has into its own inner workings. We don’t have a Task Manager that tells us visual unit 5 is consuming 10% of our CPU or that our self-obsessed hubris unit is a memory hog. A lot of our internals are hidden from us. Our representations of ourselves are mostly representations of our body (which is treated almost as if it were external to the “I”) or social representations of ourselves (“I’m a lot smarter, better looking, earn more money, etc than you”). Emulating that would be part of the task of creating a human-like robot.

    I think Searle’s point is that a person who understood Chinese would be able to respond in a contextual way, not simply a grammatically correct way, with either words or actions, as well as ask questions to clarify the meaning of what was presented to them. That can’t be done by a giant lookup table or Google Translate. It would require representations of the world, the relationship of the body to the world, and the context of the interchange. This is a much bigger computing task. Maybe not impossible, but for me it would still provide little evidence that the robot had consciousness like I do without some basic theory about why consciousness feels like it does to me. I can be persuaded that you or probably my cat has something like a feeling of consciousness because they are biological with body plans generally similar (eyes, ears, mouth, nose, legs, etc.) to mine.

    1. [
      Q: What did you have for breakfast?
      A: lookup “whatdidyouhaveforbreakfast” —> “eggs”
      Q: Did he like them?
      A: lookup “whatdidyouhaveforbreakfastDidhelikethem” —> “Did who like them?”
      Q: The person you mentioned.
      A: lookup “whatdidyouhaveforbreakfastDidhelikethemThepersonyoumentioned” —> “I don’t think I mentioned any person …”
      Q: What spigot?
      A: lookup “whatdidyouhaveforbreakfastDidhelikethemThepersonyoumentionedWhatspigot” —> “Are you feeling ok?”

      *
      ]

      1. [because I can’t help myself]

        Q: What did you have for breakfast?
        A: lookup “whatdidyouhaveforbreakfast” —> “eggs”
        Q: Sure, but what about chickens?
        A: lookup “whatdidyouhaveforbreakfastSurebutwhataboutchickens” —> “What do you mean?”
        Q: Hume is the one you should ask about meaning.
        A: lookup “whatdidyouhaveforbreakfastSurebutwhataboutchickensHumeistheoneyoushouldaskaboutmeaning“ —> “Hey, are you a computer? Is this a Turing test?”

        *

    2. My point about the Task Manager was just that there’s no fundamental barrier to a system having information on itself. As you noted, the brain’s information in that regard is limited and not reliable, yet people seem to insist that we must explain what it tells us is there, rather than why it’s telling us that.

      I think everyone agrees that to pass a robust Turing test, a system has to have representations of the world and itself. But as James of Seattle noted, eventually this becomes easier to do than just continuing to expand the lookup tables. Shortcuts become required. Which is what evolution did. All of those shortcuts added together lead to what we call cognition, and some of it, consciousness.

      As I noted in the post, if Searle had simply stuck to that point, I’d have a different attitude toward his argument. But he takes it to mean that no computer system can ever have that intentionality, not even in principle. Maybe that’s true, but if so, I can’t see that he demonstrated it.

      He is open to a machine having intentionality if it has the “same causal powers” as the brain, but he doesn’t delineate what those causal powers might be, or why a computer can’t implement them. (IIT takes a shot at this, but even it seems vague on exactly what’s supposed to be happening.)

  5. I agree with the idea, to the extent the Giant File Room is possible, that it gets any meaning it has at the system level. More specifically, its meaning is in its design and maintenance — the updating of its file system.

    If the GFR “knows” something, it’s because someone who knew that thing put it there. The analogy would be to a child growing up learning things. On a day-by-day basis, there is new information that must be stored and indexed. (If it is to answer questions like, “What did you have for breakfast this morning?”)

    Searle can no more be expected to know something than we can expect our CPU to know about our latest blog post.

    I also agree the GFR is effectively impossible to construct, although it doesn’t require a Library of Babel. What it requires is what one person could know — and remember! — in a lifetime to some point.

    This is, by the way, something not often mentioned about the GFR: If the idea is replicating a human, then surely the Room must be able to answer “I don’t know” or “I don’t remember.” This implies there is unavailable information and that old information might be discarded if unused.

    I disagree speed matters — that’s not the point of the Room. But I do agree Searle may have tried to take the analogy too far.

    It seems obvious our brains do more than just look stuff up and report it. As pointed out already, we maintain a model of reality plus an element of imagination and speculation, and all of that goes far beyond what just looking stuff up can do.

    Simply, the GFR lacks any dynamic process or modeling. There’s a reason computer programs themselves do more than just look stuff up.

    1. Wyrd, you might get a kick out of this. When I googled “giant file room”, your post on the Chinese Room was the first hit.

      Good point on the “I don’t know” response. Although it has to be for things a human plausibly wouldn’t know, and if used too much, would raise suspicion.

      If speed doesn’t matter, then it doesn’t seem like the size of the room could either. So if we grant Searle Flash-like superpowers and infinite space (and materials), then yes, he could pass the Turing test with a relatively simple mechanism which wouldn’t understand Chinese or anything it was talking about.

      Although with all that speed and in all that vastness, we couldn’t rule out that there was an alternate architecture in there for a conscious mind. Maybe evolution would have built our brain like that, if it had had all that speed and space to work with. A higher dimensional alien mind might actually work that way.

      On the other hand, maybe that’s why Deep Thought in Hitchhiker’s needed 7.5 million years to add up to 42. 🙂

      1. “When I googled ‘giant file room’, your post on the Chinese Room was the first hit.”

        😀 I’ve discovered that Google is better at searching for my posts than the WordPress search function is. (Wikipedia has the same issue — its native search function isn’t as good as Google.)

        “If speed doesn’t matter, then it doesn’t seem like the size of the room could either.”

        D’accord. (One can assume a really tiny card catalog and itty bitty books. 🙂 )

        But as I said above, I think the juxtaposition is between mere lookup and a more dynamical system. I’m not sure even a no-physical-limits version of the GFR would pass a Rich Turing Test.

        Although if the lookup was reduced to the level where it made the system dynamic (as happens in a real computer), then I suppose the GFR could seem dynamical, too.

  6. I guess I’m not exactly understanding the problem here. Turing said that some day Turing machines would somewhat fool people into thinking that they’re conscious. Searle has said the same. Is the problem that some people believe that if such a machine ever does fool people into thinking that it’s conscious, that it thus will be conscious? And in spite of Turing, Searle, and common sense?

    My previous understanding was that computationalists believe that there are no mechanisms in the brain associated with conscious function — it’s merely a lookup table. Therefore any conscious experience will simply be information that might just as well be printed out on paper. Thus with the proper lookup table, John Searle (or any mechanism) theoretically could take the input information of a brain, and process it so that the system could experience consciousness as we know it. But is it worse than that? Do computationalists believe that fooling people is sufficient to create some kind of ontological reality? What am I missing here Mike?

    1. “Is the problem that some people believe that if such a machine ever does fool people into thinking that it’s conscious, that it thus will be conscious?”

      I’m surprised to see you ask this question Eric since you’re the one usually arguing that we can never know truth. The question to ask yourself is, how do we ever know another system is conscious? Epistemically, what separates the systems that can convince us of their consciousness from the ones that truly are conscious? Turing’s point is that there is no scientifically identifiable difference.

      “My previous understanding was that computationalists believe that there are no mechanisms in the brain associated with conscious function — it’s merely a lookup table.”

      Um, no, that’s not computationalism. Computationalism is simply the view that the processing in the nervous system is a type of computation. (There are various camps regarding exactly what type of computation.) There are numerous theories of consciousness compatible with that view, none of which I know of that posit only a lookup table. Most involve predictive representations, models, of one type or another.

      1. “Turing’s point is that there is no scientifically identifiable difference.”

        That may or may not be Turing’s point. But the test is a rigged test because it omits most of the features we actually use to determine if another entity is conscious. This includes, of course, all the physical clues and non-verbal gut senses. A lot of this goes to the argument in Blink about there being some sense that something is off, like an art expert sensing that an incredibly good forgery is just a forgery.

        1. Definitely, the test is designed to isolate conversational ability from all the other indicators.

          Actually, every test is going to be rigged in some manner. People have a hard time not seeing cute cuddly robots as having feelings, even though the robots give no good reason to suppose that from their behavior.

          And I think about the Terri Schiavo case, where her family was unable to let go of the idea that there was still someone there because of all the seeming physical cues. A postmortem examination of her brain showed widespread destruction and left no doubt that she was in a permanent vegetative state.

          All of this just feeds into the fact that it’s inescapably a subjective judgment.

      2. Mike,
        I was referring to “believing” rather than “knowing”.

        On Turing’s point that there is no scientifically identifiable difference between something conscious and not, I’m not so sure about that. Scientists develop hard earned opinions about things, and it seems to me that they should progressively have stronger opinions about consciousness as our soft sciences progressively harden up. As you know, I’d like to help it do so.

        It’s good to hear that I had computationalism wrong. From your provided definition I’m squarely in your camp. But then I perceive John Searle to be with us as well. Or at least that’s what I thought when I read this:

        Click to access Searle,%20Is%20the%20Brain’s%20Mind%20a%20Computer%20Program.pdf

        It seems to me that he thinks “that the processing in the nervous system is a type of computation”. It’s just that what we commonly consider “conscious”, is probably more than information alone, or a computer program that could thus work on any machine that runs it — even a guy with a lookup table and pencil and paper to write with.

        1. Eric,
          “On Turing’s point that there is no scientifically identifiable difference between something conscious and not,”

          His actual position is that ultimately the only evidence we have is the way the system behaves, including how it communicates. Even when we refer to structural similarities, those are similarities to systems that have convinced us of their consciousness. I’d be very interested if you can think of criteria that aren’t ultimately related to behavior.

          On Searle, I haven’t read that article. (I feel I’ve read enough of his stuff already.) But just going off the abstract, I can’t see him in the computation camp, and everything I’ve seen from him puts him firmly outside of it.

          The argument that, yes there’s computation going on, but just not for the consciousness part, is a pretty common one, but I think it’s special pleading. At least until someone can, with at least a reasonable degree of clarity, identify what that other thing might be beyond vague “causal power” remarks. (Many people take IIT to provide that answer, but just because it has mathematics doesn’t mean it’s not vague and obfuscated.)

      3. “Computationalism is simply the view that the processing in the nervous system is a type of computation.”

        I’ve been under the impression computationalism is the view that a computational (i.e. numeric) [A] simulation of a physical brain, or [B] actual mind algorithm (if it exists), or [C] functional representation of the mind or brain, will result in consciousness (or something that resembles it closely enough).

        I’ve never thought the view said as much about what actually happens in the brain so much as the ability to compute it.

        It sounds like you don’t see it that way?

        1. Well, in my mind if we can compute it, to the extent that doing so reproduces all the significant effects of the original, then the original is functionally doing that computation.

          But I do accept that there’s always an element of interpretation on whether a particular physical system is doing a particular computation, a proposition I know you don’t agree with.

          1. I’m fine if people want to say a physical system “computes” so long as they keep straight what such a broad definition includes and don’t confuse it with the standard definition of computation.

            In that context, let me rephrase my question: Is computationalism mainly the proposition that conventional computing can simulate consciousness?

          2. I would say it includes the proposition that consciousness can be implemented with conventional computing, at least in principle. (As I’ve noted before, with conventional designs, performance would likely be a serious limiting factor.)

          3. You say it “includes” the proposition. Does that mean you don’t see it as the central tenet of computationalism? What do you see as its central tenet?

          4. I’m pretty sure we agree that CNS computation is nothing like conventional computation, so are you saying that, to you, Computationalism is nothing more than a view that brains are doing some kind of broadly defined computation?

            How do you connect that kind of computing with what conventional computers do? What makes Computationalism computationalism?

          5. You seem to like the phrase “nothing like”, but if it were true here, I don’t think we’d use the word “computation” in both cases.

            I see the summation of excitatory and inhibitory signals that neurons engage in, and threshold firing, as logical evaluation and computation, and the neuromodulation of synapses as storage modification. The brain receives signals from sensory organs at a very consistent level of energy, selectively and recurrently processes signals, and produces output signals at the same level of energy. It also sends broadcasts in the form of hormones. These signals lead to mechanical force being exerted, but they don’t supply the energy for those actions. (The energy actually comes from intramuscular ATP.)

            When I see all this, I see an information processing system, a computational one. I don’t see a Turing machine; it’s not a general purpose programmable computer. But I can’t see any reason why, in principle, its information processing functionality couldn’t be performed by one.
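
            To illustrate what I mean by summation and threshold firing acting as logical evaluation, here’s a toy sketch (the weights and threshold are made up; real neurons are vastly messier):

                def fires(inputs, weights, threshold=1.0):
                    # excitatory connections get positive weights, inhibitory ones negative
                    total = sum(i * w for i, w in zip(inputs, weights))
                    return total >= threshold

                print(fires([1, 1, 1], [0.6, 0.6, -0.4]))  # 0.8 < 1.0  -> False
                print(fires([1, 1, 0], [0.6, 0.6, -0.4]))  # 1.2 >= 1.0 -> True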

          6. “You seem to like the phrase ‘nothing like’,…”

            I like the phrase? I think it fits, but you can throw an “almost” in front of it, if that helps.

            “I don’t think we’d use the word “computation” in both cases.”

            Well, I don’t think we should. I think it’s a category error. 😛

            “…as logical evaluation and computation…”

            You’re using the broad sense of the word “computation” here. I think we agree neurons are analogous to logic gates? As I mentioned recently, logic gates are not computation in the conventional sense.

            I’m fine with the broad sense if you want to use it. My point is that you aren’t connecting neurons with conventional computers here. What neurons do is ‘nothing like’ conventional computation.

            I think, too, you’re reading a great deal into the ideas of “memory” and “signal” — brains and computers do have both, but the commonality pretty much ends with the basic concept.

            I stand by ‘nothing like’ but I take it your answer to my question (What is Computationalism to you?) is that you see the brain as being sufficiently computer-like that simulating a brain in a conventional computer is, on some level, almost a no-brainer.

            Fair statement?

          7. “No-brainer” (pun intended?) implies far more certitude than I have with it. My actual position is that, based on what we currently know about the brain, there don’t appear to be any obstacles, in principle, to implementing a mind in a conventional computer system, provided enough performance and capacity. In practice it may be another story.

            What I will say is a “no-brainer” is that minds exist in nature, and unless something magical is discovered about them, eventually engineered minds will exist.

          8. I quite agree and have said so many times. But there’s a huge difference between engineering a similar system (e.g. birds … airplanes) and the claim that a numerical simulation accomplishes the same things (e.g. birds … MS Flight Sim). Somehow I can never get you to acknowledge that difference.

            We can cheerfully disagree on whether that will work — it could go either way — but disagreeing that those are completely different systems … that I don’t get at all. To me that’s objective reality.

            But to take this back to my original comment, the question I originally asked about how you see Computationalism, why that’s important to me is that in the many years we’ve been debating this, I’ve been using “Computationalism” as short-hand for “the proposition that conventional computers can compute mind (despite that brains really are nothing like computers).”

            Your answer to Eric, that it’s “simply the view that the processing in the nervous system is a type of computation” suggests a possible misunderstanding between us all that time, and I’d like to resolve it.

      4. Mike,
        It’s good to hear that like me, Turing thought that consciousness could be evaluated scientifically. And yes, how something behaves should be our only evidence (which could include all manner of invasive circumstances, but still). I personally suspect that nothing could behave like a conscious human in general without also being conscious (as I define the term). This is to say that philosophical zombies cannot exist.

        On Searle, there seems to be a wide misconception that he doesn’t think brains compute. In fact several posts ago one of our friends here even provided me with the article that I just shared, though from the mistaken belief that Searle supported that position. When I noted that the article actually illustrates that Searle takes the position that you and I do, or that brains do function as computers, I presume that he realized his mistake since he didn’t object.

        I don’t consider Searle to say that there is computation in the brain except for the consciousness part. It’s more that computation should animate whatever physics is required for consciousness.

        The true difference between Searle and a great many others, seems to be that he doesn’t see consciousness as “software” alone, or something which does things in machine independent ways. Instead he and I believe that there should be physics based mechanisms in the brain associated with conscious function, and thus only a machine with such mechanisms as well should potentially be conscious. Indeed, I don’t know of anything that exists without physics based mechanisms — even mathematics. Without people who know mathematics, math shouldn’t exist. Note that a math book without people who understand it, will simply be paper with meaningless symbols on it. This is to say that the physics of mathematics exists by means of people who grasp this language. It’s the same for English.

        One difficulty with the CR thought experiment is that the concept of understanding a language can be quite vague. What exactly does it mean to consciously grasp the meaning of symbols? That’s where my own version of this thought experiment steps in with a different approach.

        I presume that we all know what a whacked thumb feels like — it’s horrible. If such an experience is entirely informational, or exists without any associated physics, then whatever information thus goes to my brain when my thumb gets whacked, could theoretically instead exist as paper inscribed symbols. So if this input were properly processed into new symbol inscribed sheets of paper, something should thus experience what I do when my thumb gets whacked! If people are fine with this implication of their beliefs about conscious function, then I do grant them consistency. It seems to me that in the past you have accepted Searle’s conclusion, though with tremendous protest.

        It could be that we’ve become too invested in our positions to grasp where they fail. In the end the human does not consider things objectively, but is rather a self interested product of its circumstances.

        1. Eric,
          “I don’t consider Searle to say that there is computation in the brain except for the consciousness part. It’s more that computation should animate whatever physics is required for consciousness.”

          I’m having trouble seeing the distinction between “computation in the brain except for the consciousness part” and “computation should animate whatever physics is required.” I suppose the latter adds a causal connection which makes it a slightly narrower proposition than the former, but not in any way that changes my argument.

          ” Instead he and I believe that there should be physics based mechanisms in the brain associated with conscious function, and thus only a machine with such mechanisms as well should potentially be conscious.”

          Right, and those physics based mechanisms would be, what? I’m totally open to learning that they exist, but what are they? If you say “biology”, “organic chemistry”, or something along those lines, then what in particular about them is necessary for consciousness?

          “It seems to me that in the past you have accepted Searle’s conclusion, though with tremendous protest.”

          As I noted in the post, there is a weaker version of his proposition I see as plausible, but that’s not the one he argues for. Not sure what I might have said to give you the impression I accept his full-on conclusion. I’ve never seen it as anything but rhetoric for human or biological exceptionalism.

          “It could be that we’ve become too invested in our positions to grasp where they fail.”

          That’s always possible. Psychology has pretty much established that our ability to recognize that in ourselves is very limited. The only solution seems to be discussion and challenging each other to clarify and defend our positions.

      5. Mike,
        I’m not entirely sure what gives you the impression that Searle is a human or biological exceptionalist. Was that a theme of his seminal 1980 paper itself? If so then apparently he tried to make up for it later, or at least in the 1990 Scientific American article that I provided above. For example one such admission was “And, for all we know, it might be possible to produce a thinking machine out of different materials altogether-say, out of silicon chips or vacuum tubes.” Apparently he’s decided that the brain does essentially function as a computer, and he doesn’t presume that biology is required for such function.

        On the suggestion that you’ve accepted Searle’s conclusion, apparently I said this incorrectly. Maybe “Agree with its premise” works better? Regardless I found this statement from you over at Wyrd’s, and it should help get my meaning across:

        “So there’s no reason in principle [sentience] couldn’t be incorporated into the procedures of a Chinese room, or any technological system. In other words, there’s no reason in principle that the room couldn’t be angry, fearful, or happy.”

        https://logosconcarne.com/2019/05/20/the-giant-file-room/#comment-29083

        Yes, that will be the case if what we feel exist by means of information processing alone. Regarding a contrary mechanism based approach however, I don’t know Searle’s work well enough to know if he has an associated model. I do have such a model however.

        Just as a standard computer may animate the function of a computer screen, it’s possible that the computation of a brain may animate mechanisms which produce affective states in us such as pain, beauty, itchiness, and so on. And whether through pure computation, or computation enhanced physics, I consider this sort of thing to exist as fuel from which to drive the conscious form of computer.

        I can’t tell you what specific physics might cause affective states, and this is because I do not know. It’s the famous “hard problem of consciousness”, of course. And given that I don’t know, you surely wonder why I feel so strongly that abstract symbol manipulation should not be sufficient.

        This is because “pain”, for example, should exist as a causally produced product of reality rather than as an abstraction of reality. Abstract things (like the words that you’re now reading) can exist generically, and specifically given that they’re abstractions. This is to say that they do not exist except by means of conscious convention. Therefore symbols that are processed into other symbols should not in themselves produce anything — that is, unless used to implement physics based output mechanisms. One way to refute me here would be to identify something that computer processing does without output mechanisms (beyond externalities such as heat or entropy).

        If it turns out that I’m wrong about this however, and thus it’s possible for information processing alone to cause affective states — well okay, I’m far from perfect. But even in that case I suspect that my dual computers brain architecture would remain sensible.

        The only solution seems to be discussion and challenging each other to clarify and defend our positions.

        There’s no question that we here are able to sharpen each other up at the margins. Extreme conversions however remain extremely rare, and surely given our past investments. I certainly take my hat off to those with such apparent objectivity.

        1. Eric,
          My understanding of Searle’s positions:
          1. The mental can’t be implemented with computation
          2. It requires causal powers that currently only brains possess.
          3. He is open to the possibility that someday a machine might have 2.
          4. He has no theory about exactly what 2 might be. (Or none that I’ve ever read.)

          He calls this philosophy “biological naturalism”. But I see it as biological exceptionalism, particularly since he makes no effort (that I’ve discerned) to solve 4.

          On your concern about abstractions, remember any implementation of such an abstraction is 100% physical 100% of the time. The abstract part is just a description of the system, a blueprint, used to build a physical implementation. Any actual running software is, again, 100% physical 100% of the time, even if implemented on a system flexible enough to be configured for many different types of software.

          So when asking whether a system can implement something like pain, the question isn’t whether an abstract anything does so, but whether the resulting physical system can do so. Or perhaps more salient, did the abstraction used capture what was necessary from the original system when implemented in the new system?

          1. Saw my name, saw your reply, wanted to once again point out you are ignoring the differences between physical and numeric systems, which is the old debate, but based on our conversation above, I finally see that we’ve been arguing different propositions.

            All these years, I’ve wondered why you seem so blind to those differences, but now I realize it’s because you really do see brain function as a form of (conventional digital) computation.

            And thus emulating it with a computer isn’t the main trick — the main trick is figuring out the computation of the brain.

            Since I see the brain as “nothing like” a (conventional digital) computer, to me the main computationalism trick is figuring out a good enough numerical simulation that it fully emulates the brain.

            Quite different, and mutually exclusive, propositions.

            My arguments focus on the challenges of the simulation, but those are meaningless in the face of a belief that’s not really where the challenges lie. You have what seems to amount to a faith in the basic nature of the brain, and it’s impossible to argue against someone’s faith. (And it fully explains your lack of skepticism regarding brain uploading. So much is clear now…)

          2. Wyrd,
            I’ve stated many times that I do see the brain as a computational system, but not a digital one. I just don’t see the analog vs digital divide as the devastating issue you do, particularly since we have plenty of examples of it being bridged in technology. It is fair to say that, in the case of something like uploading, I see it as one computational system emulating another, although for straight AI, it would be more like similar functionality implemented in a different architecture.

            I’ve also given you the reasons many times why I think the computational view is valid. I take the characterization of that view as a faith position to be a deliberate insult, particularly since I have laid out those reasons, and you know I strive to take nothing on faith. You lamented in your Causal post about people seeming predisposed to disagreement rather than actually engaging in what others are saying. On this issue, I see a lot of that predisposition from you.

          3. “I do see the brain as a computational system, but not a digital one.”

            But if not digital computation, then how much like digital computation can it be?

            In your reply to me above you repeated a point you’ve made before:

            “I see an information processing system, a computational one. I don’t see a Turing machine;”

            Me either. Here’s my problem: Any conventional digital computation is a Turing Machine. If we agree (and we do) the brain isn’t a TM, then saying the brain “does computation” doesn’t mean much since non-TM computation obviously can’t be captured by a TM.

            So there seems a contradiction in the views: [A] The brain “computes” (but not conventionally). [B] Therefore a conventional computer shouldn’t have an issue doing the same non-conventional “computing.”

            In that reply above you also said, “I can’t see any reason why…” You’ve used it before, and it’s an expression of faith. In general, the idea that a conventional digital computation can emulate a non-conventional non-digital “computation” despite no supporting facts is an act of faith. (Can you point to any natural system that can be fully emulated by a digital computer?)

            I have pointed out to you more than once that, for such an avowed skeptic, I find your lack of skepticism in computationalism a bit surprising. If you don’t like “faith” how about “bias” strongly favoring the idea in lack of evidential support? It’s something you believe in. There’s no insult in that.

            “You lamented in your Causal post about people seeming predisposed to disagreement rather than actually engaging in what others are saying. On this issue, I see a lot of that predisposition from you.”

            That would be the post where you disagreed with my description of five physical systems on the basis there’s no such thing as a top-level description of a system?

            But you wouldn’t answer my questions: “What’s above the description of tin cans as a phone? Or a sound system or light circuit or car engine or computer? How would you describe those physical systems?”

            You seemed to be searching for a way to attack my post, so it really feels like you may have put the shoe on the wrong foot here, amigo.

      6. Mike,
        I think I’m getting a better sense of your issues with Searle. And indeed, at face value your provided account does seem pretty worrisome. But in the end he might also be saying some reasonable things. Or if not let me at least provide an interpretation that I consider reasonable. Let me know if you see any problems regardless.

        1. The mental can’t be implemented with computation

        Yes, initially that sounds pretty bad. But if “implemented” is taken as output, and “computation” is taken as computer processing, then to me this seems much better. Output mechanisms should be required in order to implement anything. Here we could reasonably say that there is no computer processing that implements anything (beyond externalities such as heat or entropy), even in human brains. Thus there wouldn’t be anything “exceptional” about our brains, but rather what they “do” (as in produce consciousness/ mind) may require mechanisms.

        What does any computer do by means of processing alone? If you say “produce consciousness”, well I guess that’s possible, so okay, but how could you know there weren’t output mechanisms actually doing it rather than the processing alone? Or what would be another example of something which is done by means of computer processing alone? This is where I think that you might have gotten mixed up with a less than natural idea. But I can also see how a statement like “The mental can’t be implemented with computation” could push a person this way.

        “2. It requires causal powers that currently only brains possess.”

        To me this one doesn’t seem too bad, that is since “causal powers” may readily be taken as “output mechanisms”. Still I do find the statement too ontological. I’m pretty sure that consciousness exists by means of causal powers in my brain, as well as many or most other central organism processors. Beyond that however it may be that something else exists in nature which experiences affect/ consciousness, so I’d rather Searle not speculate about that.

        “3. He is open to the possibility that someday a machine might have 2.”

        Yes, and he validly calls the human “a machine” as well, thus rendering the proposition true by definition right now. To take this beyond a tautology however, let’s replace “machine” with “technological system”. So just as evolution used physical dynamics to add affective states to some kinds of life in ways that promote survival, conceptually this could be added to a technological system as well. But I personally would leave the human and its abilities entirely out of the deductive statement itself. I see no strong similarities between the human and evolution.

        “4. He has no theory about exactly what 2 might be. (Or none that I’ve ever read.)”

        Right. Apparently like me he considers himself “an architect” rather than “an engineer”. And it seems to me that good architects are desperately needed today. You’ve told me before that everyone and their brother has a consciousness theory. Indeed, idiotic consciousness notions seem quite common in academia today. That he has no theory about exactly what causal powers create consciousness suggests to me that he may be several steps beyond the standard idiocy.

        Note that if we had a community of respected professionals with their own generally accepted principles of metaphysics, epistemology, and axiology from which to better found the institution of science itself, some (or much) of this consciousness idiocy should be eliminated. Furthermore if our most fundamental behavioral science (psychology) were to develop some reasonably verified general theories regarding our nature, a conceptual picture of how consciousness functions should also emerge. Such a sketch ought to help all manner of brain exploration.

        Conversely today we essentially ask neuroscientists to do the entire job alone. But neuroscience simply is not set up to improve the structure of science itself (or what philosophers have failed to do), or to understand human behavior (or what psychologists have failed to do). So let’s not berate philosophers and psychologists for sticking to philosophy and psychology. To me they have no business proposing specific causal powers associated with creating consciousness.

        So Searle’s got a position called “biological naturalism”? Wow, bad move! He might as well have just named it “biological exceptionalism” himself. 🙂

        We seem perfectly aligned regarding abstractions. Implementation of an abstraction should be 100% physical 100% of the time. And right, software doesn’t run by means of abstractions but rather by means of associated physics. One pet peeve that I have is how people sometimes refer to the physical and the abstract like they’re two opposing kinds of stuff. I don’t exactly take them to mean dualism here, but rather that they’re not quite stating things as effectively as they might. If everything is physical and thus causal, then abstractions must exist physically as well. Note that abstractions exist by means of consciousness, and consciousness exists causally, so there’s the chain.

        Though I think that we’re aligned in our naturalism and as most things in general, it seems to me that you’ve also been pushed into a funky position regarding Searle. We’re all products of our circumstances in the end. If I were pushed into some funky beliefs then I’d hope for a friend to help me out. But from that position, would I instead interpret their proposed aid deviously? Unfortunately I might. As I said earlier, I take my hat off for anyone with this sort of apparent objectivity.

        1. Eric,
          Generally I’m on board with interpreting people’s language in the most reasonable manner we can. For philosophical discussions to be productive, there must be at least some degree of interpretational charity on all sides. It’s trivially easy to interpret a position we disagree with in a straw man fashion and attack just the straw man.

          On the other hand, we can take this charity too far, until we end up not talking about the person’s actual position anymore, but some variation of our own that we’re projecting onto them. In my post on Chalmers’ views, I basically did something like that, although I clarified that I had probably taken things to a stage where Chalmers wouldn’t agree with it.

          If Searle were here, we could simply describe what we think his position is and ask him whether we’ve got it, and if not, ask for clarification on where we’re wrong. But he’s not and we have to go by what he’s written and said in talks and interviews. Anyway, I’m tired of debating Searle’s position. Let’s focus on ours.

          You talk about “output mechanisms”. The issue I see with this term is that it’s relative. Each neuron has inputs and outputs. That neuron’s output mechanism is its axon, outbound synapses, and the action potentials sent through them. But a neuron is part of a neural circuit, where its input and output are just intermediate stages of processing.

          If we talk about the brain overall, its output appears to be hormone releases and motor signals to muscles. Your language implies you think consciousness is an output in addition to that. Or do you mean more like the intermediate stages noted above?

          My own view is that consciousness, particularly phenomenal consciousness, but really all of its aspects, consists of intermediate processing stages. It’s not an output of the brain. It’s part of the processing that eventually produces the outputs we can externally see (hormones and motor signals). Now, we often use loose language implying it’s an output, as though it were a ghost being produced by the brain, but in most cases, at least among physicalists, that’s metaphorical, referring to our experience of the processing rather than something being output from the brain.

          If you see consciousness as an actual output of the brain, then my question is, what leads you to that view? And aside from it being consciousness, what do you see as the nature of that output?

          Or, by “output mechanism” do you mean intermediate processing states specifically of the brain, as opposed to states in a technological computer with the same information content and dispositions? If so, then are we talking about the unique physical aspects of the brain, the carbon chemistry, genetics, proteins, ion flows, etc, providing some capability that the technological system lacks?

          On the physical vs abstract and dualism, a lot depends on whether you’re a platonist. I’m (currently) not, so to me it’s all physical. We just have models (instantiated in our brain, on paper, or in other systems) that refer to common attributes not specific to any one instantiation of an object. But for a platonist, the abstraction has an existence apart from that. It is a type of dualism, but is very different from Cartesian substance dualism, since those platonic abstractions, in and of themselves, are causally inert.

      7. I definitely agree with you on this “straw man” business Mike. And in truth I think that most of the time people fail to consciously grasp that they’re taking this route, given how difficult it can be to sufficiently understand an opposing position. Your own such charity is surely one of the reasons that so many people enjoy talking with you. But this is not as docile a stance as might initially be presumed. I call it “the Gandhi approach”, and consider it to be an extremely effective brand of rhetoric.

        The interesting thing about it is that I’d actually like more people to employ it in their discussions with me. I’m pleased that you do. Back at Massimo’s I recall distinguished intellectuals instead taking standard shots at my positions, and with popular support, though it seems to me that I’d still come out well ahead. Of course that’s my own perception, though corroborated often enough I think. For example Massimo seemed in opposition with me in many regards, though would permit my bold statements to go unchallenged. Instead he’d let others try to deal with me and feast upon less adept interlocutors.

        And speaking of his blogs, I also remember boring conversations regarding what Hume or whomever actually believed, as if understanding a prominent person’s beliefs better would somehow make them correct in general. So I do appreciate being wary of discussing what Searle believes. But I do enjoy that he’s an extremely prominent modern philosopher who seems somewhat aligned with me. (In truth this concerns an extremely minor element of my ideas, though I’ll take what I can.)

        If Searle were here right now, would he agree more with a guy who doesn’t like his Chinese room thought experiment, or rather a guy who likes it so well that he’s proposed a version of his own regarding thumb pain? Hmm….

        I agree that within brains, neurons may be considered as basic computers in themselves — they accept input information and process it for output function. I suppose that there are elements of technological computers which can be perceived this way as well.

        I see that you’re saying hormone releases and motor neurons provide formally understood brain output. Sure. And is that how affects occur? This would suggest that just as motor neurons regulate the function of the heart, that there must be something beyond the brain that motor neurons and/or hormones animate to cause the affects that we feel. But if this were the case then surely we’d notice such an organ. So no, I’m not suggesting that.

        The thing which this proposal does have going for it is that the diverse things which we feel are experienced by a single entity. Itchiness, hope, tasty food, toe pain, and so on, are experienced by “me”, or an individual conscious entity. Still, without evidence of some sort of affect organ, I guess that I’m more in your camp here — it must exist by means of the brain itself.

        It may be however that we’ve simply been using different terms for this. Even if produced in the brain, I’m saying that affect must exist as an output of the brain. Similarly light should exist as output of the lightbulb. If remaining brain bound forces you to call affect “processing”, well that does work for me — I can simply reinterpret it my own way. Here I can say that affective brain processing is taken as input to a second computer that processes it in a fundamentally different way. I believe that I know why this second computer exists, but not how. And apparently the brain should be doing it, so that would roughly be where.

        Here you might say, “Aha, so then this second computer must exist as the brain itself!” Not exactly. I’m simply saying that the brain produces this second computer, and marginally like a lightbulb produces light. One of these two entities functions on the basis of neurons, while the other functions on the basis of affects.

        Try this. Whether “carbon chemistry, genetics, proteins, ion flows, etc”, it seems to me that consciousness will not exist as information alone — I know of nothing that exists like that. Thus if my entire brain state information were uploaded to a computer file, something like me shouldn’t yet exist. But if that information were downloaded to some kind of robot which implements it somewhat the way that my own body does (and thus possesses “hard problem” mechanisms and so on) then yes, this machine should consciously think that it’s me and might even behave that way. So I can go that far. But rooms that grasp Chinese by means of symbolic information generically printed out for conversion into other symbols, let alone “thumb pain rooms”, remain difficult for me to even conceptually believe in. It seems to me that I’m merely demanding respect for the physics involved, whatever that physics might be.

        Platonic abstractions are considered to be causally inert, even by platonists? Interesting! I’m an atheist who thus considers God to be causally inert….

        1. Eric,
          On affects, maybe I can throw you a life line on the output thing. An important part of an affect is the interoceptive resonance. When the brain receives a stimulus that leads to a reflexive reaction, there are two parts of the reaction, the part that can be overridden by higher level circuitry, and the part that can’t.

          The part that can’t includes a number of physiological changes, heart rate, breathing rate, sweating, gastrointestinal reactions, etc. When a stimulus causes a reaction, that reaction is communicated to the higher level circuitry, but that circuitry also receives the interoceptive signals from those resulting physiological changes.

          Put another way, a feeling is both the mental perception of the reaction, as well as the feeling of its physical effects. This is almost certainly why we call these states “feelings.” (It’s also why we can take physical pain killers to help with emotional pain.)

          This interoceptive component shouldn’t be underestimated. It’s a core part of the experience. That’s true to the extent that if, for unrelated reasons, we have similar interoceptive impressions, it can actually trigger the affect in a sort of reverse activation.

          So, from that standpoint, affects are an output, even of the brain. They’re also an input. Or more accurately, a resonance loop.

          I agree imagining a Chinese room having that doesn’t seem intuitive. But we have to remember that, as far as the brain is concerned, it’s the signals that matter. And those signals could come from a virtual body in an uploaded environment. And to the extent we’re going to ignore the logistical problems with the Chinese room, the same kind of virtual body could be instantiated in its instructions and operator actions.

          On respecting the physics, remember that there’s no such thing as information without a physical substrate, and I don’t think there is consciousness without that substrate performing a certain collection of interrelated functions. At no point does anything non-physical happen, even in a virtual environment, even in a Chinese room.

          To rule out those last two, you have to posit that only a certain type of physics is sufficient. You can do that. (Many do. It’s what biological naturalism is about.) Maybe that will eventually turn out to be the reality. But until someone can convincingly identify what makes those specific physics necessary, I’ll be skeptical of that move.

      8. Mike,
        I appreciate you throwing me a life line here, though if accepted I fear that I’d mislead you about my position. What you seem to have presented me with is your own conception of brain architecture. Or this might even be the conception of neuroscientists in general. Permit me to walk you through a different model either way. It may not be right, though you’ll need to understand it in order to effectively point out its various flaws. Otherwise, and without even knowing it, your criticism would effectively be like whacking a straw man. You can bring along your conceptions of neuroscience if you like, though not in an architectural capacity. This may be challenging however since you might not realize where the engineering stops and the architecture begins. I believe that this will be required in order to successfully grasp the following model however.

        I’m sure that you can conceive of a time when neuroscience becomes so advanced, that we can objectively monitor affective brain states. Thus just as you can tell a doctor when something hurts, as well as hurts more, an objective machine will be able to quantify how good to bad a person feels at a given instant.

        Furthermore at such a future time let’s say that computer science in conjunction with neuroscience, figures out how to create a computer which is able to produce affective states. As I define it these scientists would thus create primary consciousness. “There is something it is like” in order for phenomenal experiences such as pain to exist. But this machine should not yet display functional consciousness. Indeed, it seems to me that the computer itself would function the same regardless of how good or bad an associated conscious entity is made to feel. This is to say that the conscious entity wouldn’t yet be hooked up to the computer which creates it, or somewhat like an electric potential which has no motor to drive. So the qualia here would be an epiphenomenal trait, but we’d know that it exists since the relevant physics would be understood and observed.

        At some point you can reinterpret the following from your own brain architecture if you like, but from mine the machine itself would not experience any qualia. Similarly I’d also say that my brain never does — in either case the associated physics should produce a conscious entity that does the experiencing. The essential difference between these two machines is that my brain evolved to produce functional consciousness, while the consciousness that I’ve just proposed will not yet be functional. So let’s get into what this would take under my model.

        In order to tweak this machine so that it has a form of consciousness that’s also functional (or what I call the second computer), these scientists would need to provide the experiencing entity with an ability to diminish negative qualia and/or increase positive qualia, which is to say an output mechanism. So how might they do that? In the end this should depend upon the main computer — essentially it would need to detect that the conscious entity feels good to bad, and so provide it with the option of diminishing or increasing what it feels in various ways. At what point does the conscious entity decide that a given pleasure is not worth associated pain, and therefore might diminish them both if required? Once such a decision is detected, the main computer would then oblige in some capacity. So under my model this would be extremely basic functional consciousness, or agency if you like.

        I realize that this last bit may be difficult to grasp. If we do build a machine that produces qualia, how might we get it to detect what the associated conscious entity wants done, such as adjust competing qualia levels, and so have it do so? I do not know, but this does seem to be what evolution built us to be. This gets into a discussion that I had here with our friend Tina not too long ago.

        There must be an amazing number of muscle contractions and such required in order to properly operate human fingers. The conscious entity needn’t get into any of that however — it simply decides what movements to make, and the non-conscious brain takes care of instituting the proper motor neurons and whatnot to get the associated muscles to function as desired. Similarly I’m saying that for us to build functional consciousness, beyond just creating the conscious entity, our machine must be able to detect what that entity wants done given some basic choices, and then institute that choice.

        I’ll leave this there for now rather than get into various advanced instruments associated with human consciousness. What I’ll be most curious to learn is the extent to which you’re able to grasp this dual computers model so far. If well understood, then yes, I’d like your assessment. Otherwise I predict that your concerns will force me go back into the basics of this model again. I guess that works too.

        1. Eric,
          I think I do understand your two computer model. It did take me a while, due to the terminology you were using, to understand that the second computer was an emergent or virtual one. The second computer feels what the underlying large computer provides, deliberates, and sends signals back to the large computer on what needs to happen. It’s what allows the large computer to be more than only a stimulus-response machine.

          Unless I misstated anything, I understand this model. But I don’t think the second emergent computer is necessary. There are brain structures which can perform what you’re assigning to it.

          In some ways, you’re positing your own version of higher order thought theory, but in your case, the higher order mechanisms exist in a separate emergent computer, whereas in most versions of HOT they exist in the prefrontal cortex, or perhaps the overall fronto-parietal network.

          The problem I do see with your model, aside from the lack of any evidence pushing us toward it, is that you describe the second computer as tiny, which implies that it doesn’t have the capacity to do what you’re assigning to it. (This is similar to all the people who want to point to various subcortical locations for consciousness. They’re also relatively small, without the substrate to credibly do what’s typically envisioned for them.)

          One reason so many consciousness researchers focus on the PFC, or the fronto-parietal network overall, is that these regions are vast, with plenty of computational substrate for coordinating action-scenario simulations to decide which lower level impulses to allow or inhibit. These are also the regions known to be involved in action planning and, in the case of the parietal lobe, multi-modal sensory integration. They’re the regions that make functional sense for the role.

          I used to think your model might simply be the more conventional one at a different level of abstraction. I can still envision a way to find it so, but similar to what I said above, I don’t think you’d agree with the result. I think for your model to be viable, someone will need to find evidence for that second computer, in other words, evidence that is only, or at least best, explained by your theory.

      9. Mike,
        If you do understand my model pretty well, then it stands to reason that you’d have a reasonable ability to predict the sorts of things that I’d say about various random questions regarding conscious function as I define the term. For example you might predict what I’d say about the split brain position from your most recent post. So perhaps you could jot down your suspicions and then privately check yourself on this once I tell you what my model suggests? Currently I’m pessimistic.

        It may be that the second computer idea has been quite problematic. James of Seattle once suggested this to me. Above I was hoping to finally get around this with the scenario about a human built machine which creates an associated conscious entity beyond that machine. I figured that since I was talking about something which we build that creates something other than it, and somewhat as a lightbulb creates something other than lightbulbness, that my meaning might emerge better for you.

        So okay, no more dual computers. From now on I have a single computer model associated with brain function, and it creates consciousness as I define the term. In order for our scientists to build a machine which also does this, affective states would need to be produced by a humanly fabricated machine. And note that the associated conscious entity would not exist as that computer itself, but rather as an experiencer of existence. By definition the affect experiencer will want to experience fewer things that feel bad to it, as well as more things that feel good to it. Note that regardless of what creates you and I, we are experiencers of existence rather than what creates us. We are not brains, but rather products of brains.

        It’s the desires of a conscious entity which I refer to as a primitive form of thought. I wouldn’t say that such a thing can signal its desires to the computer which creates it however. I see this entity essentially as something which simply desires — substrate should not exist from which to signal. Instead I believe that these desires should be detectable by a brain which creates it. So for a human made machine to have more than just consciousness, but rather functional consciousness, it would need to detect the desires of the conscious entity that it creates, as well as do applicable things given what’s detected. This would be an output mechanism of that conscious entity, and so technically provide functional consciousness or agency.

        Conversely human agency can be displayed by consciously moving a finger — the brain detects this desire and so causes the proper motor neurons to fire to get what the experiencer wants done.

        I realize that you’d like to figure out where consciousness exists in the brain, as is standard. I can at least agree that some parts of the brain must be more associated with conscious function than others. But ultimately I believe it will be most productive to consider consciousness as a product of the brain rather than something which exists as certain or many parts of the brain. So my own model takes that approach.

        Hopefully, since I’m now speaking of a single computer which produces something phenomenal, rather than a virtual computer which I guess you consider too small to do what it does, what I mean by “consciousness” will begin to become more clear to you. Let me clarify however that this is my own deficiency rather than anyone else’s. I need help illustrating the nature of my models so that others might see where improvements are needed. It’s hard to effectively criticize something that isn’t understood.

        1. Eric,
          I’m not sure what you’d say about the split-brain patients. Would there be two second computers? Or would the one still somehow emerge from the limited communication channels available?

          I found labeling the second emergent system a “computer” a bit odd, and felt like there were issues there. But it isn’t my main concern with the concept. Although ditching the “computer” label does make your idea more mainstream in a number of ways. A lot of people see consciousness as an emergent system of some kind. (It’s basically the main idea in IIT.)

          In terms of emergence, I actually don’t have a problem with that as an idea, at least as long as we’re talking about weak epistemic emergence as opposed to the stronger ontological variety. But saying that consciousness is emergent doesn’t interest me too much. I won’t be satisfied with any explanation that doesn’t account for how it emerges, at least at a coarse level.

          Although you haven’t used the word “emergent” yet, and keep falling back on the “output” terminology. Is this just a terminological preference, or is there an ontological distinction here? If so, what would you say it is?

          I’m actually not trying to figure out where in the brain consciousness resides. I think that’s like trying to figure out where on the baseball diamond baseball itself happens. It happens all over the field, but certain locations are crucial for pitching, batting, etc. Likewise, consciousness requires large portions of the brain, although certain locations provide crucial functionality. I am interested in those location / functionality correlations.

          On your meaning of “consciousness”, I thought you largely equated it with sentience, the ability to feel good or bad. Is that correct? If so, what are your thoughts about certain feelings being localizable to regions of the PFC, notably the changes from lobotomy type procedures?

      10. Mike,
        Well the split brain scenario gets quite deep into my models so it should actually take a proficient student to guess my perspective there. With consciousness essentially defined as sentience, split brain people would essentially need to house two different sentient beings in order to truly have dual consciousnesses. That’s not a claim that I’ve heard for this condition. Even in situations where one hand might be working against another, I suspect that this exists by means of non-conscious function. So yes, my ideas suggest single consciousness for these subjects.

        I’ve thought about why evolution didn’t provide normal people with two modes of consciousness. Why not permit me to silently read a book while also having a conversation with someone? Why must we split one consciousness between various conscious activities that we might thus work on simultaneously ? I suspect because the dual purposes of two conscious entities would tend to conflict too often to be productive in general. Perhaps some kinds of life have this though.

        Interestingly enough, I don’t consider the conscious entity inherently unified in a temporal sense. I believe that each moment there’s technically a new self, though in practice they tend to be connected with past selves by means of memory, and the future by means of present hope and worry about what’s going to happen. This is observed since people who lose their memories tend to become disassociated from who they used to be, and people without hope and worry tend not to look after the interests of future selves. In either case they effectively become temporally disconnected present entities.

        I don’t mind using the emergence term to describe causal dynamics. It seems to me that light can emerge from a lightbulb in a reasonably understood way. This is to say that science has been able to effectively reduce this sort of thing.

        “But saying that consciousness is emergent doesn’t interest me too much. I won’t be satisfied with any explanation that doesn’t account for how it emerges, at least at a coarse level.”

        Actually I do provide an extremely coarse proposal for this. It’s that affective states emerge by means of associated physical dynamics. Note that we can express certain things conceptually given that they’re the product of conscious convention. The words that we use are examples, and even MS Windows. They can thus be realizable by means of a vast spectrum of physical dynamics — even a “Chinese room”.

        Things that do not exist by means of conscious convention however, such as the light which is produced by a lightbulb, seem to require specific physics in order to exist. So while the concept of “light” can exist in all sorts of languages, and might even be expressed by means of a Chinese room, light itself should not be produced without associated physics. Or for a more complex concept, a Chinese room may be used to perform the function of MS Windows, but only given that this computer program exists as a conscious convention. I believe that affective states which we feel exist more like the light from a lightbulb than by means of conscious convention. Thus if you’d like to understand what consciousness/ affect emerges from, I believe that you’ll need to discover the associated physics.

        I’m ditching “the second computer” idea in my discussions with you, not because I consider it to be a bad analogy, but rather because in some ways the term seems to put you on the wrong track. For example, though the entire brain exist as substrate for producing the conscious entity as I see it, you keep telling me that the produced entity must instead be considered a massive computer. It’s quite frustrating. If you can’t accept that I’m not talking about this second computer as brain, but rather what brain produces, and that the produced entity should do less that 1000th of 1% as many calculations as the brain, then I must change my tactics. Here I can just say that the brain creates consciousness and just not call it a second computer. Who’s going to dispute that? The model itself remains the same though. This is to say that the experiencer interprets affect motivation like pain, an informational conduit such as vision, and a degraded sense of past consciousness (memory), and so constructs scenarios about what to do to promote its phenomenal well being.

        On localizing the source of certain feelings to the PFC, sure. The brain creates these sorts of things for the conscious entity to experience. I presume that associated physics is required to do so.

        1. Eric,
          “With consciousness essentially defined as sentience, split brain people would essentially need to house two different sentient beings in order to truly have dual consciousnesses. That’s not a claim that I’ve heard for this condition.”

          This is complicated, but I think some emotional constructivists might make that claim, since they see emotional feeling as a cortical phenomenon.

          “I’ve thought about why evolution didn’t provide normal people with two modes of consciousness. ”

          For me, the reason is that what we call “consciousness” requires vast resources. If it didn’t, then it might provide that ability. Strangely enough, split-brain people find it easier to divide their attention in that manner, drawing different shapes with each hand, something healthy people struggle to do unless the shapes are identical or complementary in some manner.

          “Actually I do provide an extremely coarse proposal for this. It’s that affective states emerge by means of associated physical dynamics.”

          Yeah, that’s too coarse. Obviously it’s true, but the interesting question is: why is it true?

          “Or for a more complex concept, a Chinese room may be used to perform the function of MS Windows, but only given that this computer program exists as a conscious convention.”

          Suppose we have a human being who is locked in, with only very limited ability to communicate. In other words, they are robbed of their ability to have objective physical effects in the world. We only interpret that they’re conscious from what they can communicate with us. What is the difference between the ontology of that person’s consciousness and that of MS Windows?

          “I’m ditching “the second computer” idea in my discussions with you, not because I consider it to be a bad analogy, but rather because in some ways the term seems to put you on the wrong track.”

          Actually, it’s not labeling it as a computer that causes that reaction from me, but statements like this:

          “This is to say that the experiencer interprets affect motivation like pain, an informational conduit such as vision, and a degraded sense of past consciousness (memory), and so constructs scenarios about what to do to promote its phenomenal well being.”

          My interpretation of that is that you’re assigning a lot of work to this experiencer (computer or otherwise). More than I suspect 1/1000th of 1% of the brain’s capacity (about 860,000 neurons) can handle. Essentially you seem to be describing deliberative imagination, which, based on what I’ve read, consumes most of the cortex.

        2. Eric, I’m trying to understand what you mean by “affective state”. You talk about it as if it is a specific kind of physical thing like light is a physical thing, such that it gets “produced” like light as opposed to being a relation among physical things which relation “emerges”. Would you say that we may one day have an “affective state” meter, the way we have light meters?

          *

      11. James,
        I’m not actually sure that I like the “affect” term. For my taste it seems a bit esoteric. But then if that’s the way that the pros are trending, I’ll go along. Earlier this year I was instead using the “valence” term quite a lot for this, though Mike informed me that people commonly take this in a more broad way that what I meant. Qualia is another such term. It might also be taken too broadly however, perhaps covering informational senses as well as things which feel good and bad. I’ve always hated the “utility” term given the standard “useful” connotation, which I believe came from Bentham. I quite like sentience.

        Anyway you seem to have pegged me pretty well. Yes I see affective states as causally produced dynamics rather than as relation based dynamics. Mind you that I do believe that relation based dynamics causally exist, but as a product of conscious abstractions. Without a conscious entity to think “Five balls”, then I don’t consider there to be five balls, but rather causal dynamics which a person might use the English language to refer to that way. My point is that abstractions, such as the legal construct of MS Windows, or maybe a given mathematical statement, shall causally exist by means of conscious experience, though not otherwise.

        On an “affective state” meter, I did actually propose such a thing here a few days ago, and from the presumption that Mike should have no problem with that idea given how bullish he is on technology in general. It begins with “I’m sure that you….” https://selfawarepatterns.com/2019/11/10/the-problems-with-the-chinese-room-argument/comment-page-1/#comment-42005

        Whether or not we ever do build anything like that, this does reflect my conception of an affective state — special stuff to potentially detect.

        Mike,
        It’s good to hear your assertion that it’s obviously true that affective states emerge by means of associated physical dynamics. As for why, I don’t think we need to figure that out. You and I simply believe this because we’re naturalists. We could be wrong, though in that case reality wouldn’t ultimately make much sense in that regard.

        On being “locked in”, my own conception of this is to essentially have no informational senses, such as vision, hearing, and so on, whatsoever. Unless otherwise stated I’d leave in affects such as pain and worry, as well as memory and the ability to operate muscles. But note that one wouldn’t have feedback from which to grasp what he or she was effectively doing in that regard, so this should be anywhere from fruitless to personally dangerous. Regardless of how this state happens to be defined however, something should by definition exist beyond the abstract, regardless of whether or not it’s conscious.

        Conversely MS Windows exists today as a consciously agreed upon convention. Note that a programmer could get into trouble for distributing unauthorized software that’s perceived too similarly. It seems to me that it’s only because MS Windows exists as a product of conscious convention that it’s realizable in any number of formats, such as by a Chinese room. Two other examples of abstractions would be “Santa Claus” and “five”. So while I’m not concerned about preserving the physics of abstractions such as MS Windows, I am concerned about preserving the physics of affective states. This shouldn’t otherwise exist.

        On the “less than 1000th of 1%” thing, there’s clearly been some miscommunication here, and I suppose somewhat because I have a less rigid conception of computation than you do given that I’m no computer guy. So yes, forget about consciousness existing as a computer if it’s going to bend your conception of the term too far. I’d like you to see however that I’m not implying that human consciousness requires only an upward of 860,000 neurons. As far as I’m concerned it may as well use the entire 86,000,000. (By the way, 1000th of 1% of that would actually be 860).

        What I’m saying is that the brain is a computer (or even just a machine), which creates something else that’s useful to refer to as consciousness. Furthermore even though the brain creates it, it may be said to function very differently from the brain which creates it. I reduce consciousness to three types of input, one type of processor, and one type of output. Regardless of how many calculations that the brain does to facilitate consciousness, the components that I’ve just mentioned do very, very, few calculations in comparison. How many calculations must I personally do in a conscious sense, to effectively snap my fingers? Not many I think. How many calculations must my brain do to facilitate my figuring, as well as effectively use my desires to seamlessly cause the proper neurons to make my hand move in a corresponding way? This number should be staggeringly high.

        Here’s another way to think about this. We all know the story of Pinocchio, or the wooden magic puppet who ends up being turned into a real boy. Even if magic, and so no neurons fire in his head, I’m saying that this conscious entity will still need to figure out how to live his life. Figuring out how to live will require such an entity to do “If…then…” kinds of calculations about what’s going on as portrayed in the story. And even if this wooden boy were to become a Grand Master in chess, I’d still say that his conscious function wouldn’t require all that many calculations, or at least not when compared against the staggering number which human brains make each moment to support human conscious function.

        So maybe when you consider my conception of consciousness, you could disconnect it from the non-conscious brain/ computer which I consider to facilitate it? Instead I’d like for you to put it in terms of the sorts of figuring that even Pinocchio would need to do.

        1. Eric,
          On the “affect” vs “valence” terms, my point was that valence is a much simpler concept, and exists in a lot of non-conscious systems. Generally any system that has a preference about a particular state of affairs has valence, so it includes single celled organisms, plants, and even technological systems.

          Affect, understood as the feeling of that valence, is a much more complex state. That said, I’m not wild about the word myself. People often think you’ve misspelled “effect”, or just plain don’t understand the meaning. I used to use the word “emotion”, but people mean too many different things by it.

          “It’s good to hear your assertion that it’s obviously true that affective states emerge by means of associated physical dynamics.”

          I probably should clarify that the reason I think it’s obvious is because everything emerges by means of associated physical dynamics.

          “As for why, I don’t think we need to figure that out.”

          I probably should have said “how” rather than “why”. But I disagree. I want to understand mental phenomena down to their non-mental components. Any view which doesn’t provide at least a putative bridge across that divide, might be useful for some purposes, but doesn’t scratch my itch to understand.

          I think you’ve got locked-in syndrome backwards. Typically the patient has inbound senses, but lacks any ability to move or communicate except maybe by being able to twitch one eyelid or something along those lines. Whether they are actually conscious, or just in a vegetative state, is a matter of subjective judgment.

          The hard question is, what makes whether their brain is conscious a fact of the matter, while whether a computer is executing MS Windows is only true by “conscious convention”? Both have an objective physical reality. What, other than our strong sentiments about it, makes one more real than the other?

          “As far as I’m concerned it may as well use the entire 86,000,000. (By the way, 1000th of 1% of that would actually be 860).”

          You left off some zeroes. The brain actually has around 86,000,000,000 neurons (i.e. billions rather than millions). But if you would have been comfortable with 860 neurons, I really wonder what we’re actually talking about here.

          “Instead I’d like for you to put it in terms of the sorts of figuring that even Pinocchio would need to do.”

          Well, we can discuss it at the mental level, as you do. But as I said above, I want an accounting from that to the non-mental. Setting aside the issue of computationalism, I’m wondering how such a simple system could do the things you list, such as if-then-else processing, without a reasonable amount of dynamics to work with. This isn’t so much a computation issue as a functionalist one, or even a naturalist one. Unless you’re positing magic, it seems like there needs to be something there for stuff to happen.

      12. Mike,
        I do appreciate you providing your perception of “valence”, since I have no use for a term which suggests “preferences” for non-conscious systems. I’d rather people not speak of plants, microorganisms, technological machines, and so on “preferring” various things, that is unless they’re ready to go the whole way, or defend them as being conscious. Well actually no, let me restate that. Technically I don’t mind people speaking colloquially this way, but in that case they shouldn’t devise formal terms like “valence” for this without accepting the full implications. If we speak of plants having valences because they “prefer” to have water, we formally imply desires. I see this as a general problem in mental and behavioral sciences today.

        On your itch to understand the “How?” question, I think that you need to be a bit more patient. Without an effective “What?” for any “How?”, these efforts should largely be a waste of time. Biologists have found this very thing to be the case regarding the “life” term, as you know. Without effective psychology, or the very thing which my own ideas concern, brain science should engage itself in all sorts of wild speculation. I realize that you’re just as curious about psychology as I am, but I think that you also need to accept this subject as an essential bridge to what you’re most curious about.

        You’re right that I got the locked in syndrome wrong. It’s about paralysis. Instead I brought in a different idea that I consider interesting, or existence without senses. So missed your question.

        The brain creates a conscious entity as I define the term, as long as it creates an affective state. I’m saying that I consider this definition useful ontologically. There is something it is like to feel good/bad.

        MS Windows, however, exists as a conscious convention, or by means of patents and such. Yes any agreed upon example of MS Windows will actually function in that way, though it’s the verbiage which defines the idea in the legal capacity which concerns Microsoft. Thus having an affective state is not a conscious convention, though having MS Windows is a conscious convention.

        It may be that my Pinocchio example hit its mark. As you said, “Well, we can discuss it at the mental level, as you do.” Yes, it’s a brain of 86,000,000,000 neurons (thanks on the zeros!) that creates the human “mind”. And this mind may be modeled regardless of what creates it. Without effective models of its function at this high level, neuroscience has a fundamental problem given that it needs to explain psychology rather than the converse. So let’s try to straighten out psychology so that neuroscience will have something effective to explain. Once you do gain a reasonable working level grasp of the architectural models that I’ve developed, you’ll also be able to effectively assess them.

        1. Eric,
          On “preferences”, do you have suggestions of another way to refer to a plant’s tendency to grow its roots toward water? Or a single celled organism turning toward nourishment and away from noxious substances? Or a phone programmed to increasingly nag the user as its battery levels get low?

          On neuroscience and psychology, this is old ground between us. I’ll just note that neuroscience continues to make steady progress, despite a lot of philosophers and psychologists insisting that it’s a waste of time. There are philosophers and psychologists contributing to that progress, but they’re the ones engaged with the science.

          “The brain creates a conscious entity as I define the term, as long as it creates an affective state. I’m saying that I consider this definition useful ontologically.”

          What about that definition makes it useful? What attribute, if it were missing, would make it no longer useful? In other words, what about this answer makes it something other than a definitional fiat on the difference between the consciousness of a locked-in patient vs a computer running a particular software package?

      13. Mike,
        Regarding another way to refer to things like roots growing towards water and so on, it stumped me for a bit. Good question. What I don’t like here is taking a formal term such as “valence”, and then defining it as loosely as “preferences”. To me this formally implies agency. But what would be a better definition? I’d be fine with “teleonomy”, or the phony appearance of purpose, though this is defining one esoteric term by means of another.

        When I was in school I recall taking a great cosmology course that may apply. The professor would commonly speak of stars in human terms, such as what a given one “wants” and so on. At one point however he felt the need to clarify that he only spoke this way as a standard convention rather than literally. So that might be a helpful way to go. Here we’d overtly speak of desires and preferences for non-conscious function, though with the general understanding that it’s not literal. Going this way would be easier however if there were reasonable agreement on what wants, preferences, and so on, ultimately reduce back to.

        I shouldn’t have implied that searching for the “How?” of affect is a wast of time, and I certainly don’t believe that neuroscience itself is a waste of time. I agree that it does seem to make steady progress. It’s just that there are some big questions outside of neuroscience that clearly need answering.

        For example I’ve criticized the ideas of Lisa Feldman Barrett here plenty. I dislike the perception that including some neuroscience for a psychological claim will also cause it to be more worthy. This is exactly backwards, I think. Though no child psychologist worth their salt would treat babies as if they’re emotionless until socially taught to feel things like fear, as things stand today she’s able to use neuroscience to nevertheless bring this position legitimacy.

        Regarding the potential usefulness of my definition for consciousness as affective states, I should probably get into that over at your most recent post. This is the one which notes how our computers fail when we try to get them to do various things which are quite simple for a human to do (and I’d say that most insects probably have our computers beat in some ways as well). So I’ll probably head over there soon.

        But yes, a paralyzed person who is thus “locked in”, might be considered similar to a computer that accepts inputs and processes them, but produces no output function. (Actually a locked in person will produce non-conscious output, such as heart function or affects like worry, but not conscious output such as being able to talk.) Note that virtually no one is suggesting that any modern software packages produce affects, whereas brains clearly do. So the difference should be that existing as the function of one of our software packages should not be personally consequential, while existing as a locked in person should tend to be horrible. Surely you agree that one should be conscious while the other should not be? But if the theory were true that generic computer processing alone which follows a specific procedure can produce “thumb pain” and so on, well that would be a different story. This seems like an amazing claim and I don’t know of any supporting evidence for it.

        1. Eric,
          “I dislike the perception that including some neuroscience for a psychological claim will also cause it to be more worthy. This is exactly backwards, I think.”

          But wouldn’t you agree that psychological theories have to at least be compatible with neuroscience? If we have a theory of innate emotions, and we can’t find consistent neural correlates, you don’t think that has consequences for that theory?

          “Though no child psychologist worth their salt would treat babies as if they’re emotionless until socially taught to feel things like fear”

          I don’t think Barrett argues that children don’t have emotions. That’s really a straw man version of her position. Although she might say that very young infants only have affects. (It’s a matter of philosophy whether the distinction between affects and emotions is meaningful.)

          I’m sure, for moral reasons, a clinical psychologist will assume a subject has conscious feelings. But in terms of understanding, aren’t you the one arguing that we shouldn’t let morality cloud our judgement in these matters?

          “Surely you agree that one should be conscious while the other should not be?”

          For purposes of these discussions, assume I’m a heartless Vulcan who can only be convinced with solid logical reasons, and is utterly unmoved by the emotions of the situation. (I’m not a heartless Vulcan, but can put the hat on when necessary.)

          “This seems like an amazing claim and I don’t know of any supporting evidence for it.”

          If the primary business of nervous systems is information processing, then the claim falls out as a natural consequence. Maybe someone will eventually find evidence that something else besides information processing is happening, but I haven’t seen any yet (aside from processes that physically support the information processing), at least not any that are widely accepted.

    2. To put what Mike said into my terms: Searle is the one saying that computers that operate on syntax are equivalent to lookup tables. Computationalists like me say no, what the computer has to do instead of using lookup tables, in order to actually get the job done, is do specific kinds of computations, and those specific kinds of computations are exactly what we call conscious-type computations. It’s possible that some computations can produce output without being conscious-type computations (at least not the conscious-type computations humans use) and nevertheless appear like a good approximation of human consciousness. Thus, a thirteen-year-old, non-native English speaker. But Searle postulated a system able to pass a robust Turing test, and I don’t see how that would be possible except by using sophisticated human-like conscious-type computation.

      1. I never caught that Searle regarded syntax as corresponding only to lookup tables. He doesn’t appear to mention lookup tables in the 1980 paper. But he definitely seems to regard syntax as incapable of intentionality, although he doesn’t get into what it is about intentionality that puts it out of reach.

          1. At the lowest level, a computer is a lookup device. Every operation the CPU performs is driven by microcode instructions that it looks up from the current instruction. Any math or logic operation can be looked up using the operands as table indexes (although that’s not usually how it’s done).
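
            For a concrete (if toy) illustration, here is a minimal Python sketch, purely illustrative and nothing like how a real CPU is wired, in which 4-bit addition is “performed” with no arithmetic at the point of use, only a precomputed table indexed by the operands:

            ```python
            # Toy example: an arithmetic operation done purely by table lookup.
            # The table is built once up front; "adding" afterwards is only indexing.
            ADD_TABLE = [[(a + b) % 16 for b in range(16)] for a in range(16)]

            def add4(a, b):
                """'Add' two 4-bit values by looking the answer up rather than computing it."""
                return ADD_TABLE[a][b]

            print(add4(9, 5))   # 14
            print(add4(12, 7))  # 3 (wraps around, as a 4-bit register would)
            ```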

            All that stuff about Chalmers and the CSA… a big point there was that any given sequence of computer operations is fixed and determined by a fairly narrow range of inputs. Such a sequence can easily be translated into a table, with those inputs used as indexes.

          There is a direct mapping between a state diagram and a lookup table.
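
            As a sketch of that mapping (my own toy example, not anything from Chalmers): a simple two-state diagram, say a coin-operated turnstile, can be written straight into a transition table, and “running” the machine is then nothing but repeated lookups:

            ```python
            # A state diagram flattened into a lookup table: (state, input) -> next state.
            TRANSITIONS = {
                ("locked", "coin"):   "unlocked",
                ("locked", "push"):   "locked",
                ("unlocked", "push"): "locked",
                ("unlocked", "coin"): "unlocked",
            }

            def run(start, inputs):
                """Step the machine; each step is just one table lookup."""
                state = start
                for symbol in inputs:
                    state = TRANSITIONS[(state, symbol)]
                return state

            print(run("locked", ["coin", "push", "push", "coin"]))  # unlocked
            ```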

          1. At a certain level, any system that takes in inputs and produces outputs could be said to be a lookup device, including biological neural networks.

            But to me, when we use the phrase “lookup table”, we’re talking about something like this: the input is received, one table lookup happens, and then the output is produced.

            As soon as we start talking about numerous lookups in sequences, selections, iterations, hierarchies, or in parallel, the overall operation isn’t a lookup table anymore but something more sophisticated.
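
            A rough way to picture the distinction (just a toy contrast of my own, with made-up names): a single lookup versus lookups composed by iteration, where the result of one lookup selects the index for the next:

            ```python
            # A single table lookup: input in, one indexing operation, output out.
            SQUARES = {n: n * n for n in range(10)}

            def one_lookup(x):
                return SQUARES[x]

            # Lookups composed with sequencing and iteration: each step is still a lookup,
            # but the overall behavior is no longer one table indexed by the input.
            def iterated_lookups(x, steps):
                for _ in range(steps):
                    x = SQUARES[x % 10]   # the result of one lookup picks the next index
                return x

            print(one_lookup(3))           # 9
            print(iterated_lookups(3, 3))  # 1  (3 -> 9 -> 81 -> 1)
            ```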

          2. I agree any IPO (or, as JamesOfSeattle would put it, I→[M]→O) can implement the [P]rocess as a lookup table, but I’m not sure why such a system would ever require more than one lookup table, so long as a given pattern of inputs always indexes a table entry of outputs.

            More to the point, we’re not talking about any IPO, we’re talking about computers, and I meant what I said literally. Executing software really is little more than a sequence of table lookups (there’s a toy sketch at the end of this comment).

            (What might be a sticking point is whether working memory is seen as part of the lookup process. If by “lookup process” we mean nothing more than indexing a table for an entry, it’s a lot harder, but I think it’s still possible. No one would actually implement such a system, although RISC systems sort of step in that direction.)
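
            A crude Python sketch of that picture (a toy dispatcher of my own invention, not real microcode): each execution step looks the current instruction up in a table, with an accumulator serving as the working memory carried between lookups:

            ```python
            # Toy dispatch loop: executing a "program" as a sequence of table lookups.
            PROGRAM = [("LOAD", 5), ("ADD", 3), ("MUL", 2), ("PRINT", None)]

            HANDLERS = {
                "LOAD":  lambda acc, arg: arg,
                "ADD":   lambda acc, arg: acc + arg,
                "MUL":   lambda acc, arg: acc * arg,
                "PRINT": lambda acc, arg: (print(acc), acc)[1],
            }

            def execute(program):
                acc = 0                              # the "working memory"
                for opcode, arg in program:          # fetch the next instruction
                    handler = HANDLERS[opcode]       # decode: a table lookup on the opcode
                    acc = handler(acc, arg)          # execute, carrying the result forward
                return acc

            execute(PROGRAM)  # prints 16  ((5 + 3) * 2)
            ```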

        2. I didn’t mean to say Searle thought syntax corresponded to lookup tables. I meant that he thought they were equivalent with respect to their relation to consciousness, in that, on his view, neither can provide any.

      2. “Computationalists like me say no: what the computer has to do instead of using lookup tables, in order to actually get the job done, is perform specific kinds of computations, and those specific kinds of computations are exactly what we call conscious-type computations.”

        Do you mean the sorts of things computers can do now, or are you talking about some putative computations currently beyond our ken?

        1. To answer a question with a question: when you say “things computers can do now”, do you mean things they would be capable of doing now if programmed correctly, or things they have done already? Obviously no one has programmed a computer sufficiently to mimic a human. But they have programmed the pieces separately (neural networks, semantic pointers, etc.).

          1. “Obviously no one has programmed a computer sufficiently to mimic a human.”

            Right. So my question is whether, by “specific kinds of computations”, you mean traditional computations as we know them, but arranged in some way (which we don’t yet know how to do) that creates consciousness.

            You weren’t referring to hardware that hasn’t been invented yet, but to software?
