Is consciousness really a problem?

The Journal of Consciousness Studies has an issue out on the meta-problem of consciousness.  (Unfortunately, it’s paywalled, so you’ll need a subscription, or access to a school network that has one.)

As a reminder, there’s the hard problem of consciousness, coined by David Chalmers in 1995, which is the question of why or how we have conscious experience, or as described by others, how conscious experience “arises” from physical systems.

Then there’s the meta-problem, also more recently coined by Chalmers, on why we think there is a hard problem.  The meta-problem is an issue long identified by people in the illusionist camp, those who see phenomenal consciousness as an illusion, a mistaken concept.

The JCS issue has papers from a number of illusionists, which include the usual suspects, like Daniel Dennett and Keith Frankish discussing the virtues of the illusionist outlook.  It also has an entry by Michael Graziano discussing his attention-schema theory of consciousness.  Graziano gave a description of the meta-problem in his book Consciousness and the Social Brain back in 2013, although he didn’t call it that.  (Graziano has a new book out, which appears to be an updated look at his theory, which I might have to read at some point.)

Picking through the entries, two in particular caught my attention.  One, by Hakwan Lau and Matthias Michel (whose work I’ve highlighted a lot lately), looks at the meta-problem from a socio-historical perspective.  The main gist is that on some subjects, such as consciousness, we psych ourselves out, collectively convincing ourselves that it is unsolvable.

This has the perverse effect of making scientists reluctant to work on it, while attracting senior scientists, often from other fields, who at the end of their careers are interested in making a revolutionary breakthrough, which often leads to outlandish ideas.  This in turn sets up a feedback loop leading to a breakdown of peer review, credibility, and funding.

The solution, Lau and Michel contend, is to work on incremental gains, to build up empirical evidence.  Gradually this will diminish many of the mysteries and show that the problems are not as insoluble as they might appear.

Of course, many will never accept the theories produced by such an approach.  But as Lau and Michel point out, this is often true in science: Isaac Newton’s contemporaries were uneasy about the action at a distance implied by Newtonian gravity, and Einstein was reluctant to accept the results of quantum mechanics.  In each case, the old guard was eventually replaced by a newer generation of scientists who didn’t find the new theories objectionable.

I think there’s a lot to this view.  But I also think the incremental approach has been happening in psychology and neuroscience for a long time.  I’ve read plenty of neuroscience material consisting of studies of aspects of consciousness, but that studiously avoided the word “consciousness”, focusing instead on specific capabilities and the neural wiring underpinning them.  Much of this material is what personally convinced me that the consciousness problem is overstated.

The other paper that caught my eye has a similar theme.  Justin Sytsma and Eyuphan Ozdemir challenge Chalmers’ contention that perception of the hard problem is widespread among the lay public, that it’s part of our folk psychology.  (There’s a free preprint available.)

They provide evidence showing that people are only slightly less likely to attribute phenomenal experiences like seeing red to a robot as opposed to a human.  The data do show that they’re less likely to attribute pain to the robot, but not as much as might be implied by a widespread feeling that phenomenal consciousness is uniquely human or biological.

In other words, according to the authors, most of the lay public don’t appear to hold a concept of the philosophical version of phenomenal consciousness, and so most of them don’t have the intuitive concern about the hard problem.

If true, I can’t say I find it particularly surprising.  Keith Frankish recently asked on Twitter if people recalled thinking about consciousness as a child.  I didn’t respond, mostly because I wasn’t sure what I remembered thinking about consciousness as a child.  I certainly wouldn’t have used the word “consciousness”, but I was trying to think whether I might have pondered it in some pre-terminological fashion.

But the truth is, prior to about ten years ago, I didn’t really give consciousness much thought.  The word “conscious” to me meant little more than being awake.  (Even back in my younger days, when I was a dualist.)  I suspect for a lot of people, that’s about the limit of what they consider about it.  Most of them have no problem conceiving of a robot as conscious.

What do you think?  Did you ponder consciousness as a child?  Or before you were interested in philosophy?  In other words, before you read it was a problem, did you actually perceive the problem?

57 thoughts on “Is consciousness really a problem?”

  1. When I was maybe 7 or so (I am really bad with dates), it occurred to me that my whole life might be a dream, and maybe I was still a little baby in a crib. I think that counts as pondering consciousness. Although not specifically about robots and brains, I was playing the Descartes card pretty hard there (not that I had heard of him yet).


    1. Thanks. I agree. It does count as pondering it. In truth, I probably had similar thoughts. But the only thing I can remember, after seeing a computer in action, but before learning any programming, was a creepy feeling about any possible mind being in there.


  2. I remember as a kid, wanting to be a whale. I thought it would be interesting to see what it was like.
    I still don’t think I was wrong to expect that I could know what it was like to, say, bite a seal in half and swallow it. It would be like biting a hot dog in half and swallowing it. In other words, the two are functionally, and so psychologically, analogous.
    I think I am getting Dretske’s take on such things right here: seal swallowing and hot dog swallowing have similar aspectual shapes, and if I have any right to proclaim that I know what it is like to swallow a hot dog, then I have the same grounds to claim that I know what it is like to swallow a seal.
    But having the seal swallowing experience itself is something else. Even the whale could not say what swallowing a seal was ‘like’ in that sense. Neither of us could ever be a child or whale as a whale or child, dining on hot dogs and seals and then switch around, contextual references intact, to do the converse. We would always be some iteration of ‘child-as-whale’ or vice versa, never having the true experience of the other.
    I understood that instinctively as a child, as I think most children do.
    That means I didn’t really see it as a problem, in the sense that someone like Nagel might see it; as a mystery regarding how such a situation might ‘arise’ in an otherwise publicly accessible world.
    Not a problem, more like – a condition of participation.


  3. ‘The advantage of the emotions is that they lead us astray, and the advantage of science is that it is not emotional’ – Oscar Wilde (The Picture of Dorian Gray)
    ‘Consciousness’, to my way of thinking, is the be-all and end-all, as I alluded to in my article here, which no one read: ‘Is it too early to rule out the Copenhagen Classic interpretation’.


      1. When people were responding to my previous post, I noticed that no one appeared to have even opened my article, based on the WordPress stats. Thank you for viewing it. It just adds my 2 cents based on what I gleaned from my foray into the subject.
        I appreciate the tremendous work you do on your blog. I feel privileged to read your articles. Thanks.


  4. “In other words, according to the authors, most of the lay public don’t appear to hold a concept of the philosophical version of phenomenal consciousness, and so most of them don’t have the intuitive concern about the hard problem.”

    My reaction here is basically, “Duh!” (Compare this idea with the topic of your previous post. 😉 )

    Most of the lay public has no clue why reconciling QFT and GR is so hard (nor even what those amount to). There are many technical topics in many areas where this is true. One has to study a topic to truly understand it. (There is probably something of a Dunning-Kruger effect here. The idea being it takes a certain amount of education on a topic to even understand the depth of your ignorance on it.)

    “Did you ponder consciousness as a child?”

    I think probably in grade school, and certainly by high school. I suspect that has mostly to do with discovering, and taking to, science fiction at a very young age. It comes up in a number of stories, Asimov’s robot stories, for instance.

    I ran into Descartes fairly young, too, again probably due to SF.


    1. That’s true, but it’s also a common assertion that the hard problem represents a deep universal intuition. An intuition that we only develop after reading the literature, and that no scientific evidence can substantiate, looks a lot less compelling than one everyone supposedly has. That said, I’m not entirely sure that the authors’ data isn’t an artifact of how they asked the questions.

      As I noted to Paul above, I did have a creepy feeling on seeing a computer work that maybe there was a mind there. What’s interesting is that this was after seeing tons of science fiction where of course machines had minds. That feeling faded as I learned more about computers. But it remains striking to me that I never pondered it in the way philosophers often assert that everyone ponders it.


      1. “That’s true, but it’s also a common assertion that the hard problem represents a deep universal intuition.”

        I don’t know that I’m at all on this “intuition” channel. As you say, a perception of the hard problem comes from reading the literature, and at that point, I’m not sure it’s right to disdain it as merely an intuition (which suggests “guess”).

        The bottom line (for me) is that there is unquestionably something it is like to be me. Say all you like about the “falsity” of qualia, but I experience them every second I’m awake (as well as in dreams — I had a real doozy last night).

        Nothing we currently understand explains how phenomenal experience can occur, but nothing is like a brain, and we don’t fully understand the brain. Given the clear and present reality of phenomenal experience, the intuition that something interesting seems to be happening seems cogent and coherent.

        If anything, perhaps the intuition that “there’s nothing to see here folks” is just as suspect. It can be read as a second round of Newtonian-Laplacean “the universe is just a clock” thinking that we now understand was wrong.

        “I’m not entirely sure that the authors’ data isn’t an artifact of how they asked the questions.”

        Heh, yeah, surveys. #saynomore

        “I did have a creepy feeling on seeing a computer work that maybe there was a mind there.”

        It seems very human, once one arrives at a theory of mind that recognizes other minds, to see mind in other things. Our distant ancestors saw it in many inanimate objects.

        I was into technology, especially electronics, since the early 1960s and worked with early crude systems (plus I understood how they worked), so I never saw mind in machines. It was only during the initial AI excitement that I began to wonder if they might be capable of it someday.

        By the 1990s or so, I was more-or-less accepting that it was possible, and it’s only in the last decade or so that really thinking about it convinced me it wasn’t. At least not in the computational sense. I still think a Positronic brain might work.

        On the flip side, dogs were always a part of my life, more so as I got older, so I’ve always had a sense of mind regarding animals.


        1. Nothing we currently understand explains how phenomenal experience can occur

          You mean besides the process of representation. A process of representation has all the hallmarks of phenomenal experience. Maybe the pattern/category represented constitutes the object of the “feeling”. Maybe when “redness” is represented and interpreted as such, that is the “feeling” of redness.



          1. That seems awfully tautological to me in that representation seems to imply phenomenal experience without explaining it.

            To me, the hard problem (which I believe is, indeed, hard) is that nothing in physics or technology explains how there can be something it is like to be an information processing system. No such system we know of (except brains) shows any sign of it.


        2. Nothing we currently understand explains how phenomenal experience can occur

          But we can explain physically how representational processes occur, and we can explain that a system which uses such processes and also remembers them, and refers to them, without knowing that they are “representational processes”, essentially will refer to the “represented objects” of those processes, and the language used to make such references will be the equivalent of “feels” or phenomenal consciousness. What else needs to be explained?



          1. “But we can explain physically how representational processes occur, and we can explain that a system which uses such processes and also remembers them, and refers to them,…”

            I agree with you so far…

            “…the language used to make such references will be the equivalent of ‘feels’ or phenomenal consciousness.”

            And that’s where you lose me. Because nothing in physics or technology accounts for why there should be “something it is like” to be an information system working with representations, the language used to describe such systems notwithstanding.

            I’ve done a lot of software-based modeling of various systems, and those systems represent things, but nowhere have I ever seen anything suggesting phenomenal experience. I have no reason to believe there is anything it is like to be those systems.

            Of course, no system we’ve ever created comes anywhere close to the brain, so maybe there is a threshold of some kind when a system gets complex enough. Maybe there is something it is like to be a sufficiently complex system. We can’t know until either our physics or technology allows us to test such a proposition.

            (I’ll add that I’ve always thought it was possible in a sufficiently complex physical system — one that is essentially isomorphic to the brain. But I don’t see it ever happening in a computational one.)

            P.S. I’ve been meaning to ask… what’s with the trailing asterisk? Is it meant to be a signature of some kind?


          2. “… nothing in physics or technology explains how there can be something it is like …”
            Wyrd, aside from the I-wish-it-would-go-away ‘what-it’s-likeness’, the implementation of consciousness is biological, and Neurobiology is most likely the correct approach for finding the explanation.


        3. The process of representation can involve conceptualization in that a single representation can be a combination of other representations. So a single representation might involve a combination such as (affordance:food(food-type:fruit)+projectile, size:hand-size, color:green, name:”apple”, etc.). This representation would be “like” another which shares some but not all of its sub-concepts.

          But not all representations will be equal. Some will be flat, not made of combinations. For these, the question of “what it’s like” isn’t useful, because it’s not “like” anything else. The experiences of bacteria, and Mike’s “reflexes”, and I expect your computer programs, will be like this. It’s not that these representations aren’t like anything. They are like themselves. They’re just not like anything else. They’re unique.

          [yes, a signature. My first handle on BBS’s was Spydyr, which eventually got cut down to a spider emoji: *. I thought continuation would be good.]


          1. “It’s not that these representations aren’t like anything. They are like themselves. They’re just not like anything else. They’re unique.”

            Sure, but that’s not what I mean by “something it is like” — I was referring to the phrase due to Nagel (1974). It’s a reference to phenomenal experience.

            I’m fine with the idea that our mental content is a representation of reality (or, in some cases, a representation of unreality, as in a dream or optical illusion). That idea goes back at least to Kant.

            But I don’t see how it even begins to explain phenomenal experience.


        4. I think that’s the illusion, that there is anything else to explain. If you build a system that does the same kind of representing that the brain does, without regard to how it does it, that system will claim it has phenomenal experience, and for the same reasons.



          1. Well, yeah! As I said above, if it’s isomorphic to the brain, I think it is likely to work like the brain.

            But I don’t call that an “illusion” so much as just something we don’t understand, yet. (And, since we can’t yet test it, it might not be true after all.)

            For me the hard problem is just that, a hard problem. One we haven’t solved and don’t yet understand based on physics as we know it.


          2. A number of folks repeat “Physics” in pondering the implementation of consciousness, but I suggest that Biology is the appropriate domain. We don’t look to Physics to understand digestion.


        5. I have no doubt that AI is possible, in the most strict sense: Artificial. I see artificial intelligence in my neighbors, coworkers and lover every day, beside myself in worst, or most normal, moments. I’ve learned to be and remain amused by the insight, and not travail over the contests it seems to inspire.

          In the end, I may be glad to reply with the ancient Poets in this regard, and quote a famous Latin in this respect:

          No more bemoaned the wars of Dark and Light
          Nor proud deny the factions of my care;
          For all this folly, I cannot but own
          In every way, but for my faith to bear.

          (Quintius Flaccus?)


      2. The Hard Problem is rooted in mind-body dualism and I suspect most people believe that their thoughts are something ghostly—their soul—and much different in character than their bodily feelings. Religious upbringing often solidifies that dualist impression.

        I didn’t think about consciousness as a child but I’ve always been attracted to unconsciousness … 😉


        1. I agree the hard problem is rooted in dualism. I actually think discussion of consciousness, indeed most concepts of consciousness itself, are hopelessly tangled up with lingering intuitions of dualism, even among people who intellectually reject dualism.


        2. Or perhaps, it’s unconsciousness that is attracted to consciousness, like a magnet to its polar Complement…spooky action at a distance? We see in part, and understand in part, peering through a shady glass but dimly perceiving our objects, even when, or especially when, peering upon our Dual self, if THAT what it Be, or upon the Shadow of our waking reminiscence. Selah*


    1. One philosopher wondered if illusionists weren’t zombies who don’t actually have phenomenal consciousness, and therefore can’t see what the big deal is. As someone near the illusionist camp, I think we definitely do see it; we just don’t think our intuitions, without corroborating evidence, should be trusted.


  5. In my mid teens at school I wrote a poetic English essay about the problem of knowing things in themselves. Basically, we see into things not by cutting new surfaces into them, but by watching them move. It was primitive stuff (how we abstract from motion was not addressed). And it was inspired by reading SciFi. Now, thinking that robots see red things as red, like we do, is an obvious anthropomorphism. Thinking that any machine would just be activated by its detectors, like a burglar alarm going off, and then thinking that a robot is just a machine, simply requires a bit of thinking. Realizing that other people may well not see red things just the way one does oneself is a bit more sophisticated. And thinking that if we are biochemical machines, then we would no more see red things than a robot would, is like a line from a movie. Perhaps such thoughts are common knowledge, like quotes from Shakespeare.


    1. Thanks Martin. I think you’re right about each of those points being progressively more sophisticated.

      I generally find Shakespeare quotes incomprehensible, but any quotes from any classic art that indicated a concept of phenomenal consciousness, at least before philosophers started writing about it, would be extremely interesting.


  6. “What do you think? Did you ponder consciousness as a child? Or before you were interested in philosophy? In other words, before you read it was a problem, did you actually perceive the problem?”

    Oh boy…here’s a can of worms! You bet, I thought about the Perception of Self-Consciousness when very young…at least by 5yrs old. Whether this amounts to a Philosophic Awareness of the implications, imagined or real, I’ll leave others to judge, but goes like this:

    Stop the Train, I want off!…the world at large is Other than ME…I am ‘watching’ the ‘things’ about me, trailing me or being trailed by my interests. Many are near, more are afar…some leaving, some coming, some still. I am sight…I see…I am the Eye of Awareness, naked to the world as it flows into my Vision…I turn this way, then that way, then another, orchestrating and being orchestrated in a dance, and all things a part-ner in this dance. I cannot stop, I cannot seize the day nor make the Sun set still at noon, but I am in IT, of IT, through IT always, seeing and unseeing, awake and asleep. How am I the Eye both seeing and that which is seen? This is sooo weird…can it be real as I am real…or am I real as it is real?…This is sooo weird.

    Something like that.


      1. As to the “can of worms,” I have long entertained Memory of earlier events, some of which I only recovered after years of reflection upon what has gone before, as if opening a drawer to familiar thoughts I could know as my own, and some which resided in my Memory as a Dream State from the very earliest of times…as when just rolling over on the bed was a Herculean exertion, beside the wide-eyed wonder of being conscious of the wall spider I might like to taste.

        I remember crawling also, and just learning to NOT cry over spilled milk. (Big boys don’t cry, ya know; and I had a big brother). I don’t believe that I ever would have maintained nor recovered these very early memories were it not for the isolation of those first five years of childhood, living like a hermit when the brother wasn’t around, and escaping great dangers, sometimes not unscathed, that have ended many an innocent career by then. Anecdotal Evidence, withal.

        c’est la vie


    1. Same here Durandus, but I learned to ascribe all of that to my Asperger’s. … 😉

      Is “normal” consciousness on the autism spectrum?


      1. Sure it is, but in minute moments, barely grasped but often hinted at in the general milieu of common men and women. They just don’t dare to be Weird, like the truly animated.


  7. I don’t know if as a child I would’ve thought of consciousness as a problem, but I did have a few moments of being aware of my own ‘subjective’ experience. In retrospect, I would call this experience a sense of wonder at my own volition. It felt baffling, mysterious. I came across a fantastic passage in Ian McEwan’s novel, Atonement, that, to my surprise, described the exact same experience I had as a child—finger and all. After seeing this in print, I wondered if maybe my ponderings as a child were not unusual. Here it is… I would exclude the bit about the soul from my own experience, but the part about trying to catch myself out is so spot on:

    “She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it? There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self — was it her soul? — which took the decision to cease pretending, and gave the final command.”


    1. That’s an interesting aspect to ponder. It seems like the simplest thing, but it reminds me that there are brain pathologies that play havoc with our sense of bodily self. People with certain brain lesions can become convinced that their arm is not really their arm, to the extent of seeking amputation. Or they can become convinced that they’re not alive. V.S. Ramachandran covers a lot of these bizarre cases in his book, ‘The Tell-Tale Brain’.

      I forgot about it until now, but one thing I did ponder as a boy was the inner monologue in my head. I thought I could remember a time when I didn’t have it (although childhood memories are the least reliable of all memories). I wondered if anyone else had the monologue going on, or was I just weird.

      I didn’t find out for sure that most people do have it until I was an adult, when I was looking into speed reading, with the need to bypass that inner speech. I remember telling the speed reading teacher that I didn’t know if I actually wanted to silence my inner voice. I see it as a friend.


      1. Is that how speed reading is done? I think turning off the inner monologue would ruin novels, but I guess it makes sense for rapidly absorbing non-fiction, depending on what it is.

        I wonder how someone can become convinced that they’re not alive? Do they not believe they’re experiencing some sort of afterlife at least?

        As for inner monologue, I never questioned that other people had it too, but then again, I watched a lot of TV, and voice-overs were fairly common (Wonder Years, for instance…If only my inner voice were a much wiser future version of me narrating my thoughts…)

        As a child, you never did the finger test? Or some version thereof?


        1. On speed reading, that’s part of it. By bypassing the inner monologue, you’re able to take in information faster. Another big aspect is compromising on the level of comprehension you get, which makes sense but clues me in that there’s no free lunch. Personally, I hated the reading experience (fiction or non-fiction) when trying to speed read, and decided that it just wasn’t for me.

          On someone being convinced they’re not alive, it’s called Cotard’s syndrome. It’s tempting to see it as possibly just psychological, but Ramachandran notes that patients with the condition are often indifferent to pain. They feel the sensation, but aren’t distressed by it, which implies damage to connections along pathways between the insula and anterior cingulate cortex.

          On the finger test, I can’t recall. I do remember once wondering what was going on with habits. Why were they so hard to break? What part of me was deciding to engage in them? (I’ve now read enough neuroscience to have insights into them, but back then they seemed mysterious and vexing.)

          I’m also reminded of a time, as a teenager, when I was listening to a preacher on TV, who said if I just quieted my mind, God would speak to me. So I tried to quiet my mind as much as possible, but every time I possibly detected God’s voice, I couldn’t rule out that it was just my own voice; indeed, God’s voice and attitude seemed suspiciously like my father’s. I sometimes wonder what might have happened if I hadn’t been concerned with ruling out my own voice.


    2. I can relate to that Ian McEwan passage as well Tina. At the age of seven or so it seems to me that I’d look at my hand and marvel at how I could control it, and yet have absolutely no idea how I was doing it. This made no sense to me. Surely if I could do something, then I ought to understand how!

      My adult explanation goes like this: My brain is essentially a neuron based computer which thus operates my body, somewhat like a standard computer operates a robot. The big difference however is that my own brain produces sentience as well, or a punishment/reward dynamic (which I don’t consider any of our robots to produce). I view sentience as subjective experience itself, or the essence of primary consciousness. Apparently evolution added senses like vision to the auxiliary experiencing entity, as well as memory of past experiences, to create a functional second computer which is outputted by the first. Thus it now makes sense to me why I can move my hand without understanding any associated mechanics — apparently I’m being serviced by an amazingly advanced machine in my head.

      And why might evolution have created sentient entities like us? I think because we thus provide the system as a whole with purpose driven function — here environments which are more “open” may be dealt with more effectively. (Note that our robots only function effectively in more “closed” environments, such as the game of Chess.) Essentially the vast supercomputer brain does all the work, while the tiny sentience based computer takes the credit. Or perhaps it’s mainly the adult variety which tends to take credit, that is once childhood curiosities vanish.


      1. One “adult variety” might exist for the concert pianist (or equivalent thereof). I know from my very minimal experience from trying to teach myself a Chopin nocturne that once you’ve got something down in your muscle memory, you’d better not let yourself wonder, “How am I doing this?” Or else you won’t be able to do it.


    3. Yes good example Tina. The model here is that the conscious computer teaches the non-conscious computer to play a Chopin nocturne by means of repetition. Thus a person with less experience could easily screw it up with extraneous thoughts like “How the hell am I doing this?” Conversely an accomplished pianist should have a given piece down so well that he or she can let the vast supercomputer do the playing while consciously thinking about all sorts of mundane things at the same time. Given that many of us have 20+ years of daily experience driving, I also consider this to be an effective example. Though driving can be fatal, it becomes so reflexive for the non-conscious computer that we’re often able to do so while letting our minds wander.

      Anyway my point is that I’ve developed some brain architecture (or a conceptual drawing rather than neuroscientific engineering) from which to address our mutual childhood curiosity. Here the conscious entity may be envisioned as a tiny computer which does less than one thousandth of one percent as many calculations as the vast neuron based supercomputer by which it is produced. While the supercomputer should take in countless forms of input and process them for countless forms of output, the tiny conscious computer should simply constitute what we perceive of existence.

      And how specifically does consciousness function? In the end I think it boils down to sentience. I consider this as the motivation which drives conscious function, somewhat like electricity drives technological computers, or neuron dynamics drive brains. Then secondly there are informational senses such as hearing and touch. Thirdly there is the input of degraded past conscious experiences, or memory. Theoretically the conscious processor interprets these three varieties of input and constructs scenarios from which to best promote its sentience based interests, and does so by means of just one form of output, or muscle function. (And as mentioned earlier, from this model it’s not really us operating muscles. The non-conscious supercomputer takes these conscious decisions as input from which to get those muscles functioning appropriately.)

      If you have any questions about my psychology based brain architecture, please do ask…


      1. Well, I’m not sure what you mean by sentience as a motivation for conscious function.

Maybe your driving example is better than the piano example. It’s hard to imagine someone playing a difficult piece while thinking about what they plan to pick up at the grocery store. Even if you’ve got it down pat, there’s still a pretty high level of concentration going on, albeit concentration of a different sort. But it’s not at all far-fetched to imagine some mundane thought process happening while driving.


        1. (Sorry Mike, scratch the last again if you get time.)
          Let’s see if I can provide a better sketch of my model.

          Apparently virtually everyone in the business considers brains to function as neuron based computers. I agree. But while others try to integrate consciousness directly into the brain for a single form of function, I take a different approach. I consider consciousness to exist as a distinctly different virtual computer which is produced by the non-conscious brain. While the vast non-conscious computer is good at doing non-conscious things, the conscious computer is purpose driven, or seeks to feel as good as it possibly can from moment to moment. Thus while non-conscious entities seem to fail under more “open” circumstances, conscious entities may be able to succeed given their teleological form of function. This is to say that personal purpose (or sentience), permits them to go beyond just programming instructions. While a tree may not need to go beyond its genetic programming, fish probably live in open enough environments to also require a conscious form of function.

          Try considering the thoughts that you have as the processing element for this virtual second computer, with a motivational input (sentience like pain), an informational input (senses like sight), and a recall input (memory of past conscious experiences). Thus your thought will interpret such inputs in the quest to feel better in general, with your only such output as muscle function. While an experienced driver may drive while thinking about all sorts of things at the same time, one shouldn’t be able to silently read a book while having a telephone conversation. Some things can’t be passed off to a non-conscious computer.
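The three-inputs, one-output scheme described above can be sketched as a toy program. To be clear, this is only a loose illustration of the proposed architecture, not a working model of it; every class name, input value, and decision rule here is invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ConsciousProcessor:
    """Toy sketch: 'thought' as a processor with three inputs
    (affect, senses, memory) and one output (a muscle command)."""
    memory: list = field(default_factory=list)  # recall input: past experiences

    def step(self, affect: float, senses: dict) -> str:
        """Interpret the inputs and pick whichever option is expected
        to feel best (here reduced to a trivial scoring rule)."""
        options = {"act": affect + senses.get("opportunity", 0.0),
                   "rest": -affect}
        choice = max(options, key=options.get)
        self.memory.append((affect, senses, choice))  # stored, to degrade later
        return choice  # sole output: a muscle command

proc = ConsciousProcessor()
print(proc.step(affect=0.8, senses={"opportunity": 0.5}))  # -> act
```

Everything interesting in the actual proposal (how affect motivates, how scenarios are constructed) is of course hidden inside that one scoring line; the sketch only fixes the shape of the claimed inputs and output.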

          So back to piano, I suspect that conscious practice helps free a person from concentrating quite as much — here the supercomputer should gain more proficiency. But of course this is a computer which never experiences beauty. A good pianist should be able to add conscious interpretations to their work when moved by how a given song makes them feel. So perhaps you’re right that driving is a better example. Human speech would be another. The supercomputer should need to master some incredible muscular dexterity in order for us to speak as we do. Usually we don’t need to consciously think about how to pronounce our words, we just talk. Babies on the other hand should need to learn to speak by means of conscious practice.

          Any other questions or comments? I’m curious if this broad conception of conscious function seems consistent with your own experiences. Or not.


          1. Thanks Stephen, that’s a great article. I’ve found Searle’s perspective encouraging where I’ve read him, though haven’t yet looked as hard as I should. If you’ll notice though, he’s actually quite clear in it that he does consider the brain to function as a neuron based computer. Apparently the reason that so many in the field oppose his position is that they consider it most effective to conceptualize consciousness in terms of information alone — “brain software”. I instead take the side that you and he do.

            Actually in recent months I’ve been having some fun with a different version of his Chinese room thought experiment. Imagine if Searle were to accept written notes (and I guess they could even be in English for this one), and then look up responses from an extensive manual in order to provide output notes that reflect what our brains do to cause (get this!) “thumb pain”. Can you imagine the stuff that you feel when your thumb gets whacked, being produced by a machine that does nothing more than accept certain symbol laden paper notes, and then processes them to spit out other such notes? Apparently this would be possible if the only thing that our brains do to cause thumb pain, is process input information in the proper way. Like Searle, instead I suspect that dedicated mechanisms exist in my head which cause thumb pain. Thus brain processing would animate such mechanisms, and somewhat like such processing animates heart function, or even like normal computer processing animates computer screens.

            My most recent discussion about this can be found here if you’re interested:


          2. Sorry it’s taken me so long to reply!

In general I’d say what you’re talking about with conscious effort does jibe with what I’ve experienced. But as far as consciousness goes, I don’t think of it narrowly as a computer the way a lot of people do, though I understand why that works as a metaphor for the brain.


          3. Tina,
            It’s good to hear that this roughly presented model jibes with your own experience. Actually beyond me I don’t know of anyone who thinks of consciousness as a distinct form of computer which is produced by the brain form of computer. Generally people seem to think of brain function as computational as a whole, or a system where consciousness exists as one potential process under the domain of a unified computer.

            For example below you might have noticed James of Seattle’s suggestion for me to instead describe consciousness in terms of a higher order variety of software that emerges from a lower order variety. If push comes to shove I may have to resort to such a scheme some day, though I suspect it would sacrifice quite a bit of my message. For example standard computers already use higher and lower varieties of software, though it’s all essentially microchip based function. I’m instead saying that the neuron based computer in my head produces a fundamentally different variety of computer that does not function on the basis of neurons. This tiny second computer (consciousness), is instead driven by a unique need to feel good and not feel bad, or what I consider to be reality’s most amazing stuff.

Furthermore Mike talks about consciousness emerging from non-conscious survival circuits — as if the entire machine is working in concert as a single computer. I present a different message, and it’s psychological rather than neuroscience based. I believe that we’ll need to get a better grasp of our psychological function in order for neuroscientific findings to be interpreted more effectively. The analogy would be that engineers are unable to design great buildings on their own. Instead we use architects who commonly know little about engineering, and they provide these specialists with guidance.

Try not to worry about when or if you get back to me for this or any other conversation. I simply will not be annoyed. Actually it would only bother me if I thought you were concerned, so don’t be.

            I think it was a couple of springs ago that I went pretty far into your own blog, and thus I do feel that I know you pretty well in that capacity. Since I don’t post myself this is one sided of course, but so it goes. Furthermore in years past I recall having long conversations with Mike, and apparently you would politely wait until it petered out to start your own such conversations with him. Though neither of us would have minded, you do have some credit with me.


          4. I had a long response that I just now accidentally deleted. So this will have to be a shortened version of that…too bad.

            Keep in mind that I don’t know anything about computers and I’m taking your two-computer theory as a metaphor for the conscious-unconscious (or subconscious). So whether or not hardware or software is involved is beyond me.

            I tend not to view consciousness as a computer, except in these discussions for the sake of the discussion. And what I have in mind is pretty much limited to the user-end.

            “I present a different message, and it’s psychological rather than neuroscience based.”

            Well, that’s a relief to me!

“I believe that we’ll need to get a better grasp of our psychological function in order for neuroscientific findings to be interpreted more effectively.”

That makes sense to me. Ultimately, I think it has to.

I, for one, would like to see an end to this view of consciousness as simply a brain in a vat, a neuron-based information processor which has no body or environment (outside of, perhaps, its own simulation). This is where the computer metaphor fails for me.

            And thanks for understanding, and for checking out my blog! I appreciate it. Someday, I suppose, I’ll get back to posting again. I hope.


          5. Yes too bad about your deleted reply Tina. It does sound like you’re interested however, so I’ll continue. And when I do inevitably lose your interest, no worries. (I like to write in MS Word for a number of reasons, one of which is that my work doesn’t seem to get deleted.)

            I don’t know too much about computers either. But yes, my dual computers model of brain function is a metaphor for the non-conscious and conscious varieties of function. The theory is that the vast non-conscious brain produces a tiny auxiliary computer, or the user end through which we perceive existence.

            You have no use for vatted brains? Nor does nature, I think.

            So you’re more interested in psychology than neuroscience? Me too. My perception is that philosophy is also dear to you. That’s where my own project essentially begins. I believe that our mental and behavioral sciences remain soft largely because they do not yet have generally accepted principles of philosophy from which to build. I propose one principle of metaphysics, two principles of epistemology, and one principle of axiology (or essentially ethics and aesthetics). It’s not that I’d end the “art” of traditional philosophy. I love that stuff too! But in addition I believe that a respectable community of philosophers must emerge whose only mission is to develop agreed upon principles from which to better found the institution of science. Otherwise I believe that our soft sciences shall continue to suffer.

            The most fundamental hurdle, I think, resides in the institution of morality. Consider the following scenario:

            Life evolved. Then non-conscious brains evolved to better operate more mobile varieties. But just as our non-conscious robots suck at doing things that they weren’t specifically programmed to do, these creatures would have sucked as well. They would have had senses just as our robots do, though pre-programmed responses would limit their function.

            I suspect that evolution added sentience, or something not inherently functional. This “good/bad” stuff progressively became functional as the conscious form of computer however, and so helped life deal with novel circumstances that it wasn’t otherwise programmed to. This purpose based form of computation would evolve into the tiny computer by which you and I experience existence, facilitated through our vast supercomputer brains.

            Here’s my point however. We’re perceptive enough to realize that happiness is all that matters to us, and also have enough theory of mind skills to presume this of others. Therefore instead of directly admitting “My own happiness is all that matters to me”, we are socially pressured to follow the paradigm of morality, or to believe that good people are not selfish, while bad people are.

Just as the science of physics is neither moral nor immoral, I believe that we’ll need an amoral exploration of human function which is thus able to formally admit that “happiness” constitutes the value of existing for anything that’s sentient. With such founding amoral theory from which to describe our function, our soft sciences should finally begin hardening.

            So that’s my project in general.


          6. I’m not sure I followed all that you said, but from what I gather, you want a science of ethics? Or a scientific ethics?

“…instead of directly admitting ‘My own happiness is all that matters to me’, we are socially pressured to follow the paradigm of morality, or to believe that good people are not selfish, while bad people are.”

            You sound like Thrasymachus in the Republic. 🙂

            I’m not sure about that. I think, as social creatures, our happiness derives in part from being social (in general), which means I care about other people’s happiness too (more or less)…and where my selfish pursuit of happiness or pleasure ends and begins is really a blurry line.

            My husband’s book might interest you:


          7. It’s not so much an independent science of ethics that I seek Tina, but rather that I’d like standard psychology to gain such an element in order for the field to improve as a whole. If the human happens to be purpose driven, how might we understand its function without formally reducing the nature of its purpose to a manageable idea? And to be clear, I consider the purpose of anything sentient, whether individual or social, to feel as good as it possibly can, for as long as it possibly can.

            I suspect that the social tool of morality prevents psychology from formally grasping this basic element of our nature, therefore rendering it a primitive science. Regardless of that position however, note that the ideas of Freud have failed, along with B. F. Skinner’s behaviorism, and on, and on, and on. In terms of basic theory, the field of psychology remains wide open.

            One popular modern proposal, for example, is Lisa Feldman Barrett’s theory of constructed emotion. Given that neural correlates for standard emotions have not yet been found, she proposes that emotions (like “sadness”) need to be taught to us by others (verbally no less) in order to actually be experienced. Thus she concludes, for example, that babies cannot be “afraid” since they’ve not yet been (orally!) taught to feel such an emotion. Though it’s quite clear to me that basic human emotions should be felt regardless of what has or hasn’t been taught to us by others, in the current academic environment this theory seems to be doing reasonably well. I consider this to be a huge red flag. Somehow I suspect that your husband feels similarly.

            Mind you that I currently know nothing about his book other than that short Amazon description. In hindsight it seems to me that 1993 would have been a horrible time to propose psychotherapeutic strategies — apparently he wrote it at the end of that particular era. As I understand it few psychiatrists today provide traditional psychology based advice to their patients — instead substance based medication has become the standard. (I’d also love to know your husband’s perspective on this turn of events, and yours if different.)

Personally I’m not as pessimistic here as you might suspect. I get the sense that psychotherapy wasn’t helping people nearly as much as its high cost warranted. Of course this conforms with my position that without broad purpose based theory, psychotherapy has been a doomed endeavor. But once the field of psychology is able to overcome this hurdle, I believe that newly effective psychotherapeutic methods should find their place alongside modern psychiatric drug administration.

            It’s interesting that you bring up Thrasymachus. Last semester I helped my son (then a freshman in high school) work through the Republic. Yes it was a bit unsettling that I would often identify better with Plato’s goat than Plato himself! There are some important nuances to address about this however.

            I certainly agree that as social creatures our happiness tends to be extremely dependent upon others. “Care” bonds us with them by causing us to somewhat feel what we perceive them to feel. Furthermore there is “empathy”, or a theory of mind ability to grasp what others are thinking. Clearly we’re highly affected by what we suspect others think about us, which can get complicated. It’s not merely that it feels good for us to perceive that others think well of us. It also feels good to feel respected by others, and sometimes even without their good will. For example when we perceive being wronged, getting our revenge can feel very good to us even given increased ill will — that’s actually natural with revenge.

So in care and empathy we have important nuances under my proposal that we’re all self interested products of our circumstances in the end. This is to say that personal happiness should constitute all that matters to you personally, though your perceptions of the state of others should affect your state of happiness as well. Witnessing the abuse of a child should distress you directly given your natural care for such a being. Furthermore by understanding that there are plenty of conditions in our society today which should naturally foster child abuse, this could motivate you to try to help improve this situation. My point is that regardless of how altruistic a given person may seem to be, it should all reduce back to his or her personal happiness in the end, or the fuel/purpose which I consider to drive the conscious form of “computer”.

            So now let’s get into what I propose as our “morality paradigm”, or the social tool which I perceive to have prevented the field of psychology from formally acknowledging us to all be self interested products of our circumstances. If my own happiness is all that matters to me in the end, and likewise for everyone else, then does it help me to display my selfishness to others? No, that would tend to vilify me given that other people have their own such interests to look after. Thus instead it should be best for me to credibly portray myself in altruistic ways in order to make others feel more comfortable trusting me.

            That, in a nutshell, is what I see as our morality paradigm. I believe that social pressure to portray selflessness has been so strong, that scientists in the field of psychology have not yet been permitted to formally theorize and test the idea that we’re all self interested products of our circumstances. Plato’s goat Thrasymachus is but an ancient Greek display of this paradigm, though modern science seems not to have dented it in the slightest. Just as other branches of science are permitted to function amorally, I believe that psychology must be permitted to go this way as well. And as we thus become better able to understand our nature, I believe that we’ll have far more ability to develop more productive remedies for our countless ills.


      2. Eric, I wonder if you would consider changing your metaphor of the two computers. The thing is, most people see “computer” as applying to hardware (see Searle, and Stephen), and so when you say one computer creates another, even if the other is virtual, people expect hardware-type functionality of the second computer. I think a software analogy would serve you better, specifically, computer programming languages. Your unconscious computer would be the equivalent of machine language (0’s and 1’s), or possibly assembler, which is just mnemonic codes for machine language. You could then say Consciousness requires coding in a higher level language, such as Python. Then, all of the low-level unconscious operations happen with machine level code. When consciously learning a task, like playing a particular Chopin piece, or driving, you have to use the high level language to organize the movements, but practicing the same movements over and over essentially compiles the process to machine level code, and eventually the high level programming can be skipped altogether.
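The compile-to-habit analogy above can be sketched in Python. This is purely illustrative: the class, the practice threshold, and the idea of caching a "compiled" routine are all invented stand-ins for the suggestion that repetition moves a skill from deliberate high-level processing to automatic low-level execution:

```python
class Skill:
    """Toy sketch of the analogy: deliberate ('high-level') execution
    gets 'compiled' into an automatic path after enough practice."""

    def __init__(self, steps, practice_threshold=3):
        self.steps = steps                  # deliberate, step-by-step instructions
        self.threshold = practice_threshold # repetitions before habituation
        self.repetitions = 0
        self.compiled = None                # the 'machine-level' routine, once formed

    def perform(self):
        if self.compiled is not None:       # habitual path: no deliberation needed
            return self.compiled
        result = "-".join(self.steps)       # slow, consciously sequenced execution
        self.repetitions += 1
        if self.repetitions >= self.threshold:
            self.compiled = result          # practice 'compiles' the skill
        return result

nocturne = Skill(["read", "finger", "listen"])
for _ in range(4):
    nocturne.perform()
print(nocturne.compiled is not None)  # -> True: the piece has become automatic
```

The design choice worth noting is that both paths produce the same behavior; only the cost of producing it changes, which matches the observation that practiced pianists and drivers act identically while attending to something else.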



        1. Actually James, as far as I can tell I’m in agreement with Searle. As one of Searle’s supporters I’m hoping that Stephen will at some point gain enough interest in my brain architecture to get a pretty good grasp of it. But I also haven’t given up on you. I appreciate how you’ve converted your conception of my model into a software based version that uses higher and lower levels of coding languages. Given the interest that people seem to have in software based notions of consciousness, that might be helpful. Sometimes however things just aren’t that convenient. My model does not concern a non-conscious computer which causes software to create consciousness, but rather this computer creating a different variety of computer that functions in a fundamentally different way.

Consider your “thought” itself, or roughly the words and whatnot that you think in daily life. I consider this sort of thing to exist as the processor for a distinctly different variety of computer (whether or not language exists for a given entity). This element will both interpret inputs (affects, senses, and memories) and construct scenarios (or options), in the quest to do what makes you feel as good as you can from moment to moment.

          If I continue calling this dynamic “a computer”, I understand how people will naturally ask where it is, or say that I’m fooling everyone because any computer that can’t be touched (or whatever), will not actually exist. Unfortunately for me they’ll have to get over this hang up. As I propose it this exists as a distinct computer which functions on the basis of, not neurons, and not electricity, but rather the most amazing stuff in the universe, or affect. Like the other two this stuff doesn’t inherently compute, though apparently evolution used it to build something that shouldn’t otherwise exist, or an agency based (or teleological) variety of computer.


1. I’m sorry that you’re not interested Stephen. You do still support the work of John Searle though, don’t you? Each of us merely uses the “computer” term for brain function as an analogy, not to imply that it’s buildable through humanly fabricated instruments. For example the heart is a machine that’s controlled by the brain. Similarly one of our computers might be used to control the function of a critical water pump.

            Many people seem to think that they can reverse engineer the brain by means of neuroscience. Well maybe… though it seems to me that we’ll need effective architectural drawings from which to guide our engineering efforts. My own ideas concern the architecture side.

