Is consciousness really a problem?

The Journal of Consciousness Studies has an issue out on the meta-problem of consciousness.  (Unfortunately, it’s paywalled, so you’ll need a subscription, or access to a school network that has one.)

As a reminder, there’s the hard problem of consciousness, coined by David Chalmers in 1995, which is the question of why or how we have conscious experience, or as described by others, how conscious experience “arises” from physical systems.

Then there’s the meta-problem, coined more recently by Chalmers: why do we think there is a hard problem?  The meta-problem is an issue long identified by people in the illusionist camp, those who see phenomenal consciousness as an illusion, a mistaken concept.

The JCS issue has papers from a number of illusionists, including the usual suspects, Daniel Dennett and Keith Frankish, discussing the virtues of the illusionist outlook.  It also has an entry by Michael Graziano discussing his attention-schema theory of consciousness.  Graziano gave a description of the meta-problem in his book Consciousness and the Social Brain back in 2013, although he didn’t call it that.  (Graziano has a new book out, apparently an updated look at his theory, which I might have to read at some point.)

Picking through the entries, two in particular caught my attention.  One, by Hakwan Lau and Matthias Michel (whose work I’ve highlighted a lot lately), looks at the meta-problem from a socio-historical perspective.  The main gist is that on some subjects, such as consciousness, we psych ourselves out, collectively convincing ourselves that it is unsolvable.

This has the perverse effect of making scientists reluctant to work on it, while attracting senior scientists, often from other fields, who at the end of their careers are interested in making a revolutionary breakthrough, which often leads to outlandish ideas.  This in turn sets up a feedback loop leading to a breakdown of peer review, credibility, and funding.

The solution, Lau and Michel contend, is to work on incremental gains, attempting to build up empirical evidence.  Gradually this will diminish many of the mysteries and show that the problems are not as insoluble as they might appear.

Of course, many will never accept the theories produced by such an approach.  But as Lau and Michel point out, this is often true in science: Isaac Newton’s contemporaries were uneasy about the action at a distance implied by Newtonian gravity, and Einstein was reluctant to accept the results of quantum mechanics.  In both cases, the old guard was eventually replaced by a newer generation of scientists who didn’t find the new theories objectionable.

I think there’s a lot to this view.  But I also think the incremental approach has been happening in psychology and neuroscience for a long time.  I’ve read plenty of neuroscience material consisting of studies of aspects of consciousness, but which studiously avoided the word “consciousness”, focusing instead on specific capabilities and the neural wiring underpinning them.  Much of this material is what personally convinced me that the consciousness problem is overstated.

The other paper that caught my eye has a similar theme.  Justin Sytsma and Eyuphan Ozdemir challenge Chalmers’ contention that perception of the hard problem is widespread among the lay public, that it’s part of our folk psychology.  (There’s a free preprint available.)

They provide evidence showing that people are only slightly less likely to attribute phenomenal experiences like seeing red to a robot as opposed to a human.  The data do show that they’re less likely to attribute pain to the robot, but not as much as might be implied by a widespread feeling that phenomenal consciousness is uniquely human or biological.

In other words, according to the authors, most of the lay public don’t appear to hold a concept of the philosophical version of phenomenal consciousness, and so most of them don’t have the intuitive concern about the hard problem.

If true, I can’t say I find it particularly surprising.  Keith Frankish recently asked on Twitter if people recalled thinking about consciousness as a child.  I didn’t respond, mostly because I wasn’t sure what I remembered thinking about consciousness as a child.  I certainly wouldn’t have used the word “consciousness”, but I tried to recall whether I might have pondered it in some pre-terminological fashion.

But the truth is, prior to about ten years ago, I didn’t really give consciousness much thought.  The word “conscious” to me meant little more than being awake.  (Even back in my younger days, when I was a dualist.)  I suspect for a lot of people, that’s about the limit of what they consider about it.  Most of them have no problem conceiving of a robot as conscious.

What do you think?  Did you ponder consciousness as a child?  Or before you were interested in philosophy?  In other words, before you read it was a problem, did you actually perceive the problem?

This entry was posted in Zeitgeist. Bookmark the permalink.

51 Responses to Is consciousness really a problem?

  1. paultorek says:

    When I was maybe 7 or so (I am really bad with dates), it occurred to me that my whole life might be a dream, and maybe I was still a little baby in a crib. I think that counts as pondering consciousness. Although not specifically about robots and brains, I was playing the Descartes card pretty hard there (not that I had heard of him yet).

    Liked by 1 person

    • Thanks. I agree. It does count as pondering it. In truth, I probably had similar thoughts. But the only thing I can remember, after seeing a computer in action, but before learning any programming, was a creepy feeling about any possible mind being in there.

      Like

  2. keithnoback says:

    I remember as a kid, wanting to be a whale. I thought it would be interesting to see what it was like.
    I still don’t think I was wrong to expect that I could know what it was like to, say, bite a seal in half and swallow it. It would be like biting a hot dog in half and swallowing it. In other words, the two are functionally, and so psychologically, analogous.
    I think I am getting Dretske’s take on such things right here: seal swallowing and hot dog swallowing have similar aspectual shapes, and if I have any right to proclaim that I know what it is like to swallow a hot dog, then I have the same grounds to claim that I know what it is like to swallow a seal.
    But having the seal swallowing experience itself is something else. Even the whale could not say what swallowing a seal was ‘like’ in that sense. Neither of us could ever be a child or whale as a whale or child, dining on hot dogs and seals and then switch around, contextual references intact, to do the converse. We would always be some iteration of ‘child-as-whale’ or vice versa, never having the true experience of the other.
    I understood that instinctively as a child, as I think most children do.
    That means I didn’t really see it as a problem, in the sense that someone like Nagel might see it; as a mystery regarding how such a situation might ‘arise’ in an otherwise publicly accessible world.
    Not a problem, more like – a condition of participation.

    Liked by 2 people

  3. ‘The advantage of the emotions is that they lead us astray, and the advantage of science is that it is not emotional’ – Oscar Wilde (The Picture of Dorian Gray)
    ‘Consciousness’ to my way of thinking is the be-all and end-all, as I alluded to in my article, which no one here read: ‘Is it too early to rule out the Copenhagen Classic interpretation’.

    Like

  4. Wyrd Smythe says:

    “In other words, according to the authors, most of the lay public don’t appear to hold a concept of the philosophical version of phenomenal consciousness, and so most of them don’t have the intuitive concern about the hard problem.”

    My reaction here is basically, “Duh!” (Compare this idea with the topic of your previous post. 😉 )

    Most of the lay public has no clue why reconciling QFT and GR is so hard (nor even what those amount to). There are many technical topics in many areas where this is true. One has to study a topic to truly understand it. (There is probably something of a Dunning-Kruger effect here. The idea being it takes a certain amount of education on a topic to even understand the depth of your ignorance on it.)

    “Did you ponder consciousness as a child?”

    I think probably in grade school, and certainly by high school. I suspect that has mostly to do with discovering, and taking to, science fiction at a very young age. It comes up in a number of stories, Asimov’s robot stories, for instance.

    I ran into Descartes fairly young, too, again probably due to SF.

    Liked by 1 person

    • That’s true, but it’s also a common assertion that the hard problem represents a deep universal intuition. An intuition that we only develop after reading the literature, and that no scientific evidence can substantiate, looks a lot less compelling than one everyone supposedly has. That said, I’m not entirely sure that the authors’ data isn’t an artifact of how they asked the questions.

      As I noted to Paul above, I did have a creepy feeling on seeing a computer work that maybe there was a mind there. What’s interesting is that this was after seeing tons of science fiction where of course machines had minds. That feeling faded as I learned more about computers. But it remains striking to me that I never pondered it in the way philosophers often assert that everyone ponders it.

      Like

      • Wyrd Smythe says:

        “That’s true, but it’s also a common assertion that the hard problem represents a deep universal intuition.”

        I don’t know that I’m at all on this “intuition” channel. As you say, a perception of the hard problem comes from reading the literature, and at that point, I’m not sure it’s right to disdain it as merely an intuition (which suggests “guess”).

        The bottom line (for me) is that there is unquestionably something it is like to be me. Say all you like about the “falsity” of qualia, but I experience them every second I’m awake (as well as in dreams — I had a real doozy last night).

        Nothing we currently understand explains how phenomenal experience can occur, but nothing is like a brain, and we don’t fully understand the brain. Given the clear and present reality of phenomenal experience, the intuition that something interesting seems to be happening seems cogent and coherent.

        If anything, perhaps the intuition that “there’s nothing to see here folks” is just as suspect. It can be read as a second round of Newtonian-Laplacean “the universe is just a clock” thinking that we now understand was wrong.

        “I’m not entirely sure that the authors’ data isn’t an artifact of how they asked the questions.”

        Heh, yeah, surveys. #saynomore

        “I did have a creepy feeling on seeing a computer work that maybe there was a mind there.”

        It seems very human, once one arrives at a theory of mind that recognizes other minds, to see mind in other things. Our distant ancestors saw it in many inanimate objects.

        I was into technology, especially electronics, since the early 1960s and worked with early crude systems (plus I understood how they worked), so I never saw mind in machines. It was only during the initial AI excitement that I began to wonder if they might be capable of it someday.

        By the 1990s or so, I was more-or-less accepting it was possible, and it’s only in the last decade or so that really thinking about it convinced me it wasn’t. At least not in the computational sense. I still think a Positronic brain might work.

        On the flip side, dogs were always a part of my life, more so as I got older, so I’ve always had a sense of mind regarding animals.

        Liked by 2 people

        • Nothing we currently understand explains how phenomenal experience can occur

          You mean besides the process of representation. A process of representation has all the hallmarks of phenomenal experience. Maybe the pattern/category represented constitutes the object of the “feeling”. Maybe when “redness” is represented and interpreted as such, that is the “feeling” of redness.

          *

          Like

          • Wyrd Smythe says:

            That seems awfully tautological to me in that representation seems to imply phenomenal experience without explaining it.

            To me, the hard problem (which I believe is, indeed, hard) is that nothing in physics or technology explains how there can be something it is like to be an information-processing system. No such system we know of (except brains) shows any sign of it.

            Like

        • Nothing we currently understand explains how phenomenal experience can occur

          But we can explain physically how representational processes occur, and we can explain that a system which uses such processes, and also remembers them, and refers to them, without knowing that they are “representational processes”, essentially will refer to the “represented objects” of those processes, and the language used to make such references will be the equivalent of “feels” or phenomenal consciousness. What else needs to be explained?

          *

          Like

          • Wyrd Smythe says:

            “But we can explain physically how representational processes occur, and we can explain that a system which uses such processes and also remembers them, and refers to them,…”

            I agree with you so far…

            “…the language used to make such references will be the equivalent of ‘feels’ or phenomenal consciousness.”

            And that’s where you lose me. Because nothing in physics or technology accounts for why there should be “something it is like” to be an information system working with representations, the language used to describe such systems notwithstanding.

            I’ve done a lot of software-based modeling of various systems, and those systems represent things, but nowhere have I ever seen anything suggesting phenomenal experience. I have no reason to believe there is anything it is like to be those systems.

            Of course, no system we’ve ever created comes anywhere close to the brain, so maybe there is a threshold of some kind when a system gets complex enough. Maybe there is something it is like to be a sufficiently complex system. We can’t know until either our physics or technology allows us to test such a proposition.

            (I’ll add that I’ve always thought it was possible in a sufficiently complex physical system — one that is essentially isomorphic to the brain. But I don’t see it ever happening in a computational one.)

            P.S. I’ve been meaning to ask… what’s with the trailing asterisk? Is it meant to be a signature of some kind?

            Like

          • Stephen Wysong says:

            “… nothing in physics or technology explains how there can be something it is like …”
    Wyrd, aside from the I-wish-it-would-go-away ‘what-it’s-likeness’, the implementation of consciousness is biological, and neurobiology is most likely the correct approach for finding the explanation.

            Like

        • The process of representation can involve conceptualization, in that a single representation can be a combination of other representations. So a single representation might involve a combination such as (affordance:food(food-type:fruit)+projectile, size:hand-size, color:green, name:”apple”, etc.). This representation would be “like” another which shares some but not all of its sub-concepts.

          But not all representations will be equal. Some will be flat, not made of combinations. For these, the question of “what it’s like” isn’t useful, because it’s not “like” anything else. The experiences of bacteria, and Mike’s “reflexes”, and I expect your computer programs, will be like this. It’s not that these representations aren’t like anything. They are like themselves. They’re just not like anything else. They’re unique.

          *
          [yes, a signature. My first handle on BBS’s was Spydyr, which eventually got cut down to a spider emoji: *. I thought continuation would be good.]

          Like

          • Wyrd Smythe says:

            “It’s not that these representations aren’t like anything. They are like themselves. They’re just not like anything else. They’re unique.”

            Sure, but that’s not what I mean by “something it is like” — I was referring to the phrase due to Nagel (1974). It’s a reference to phenomenal experience.

            I’m fine with the idea that our mental content is a representation of reality (or, in some cases, a representation of unreality, as in a dream or optical illusion). That idea goes back at least to Kant.

            But I don’t see how it even begins to explain phenomenal experience.

            Like

        • I think that’s the illusion, that there is anything else to explain. If you build a system that does the same kind of representing that the brain does, without regard to how it does it, that system will claim it has phenomenal experience, and for the same reasons.

          *

          Like

          • Wyrd Smythe says:

            Well, yeah! As I said above, if it’s isomorphic to the brain, I think it is likely to work like the brain.

            But I don’t call that an “illusion” so much as just something we don’t understand, yet. (And, since we can’t yet test it, it might not be true after all.)

            For me the hard problem is just that, a hard problem. One we haven’t solved and don’t yet understand based on physics as we know it.

            Like

          • Stephen Wysong says:

            A number of folks repeat “Physics” in pondering the implementation of consciousness, but I suggest that Biology is the appropriate domain. We don’t look to Physics to understand digestion.

            Like

        • BeingQuest says:

          A nicely stated reply. Well done.

          Like

        • BeingQuest says:

          I have no doubt that AI is possible, in the most strict sense: Artificial. I see artificial intelligence in my neighbors, coworkers and lover every day, beside myself in worst, or most normal, moments. I’ve learned to be and remain amused by the insight, and not travail over the contests it seems to inspire.

          In the end, I may be glad to reply with the ancient Poets in this regard, and quote a famous Latin in this respect:

          No more bemoaned the wars of Dark and Light
          Nor proud deny the factions of my care;
          For all this folly, I cannot but own
          In every way, but for my faith to bear.

          (Quintius Flaccus?)

          Like

      • Stephen Wysong says:

        The Hard Problem is rooted in mind-body dualism and I suspect most people believe that their thoughts are something ghostly—their soul—and much different in character than their bodily feelings. Religious upbringing often solidifies that dualist impression.

        I didn’t think about consciousness as a child but I’ve always been attracted to unconsciousness … 😉

        Like

        • I agree the hard problem is rooted in dualism. I actually think discussion of consciousness, indeed most concepts of consciousness itself, are hopelessly tangled up with lingering intuitions of dualism, even among people who intellectually reject dualism.

          Like

        • BeingQuest says:

          Or perhaps, it’s unconsciousness that is attracted to consciousness, like a magnet to its polar Complement…spooky action at a distance? We see in part, and understand in part, peering through a shady glass but dimly perceiving our objects, even when, or especially when, peering upon our Dual self, if THAT what it Be, or upon the Shadow of our waking reminiscence. Selah*

          Like

  5. James Cross says:

    Can an illusionist even ponder consciousness?

    Like

    • One philosopher wondered if illusionists weren’t zombies who actually don’t have phenomenal consciousness, and therefore can’t see what the big deal is. As someone near the illusionist camp, I think we definitely do see it; we just don’t think our intuitions, without corroborating evidence, should be trusted.

      Like

  6. Martin Cooke says:

    In my mid teens at school I wrote a poetic English essay about the problem of knowing things in themselves. Basically, we see into things not by cutting new surfaces into them, but by watching them move. It was primitive stuff (how we abstract from motion was not addressed). And it was inspired by reading SciFi. Now, thinking that robots see red things as red, like we do, is an obvious anthropomorphism. Thinking that any machine would just be activated by its detectors, like a burglar alarm going off, and then thinking that a robot is just a machine, simply requires a bit of thinking. Realizing that other people may well not see red things just the way one does oneself is a bit more sophisticated. And thinking that if we are biochemical machines, then we would no more see red things than a robot would, is like a line from a movie. Perhaps such thoughts are common knowledge, like quotes from Shakespeare.

    Like

    • Thanks Martin. I think you’re right about each of those points being progressively more sophisticated.

      I generally find Shakespeare quotes incomprehensible, but any quote from classic art that indicated a concept of phenomenal consciousness, at least before philosophers started writing about it, would be extremely interesting.

      Like

  7. BeingQuest says:

    “What do you think? Did you ponder consciousness as a child? Or before you were interested in philosophy? In other words, before you read it was a problem, did you actually perceive the problem?”

    Oh boy…here’s a can of worms! You bet, I thought about the Perception of Self-Consciousness when very young…at least by 5yrs old. Whether this amounts to a Philosophic Awareness of the implications, imagined or real, I’ll leave others to judge, but goes like this:

    Stop the Train, I want off!…the world at large is Other than ME…I am ‘watching’ the ‘things’ about me, trailing me or being trailed by my interests. Many are near, more are afar…some leaving, some coming, some still. I am sight…I see…I am the Eye of Awareness, naked to the world as it flows into my Vision…I turn this way, then that way, then another, orchestrating and being orchestrated in a dance, and all things a part-ner in this dance. I cannot stop, I cannot seize the day nor make the Sun set still at noon, but I am in IT, of IT, through IT always, seeing and unseeing, awake and asleep. How am I the Eye both seeing and that which is seen? This is sooo weird…can it be real as I am real…or am I real as it is real?…This is sooo weird.

    Something like that.

    Like

    • Thanks for sharing. You started early.

      Liked by 1 person

      • BeingQuest says:

        As to the “can of worms,” I have long entertained Memory of earlier events, some of which I only recovered after years of reflection upon what has gone before, as if opening a drawer to familiar thoughts I could know as my own, and some which resided in my Memory as a Dream State from the very earliest of times…as when just rolling over on the bed was a Herculean exertion, beside the wide-eyed wonder of being conscious of the wall spider I might like to taste.

        I remember crawling also, and just learning to NOT cry over spilled milk. (Big boys don’t cry, ya know; and I had a big brother.) I don’t believe that I ever would have maintained nor recovered these very early memories were it not for the isolation of those first five years of childhood, living like a hermit when the brother wasn’t around, and escaping great dangers, sometimes not unscathed, that have ended many an innocent career by then. Anecdotal Evidence, withal.

        c’est la vie

        Like

    • Stephen Wysong says:

      Same here Durandus, but I learned to ascribe all of that to my Asperger’s. … 😉

      Is “normal” consciousness on the autism spectrum?

      Like

      • BeingQuest says:

        Sure it is, but in minute moments, barely grasped but often hinted at in the general milieu of common men and women. They just don’t dare to be Weird, like the truly animated.

        Like

  8. I don’t know if as a child I would’ve thought of consciousness as a problem, but I did have a few moments of being aware of my own ‘subjective’ experience. In retrospect, I would call this experience a sense of wonder at my own volition. It felt baffling, mysterious. I came across a fantastic passage in Ian McEwan’s novel, Atonement, that, to my surprise, described the exact same experience I had as a child—finger and all. After seeing this in print, I wondered if maybe my ponderings as a child were not unusual. Here it is… I would exclude the bit about the soul from my own experience, but the part about trying to catch myself out is so spot on:

    “She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it? There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self — was it her soul? — which took the decision to cease pretending, and gave the final command.”

    Liked by 1 person

    • That’s an interesting aspect to ponder.  It seems like the simplest thing, but it reminds me that there are brain pathologies that play havoc with our sense of bodily self.  People with certain brain lesions can become convinced that their arm is not really their arm, to the extent of seeking amputation.  Or they can become convinced that they’re not alive.  V.S. Ramachandran covers a lot of these bizarre cases in his book, ‘The Tell-Tale Brain’.

      I forgot about it until now, but one thing I did ponder as a boy was the inner monologue in my head. I thought I could remember a time when I didn’t have it (although childhood memories are the least reliable of all memories). I wondered if anyone else had the monologue going on, or was I just weird.

      I didn’t find out for sure that most people do have it until I was an adult, when I was looking into speed reading, with the need to bypass that inner speech. I remember telling the speed reading teacher that I didn’t know if I actually wanted to silence my inner voice. I see it as a friend.

      Liked by 1 person

      • Is that how speed reading is done? I think turning off the inner monologue would ruin novels, but I guess it makes sense for rapidly absorbing non-fiction, depending on what it is.

        I wonder how someone can become convinced that they’re not alive? Do they not believe they’re experiencing some sort of afterlife at least?

        As for inner monologue, I never questioned that other people had it too, but then again, I watched a lot of TV, and voice-overs were fairly common (Wonder Years, for instance…If only my inner voice were a much wiser future version of me narrating my thoughts…)

        As a child, you never did the finger test? Or some version thereof?

        Liked by 1 person

        • On speed reading, that’s part of it. By bypassing the inner monologue, you’re able to take in information faster. Another big aspect is compromising on the level of comprehension you get, which makes sense but clues me in that there’s no free lunch. Personally, I hated the reading experience (fiction or non-fiction) when trying to speed read, and decided that it just wasn’t for me.

          On someone being convinced they’re not alive, it’s called Cotard’s syndrome. https://en.wikipedia.org/wiki/Cotard_delusion
          It’s tempting to see it as possibly just psychological, but Ramachandran notes that patients with the condition are often indifferent to pain. They feel the sensation, but aren’t distressed by it, which implies damage to connections along pathways between the insula and anterior cingulate cortex.

          On the finger test, I can’t recall. I do remember once wondering what was going on with habits. Why were they so hard to break? What part of me was deciding to engage in them? (I’ve now read enough neuroscience to have insights into them, but back then they seemed mysterious and vexing.)

          I’m also reminded of a time, as a teenager, when I was listening to a preacher on TV, who said if I just quieted my mind, God would speak to me. So I tried to quiet my mind as much as possible, but every time I possibly detected God’s voice, I couldn’t rule out that it was just my own voice; indeed, God’s voice and attitude seemed suspiciously like my father’s. I sometimes wonder what might have happened if I hadn’t been concerned with ruling out my own voice.

          Liked by 1 person

    • I can relate to that Ian McEwan passage as well Tina. At the age of seven or so it seems to me that I’d look at my hand and marvel at how I could control it, and yet have absolutely no idea how I was doing it. This made no sense to me. Surely if I could do something, then I ought to understand how!

      My adult explanation goes like this: My brain is essentially a neuron-based computer which thus operates my body, somewhat like a standard computer operates a robot. The big difference however is that my own brain produces sentience as well, or a punishment/reward dynamic (which I don’t consider any of our robots to produce). I view sentience as subjective experience itself, or the essence of primary consciousness. Apparently evolution added senses like vision to the auxiliary experiencing entity, as well as memory of past experiences, to create a functional second computer which is outputted by the first. Thus it now makes sense to me why I can move my hand without understanding any associated mechanics: apparently I’m being serviced by an amazingly advanced machine in my head.

      And why might evolution have created sentient entities like us? I think because we thus provide the system as a whole with purpose driven function — here environments which are more “open” may be dealt with more effectively. (Note that our robots only function effectively in more “closed” environments, such as the game of Chess.) Essentially the vast supercomputer brain does all the work, while the tiny sentience based computer takes the credit. Or perhaps it’s mainly the adult variety which tends to take credit, that is once childhood curiosities vanish.

      Liked by 1 person

      • One “adult variety” might exist for the concert pianist (or equivalent thereof). I know from my very minimal experience from trying to teach myself a Chopin nocturne that once you’ve got something down in your muscle memory, you’d better not let yourself wonder, “How am I doing this?” Or else you won’t be able to do it.

    • Yes, good example Tina. The model here is that the conscious computer teaches the non-conscious computer to play a Chopin nocturne by means of repetition. Thus a person with less experience could easily screw it up with extraneous thoughts like “How the hell am I doing this?” Conversely an accomplished pianist should have a given piece down so well that he or she can let the vast supercomputer do the playing while consciously thinking about all sorts of mundane things at the same time. Given that many of us have 20+ years of daily experience driving, I also consider this to be an effective example. Though driving can be fatal, it becomes so reflexive for the non-conscious computer that we’re often able to do so while letting our minds wander.

      Anyway my point is that I’ve developed some brain architecture (or a conceptual drawing rather than neuroscientific engineering) from which to address our mutual childhood curiosity. Here the conscious entity may be envisioned as a tiny computer which does less than one thousandth of one percent as many calculations as the vast neuron based supercomputer by which it is produced. While the supercomputer should take in countless forms of input and process them for countless forms of output, the tiny conscious computer should simply constitute what we perceive of existence.

      And how specifically does consciousness function? In the end I think it boils down to sentience. I consider this to be the motivation which drives conscious function, somewhat like electricity drives technological computers, or neuron dynamics drive brains. Then secondly there are informational senses such as hearing and touch. Thirdly there is the input of degraded past conscious experiences, or memory. Theoretically the conscious processor interprets these three varieties of input and constructs scenarios from which to best promote its sentience based interests, and does so by means of just one form of output, or muscle function. (And as mentioned earlier, from this model it’s not really us operating muscles. The non-conscious supercomputer takes these conscious decisions as input from which to get those muscles functioning appropriately.)

      If you have any questions about my psychology based brain architecture, please do ask…

      • Well, I’m not sure what you mean by sentience as a motivation for conscious function.

        Maybe your driving example is better than the piano example. It’s hard to imagine someone playing a difficult piece while thinking about what they plan to pick up at the grocery store. Even if you’ve got it down pat, there’s still a pretty high level of concentration going on, albeit concentration of a different sort. But it’s not at all far-fetched to imagine some mundane thought process happening while driving.

        • (Sorry Mike, scratch the last again if you get time.)
          Tina,
          Let’s see if I can provide a better sketch of my model.

          Apparently virtually everyone in the business considers brains to function as neuron based computers. I agree. But while others try to integrate consciousness directly into the brain for a single form of function, I take a different approach. I consider consciousness to exist as a distinctly different virtual computer which is produced by the non-conscious brain. While the vast non-conscious computer is good at doing non-conscious things, the conscious computer is purpose driven, or seeks to feel as good as it possibly can from moment to moment. Thus while non-conscious entities seem to fail under more “open” circumstances, conscious entities may be able to succeed given their teleological form of function. This is to say that personal purpose (or sentience) permits them to go beyond just programming instructions. While a tree may not need to go beyond its genetic programming, fish probably live in open enough environments to also require a conscious form of function.

          Try considering the thoughts that you have as the processing element for this virtual second computer, with a motivational input (sentience like pain), an informational input (senses like sight), and a recall input (memory of past conscious experiences). Thus your thought will interpret such inputs in the quest to feel better in general, with your only such output as muscle function. While an experienced driver may drive while thinking about all sorts of things at the same time, one shouldn’t be able to silently read a book while having a telephone conversation. Some things can’t be passed off to a non-conscious computer.

          So back to piano, I suspect that conscious practice helps free a person from concentrating quite as much — here the supercomputer should gain more proficiency. But of course this is a computer which never experiences beauty. A good pianist should be able to add conscious interpretations to their work when moved by how a given song makes them feel. So perhaps you’re right that driving is a better example. Human speech would be another. The supercomputer should need to master some incredible muscular dexterity in order for us to speak as we do. Usually we don’t need to consciously think about how to pronounce our words, we just talk. Babies on the other hand should need to learn to speak by means of conscious practice.

          Any other questions or comments? I’m curious if this broad conception of conscious function seems consistent with your own experiences. Or not.

          • Stephen Wysong says:

            Apparently virtually everyone in the business considers brains to function as neuron based computers

            Not at all the case Eric. Searle in particular:

            http://beisecker.faculty.unlv.edu//Courses/PHIL%20330/Searle,%20Is%20the%20Brain's%20Mind%20a%20Computer%20Program.pdf

            … and many others disagree. I’m with them.

          • Thanks Stephen, that’s a great article. I’ve found Searle’s perspective encouraging where I’ve read him, though haven’t yet looked as hard as I should. If you’ll notice though, he’s actually quite clear in it that he does consider the brain to function as a neuron based computer. Apparently the reason that so many in the field oppose his position is that they consider it most effective to conceptualize consciousness in terms of information alone — “brain software”. I instead take the side that you and he do.

            Actually in recent months I’ve been having some fun with a different version of his Chinese room thought experiment. Imagine if Searle were to accept written notes (and I guess they could even be in English for this one), and then look up responses from an extensive manual in order to provide output notes that reflect what our brains do to cause (get this!) “thumb pain”. Can you imagine the stuff that you feel when your thumb gets whacked, being produced by a machine that does nothing more than accept certain symbol laden paper notes, and then processes them to spit out other such notes? Apparently this would be possible if the only thing that our brains do to cause thumb pain is process input information in the proper way. Like Searle, I instead suspect that dedicated mechanisms exist in my head which cause thumb pain. Thus brain processing would animate such mechanisms, somewhat like such processing animates heart function, or even like normal computer processing animates computer screens.

            My most recent discussion about this can be found here if you’re interested: https://selfawarepatterns.com/2019/09/08/layers-of-consciousness-september-2019-edition/#comment-34329

      • Eric, I wonder if you would consider changing your metaphor of the two computers. The thing is, most people see “computer” as applying to hardware (see Searle, and Stephen), and so when you say one computer creates another, even if the other is virtual, people expect hardware-type functionality of the second computer. I think a software analogy would serve you better, specifically, computer programming languages. Your unconscious computer would be the equivalent of machine language (0’s and 1’s), or possibly assembler, which is just mnemonic codes for machine language. You could then say consciousness requires coding in a higher level language, such as Python. Then, all of the low-level unconscious operations happen with machine level code. When consciously learning a task, like playing a particular Chopin piece, or driving, you have to use the high level language to organize the movements, but practicing the same movements over and over essentially compiles the process to machine level code, and eventually the high level programming can be skipped altogether.

        • Actually James, as far as I can tell I’m in agreement with Searle. As one of Searle’s supporters I’m hoping that Stephen will at some point gain enough interest in my brain architecture to get a pretty good grasp of it. But I also haven’t given up on you. I appreciate how you’ve converted your conception of my model into a software based version that uses higher and lower levels of coding languages. Given the interest that people seem to have in software based notions of consciousness, that might be helpful. Sometimes however things just aren’t that convenient. My model does not concern a non-conscious computer which causes software to create consciousness, but rather this computer creating a different variety of computer that functions in a fundamentally different way.

          Consider your “thought” itself, or roughly the words and whatnot that you think in daily life. I consider this sort of thing to exist as the processor for a distinctly different variety of computer (whether or not language exists for a given entity). This element will both interpret inputs (affects, senses, and memories) and construct scenarios (or options), in the quest to do what makes you feel as good as you can from moment to moment.

          If I continue calling this dynamic “a computer”, I understand how people will naturally ask where it is, or say that I’m fooling everyone because any computer that can’t be touched (or whatever), will not actually exist. Unfortunately for me they’ll have to get over this hang up. As I propose it this exists as a distinct computer which functions on the basis of, not neurons, and not electricity, but rather the most amazing stuff in the universe, or affect. Like the other two this stuff doesn’t inherently compute, though apparently evolution used it to build something that shouldn’t otherwise exist, or an agency based (or teleological) variety of computer.

          • Stephen Wysong says:

            Eric, I’m interested in the biology of consciousness and not interested in non-empirical philosophical theories about it.

          • I’m sorry that you’re not interested Stephen. You do still support the work of John Searle though, don’t you? Each of us merely uses the “computer” term for brain function as an analogy, not to imply that it’s buildable through humanly fabricated instruments. For example the heart is a machine that’s controlled by the brain. Similarly one of our computers might be used to control the function of a critical water pump.

            Many people seem to think that they can reverse engineer the brain by means of neuroscience. Well maybe… though it seems to me that we’ll need effective architectural drawings from which to guide our engineering efforts. My own ideas concern the architecture side.
