This lecture from Steven Pinker has been around for a while, but it seems to get at a question a few people have asked me recently: how does the information processing of neurons and synapses lead to conscious perception? Pinker doesn’t answer this question comprehensively (that would require a vast series of lectures), but he answers facets of it to the extent that it’s possible to see how the rest of the answer might come together.
Be warned: this lecture is very dense. If the concepts are entirely new to you, you might have to re-watch portions to fully grasp some of the points. And the visual illusions he shows, unfortunately, don’t seem to come through, but the point they make does.
Of course, people who insist that there has to be something more than just the physical processing won’t be convinced. But if you’re interested in what mainstream neuroscience knows about this stuff, well worth a watch.
14 thoughts on “Steven Pinker: From neurons to consciousness”
In his lecture “From Neurons to Consciousness” Steven Pinker describes how neuroscience is trying to bridge the gap between phenomena that we experience, and how the physical brain functions itself. In these efforts I see bridge work from either side of the water. From the physical function side he details how neurons display “and”, “or”, and “not” logic statements, and so the human is displayed as a computer. Then from the conscious side that we know of existence, he explains various standard illusions by means of the also computational “lateral inhibition”, “opponent processes”, and “habituation”. But even given these explanations, there is clearly still plenty of missing bridge left to complete.
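For readers new to the logic-gate framing, a classic McCulloch-Pitts threshold neuron really can implement "and", "or", and "not", and a one-dimensional lateral-inhibition pass shows the edge enhancement behind several of the illusions Pinker demonstrates. Here's a minimal Python sketch; the weights, thresholds, and inhibition constant are chosen purely for illustration:

```python
# A McCulloch-Pitts threshold neuron: fires (1) when the weighted
# sum of its inputs meets or exceeds its threshold, else stays quiet (0).
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Single neurons acting as logic gates (illustrative weights/thresholds):
AND = lambda a, b: neuron([a, b], [1, 1], 2)   # both inputs must fire
OR  = lambda a, b: neuron([a, b], [1, 1], 1)   # either input suffices
NOT = lambda a:    neuron([a],    [-1],  0)    # an inhibitory input suppresses firing

# Lateral inhibition on a 1-D "retina": each cell's response is its own
# input minus a fraction of its neighbors' average, which exaggerates
# contrast at edges (the basis of Mach-band style brightness illusions).
def lateral_inhibition(signal, k=0.5):
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i + 1 < len(signal) else x
        out.append(x - k * (left + right) / 2)
    return out

# A step from dim (1) to bright (5) gains a dip and a peak at the edge:
responses = lateral_inhibition([1, 1, 1, 5, 5, 5])
```

The dip just before the edge and the peak just after it (`responses[2] < responses[1]`, `responses[3] > responses[4]`) are exactly the exaggerated borders we perceive in brightness illusions.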
I believe that neuroscience and associated fields in general will require better architecture from which to truly progress. By this I mean, for example, that the Wikipedia consciousness page will need to provide generally accepted definitions and understandings rather than its current mishmash of speculation. The following is a broad overview of my own proposed architecture, and I’d love to go deeper if there are any questions.
In the diagram above I partition “mind” into two distinct classifications. The main one, which I suspect is more than 99% of the total, is not conscious. It functions essentially as our computers do, and so algorithmically processes inputs that provide associated outputs. In his lecture Pinker gave us a wonderful anatomical demonstration of this by describing neurons and their networks. Apparently this normal type of computer wasn’t sufficient however, since an auxiliary “conscious” computer was built as well. (If anyone is interested, I do have a theory regarding why non-conscious function alone was not sufficient.)
Then moving over to the relatively small conscious computer (which functions through the non-conscious one), this seems to emerge at a higher level than those neuron-incited logic statements. I consider it crucial to note that this sort of computer instead functions by means of a punishment/reward dynamic. While existence seems to have no personal implications to anything else, dead or alive, existence can be good/bad for a functioning conscious computer. This element is classified in my diagram as “affect/utility/happiness.” Once this aspect of existence becomes accepted to constitute the welfare of any defined subject, I believe that it will formally become implemented. There is surely nothing that we are in greater need of than an effective ideology from which to lead our lives and structure our societies.
I left the non-conscious side of the diagram essentially open, since I presume that this computer takes in countless forms of input, uses many processing instruments to algorithmically go through them, and then provides countless associated outputs. One thing that the non-conscious mind seems to control, for example, is the beating of a heart. Temperature, activity, chemical substances, feeling nervous, and so on, are some of many non-conscious inputs that alter heartbeat once processed.
Fortunately I’m able to get far more specific regarding the conscious computer. I’ve already mentioned the “motivation input”, which is theorized to constitute all that’s personally good/bad to anything that exists. Then the “information input” addresses things like vision, hearing, and so on, but without any value element. Thus a bad taste will provide two kinds of input — both punishment, as well as information associated with a chemical signature of what’s tasted. Then the last type of input to the conscious processor exists as a degraded recording of past conscious processing. I won’t go into it unless questioned, though memory seems quite crucial for effective conscious function.
I theorize the conscious processor to function in a specific way. It (1) interprets inputs, and (2) constructs scenarios, in the quest to (3) promote present personal value. A person might identify a sound, for example, construct a plausible scenario about why it was heard, and then go on to various responses given the quest to promote present happiness. The only non-thought form of conscious output that I’ve been able to identify is “muscle operation”.
Why would I assert that consciousness concerns present value exclusively, when we know that people also do things for their future welfare? I make this assertion because concern about the future seems to occur through the present reward of “hope”, and the present punishment of “worry”. Here existence has a temporal component going forward, while the provided “memory” input adds the past as well.
I’ll also briefly mention the diagram’s “Learned Line”. You may recall Pinker assert that neurons can develop very specific conditions from which to fire. Apparently a neuron in someone was even found that would only do so by means of an image of Jennifer Aniston (which surely was conditioned rather than inherent). Furthermore I’m quite sure that I have an extensive set of neurons from which to identify police cars, since while driving I seem to instantly recognize them and reasonable copies. I consider the conscious processor to be extremely limited, though a great deal of what’s taken credit for consciously, like identifying people and police cars, seems to be farmed out to the non-conscious processor by means of this line. Why does it take so long to become a good driver? Because it takes years to get the neurons effectively conditioned for this so called conscious task.
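Purely as a toy illustration of such conditioning (nothing here comes from Pinker’s lecture, and the “police car” feature pattern is invented), a simple perceptron can be trained until it fires only for one specific pattern, much like a highly selective feature detector:

```python
# Toy perceptron "conditioning": repeated exposure adjusts weights until
# the unit fires only for one specific pattern.
def train(patterns, labels, epochs=20, lr=0.5):
    n = len(patterns[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # fire when it shouldn't? fired too little?
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# "Police car" is a made-up 4-feature pattern; everything else is negative.
police = [1, 1, 0, 1]
others = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 0, 0, 0]]
w, b = train([police] + others, [1, 0, 0, 0])

def fires(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, `fires(police)` returns 1 while the other patterns leave the unit silent; the point is only that selectivity like the “Jennifer Aniston neuron” can be conditioned rather than built in.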
Notice that by consciously opening and closing your hand, you’re not really doing the many amazing things that make this occur, but rather telling the non-conscious computer to take care of it for you. So while there is surely a vast supercomputer in our heads, it isn’t conscious, regardless of how smart we like to consider ourselves. I consider myself pretty clever for figuring out the mind architecture as summarized above, though that’s mostly taking credit for the theorized feat of a machine that isn’t even conscious.
Thanks Eric. We’ve discussed many of your points before, so I’ll try to focus on areas we haven’t, or at least that I can’t recall us discussing.
I wouldn’t judge the state of neuroscience by the Wikipedia consciousness page. I can’t say I’m intimately familiar with that article, but I know it tries to take a broad inclusive approach, which means a lot of the less scientific views probably get more coverage than either of us would care for. Still, if it’s the first thing someone has ever read on the subject, it’s hard to argue they shouldn’t get at least passing references to those viewpoints. I think many of those viewpoints are nonsense, but a new reader has the right to reach their own conclusions on them.
We’ve discussed the differences in how we see the division between consciousness and non-consciousness. I see the division between them as much blurrier than you do. I’m mentioning it here because my view might have evolved slightly since we last discussed it.
My current thinking is that, for the demarcation between consciousness and non-consciousness, we can divide brain information processing into three broad categories: processing that takes place autonomously (heart rate, breathing, hormones, etc), including when we’re asleep; processing that only takes place when we’re awake and interacting with the world; and processing accessible to introspection, that we are or can be “conscious” of. Crucially, if we can’t introspect something, are we conscious of it?
Introspection seems to be most closely focused on what simulations we’re currently running.
I do have one quibble about your comments on memory, particularly to the extent you were referring to episodic memory. Episodic memory is not a recording, but is a reconstruction of a past event based on semantic memory points. In other words, it’s a simulation just like the simulations we run for potential future scenarios. That’s why it’s so unreliable, particularly for long-ago events.
On your diagram, I think we’ve discussed this before, and you seem to allude to it in your last paragraph, but just in case, it seems like muscle operation can be an output of non-conscious processing, and I’m not just talking about the heart here. Think about all the times we’re physically doing something on more or less “automatic” while we’re consciously pondering something completely different.
As always, appreciate the discussion!
First the easy stuff. I completely agree on memory, though it can be difficult to not use the “recording” term here in some capacity from time to time. In a technical sense I actually define memory no more specifically than “Past consciousness, that remains.” Of course we alter our memories each time we go back to them, so they change on us. I believe that this was mentioned in those crash course psychology videos as well. Anyway from my definitions memory is one of three potential inputs to the conscious processor, and it may be that I’m the first person to ever assert such a thing.
Also I didn’t mean to imply that the non-conscious mind doesn’t operate muscles. Muscle operation wasn’t explicitly listed in that non-conscious output box, since I presume that there are countless forms of output that this processor implements. Indeed, I believe that the conscious mind must “ask” the non-conscious mind to operate the muscles that it wants operated. The human may not display quite as much fully automated muscle function as some animals do, but this potential should still exist. When you sneeze, just try to keep your eyes open.
I realize that I probably shouldn’t reference Wikipedia quite as much as I do, though I like to use it as an extremely accessible source of modern thought. They get right to the point. Anyway I merely wanted to demonstrate something that isn’t actually disputed, which is to say that the topic of consciousness remains wide open today. The Stanford Encyclopedia of Philosophy provides a more long-winded demonstration of this, though the outline was plenty for me.
My issue is that no functional model of mind has become accepted in these fields today. Above I’ve briefly summarized my own suggestion. I most certainly would have each of these encyclopedias discuss ideas that you and I consider silly and/or wrong, given that that’s how things happen to be today. Apparently Giulio Tononi is a panpsychist. David Chalmers is a dualist. Daniel Dennett believes that pain does not exist… Though you have shown me that there is good work being done by some in these fields, this doesn’t overcome the absence of functional architecture itself. That so many theorists today consider it important for us idiot humans to build consciousness, even before we have effective definitions for consciousness, seems bizarre to me. Let’s at least define what we’re talking about!
Furthermore today there are various notable people who make quite a good living as “consciousness gurus”, such as Mr “Consciousness Explained” himself, Daniel Dennett. In today’s muddy-water environment I find it quite difficult to illustrate the nature of my own ideas. You, however, seem to be getting it! Hopefully once you’re able to digest what I bring to the table, and I’m able to digest what you bring to the table, then we’ll partner up for a shot at sorting this mess out for ourselves. You’d be our brains while I’d be our brawn. 🙂 History is certainly not going to wait forever for this jewel of a discovery!
Earlier I referenced consciousness as an emergent product of the non-conscious, so it may be that I consider the conscious and non-conscious demarcation to be just as blurry as you do? Regardless you’ve now proposed dividing mental processing up into three sorts of distinction rather than my current two. It could be that once you have a better grasp of my models, parsimony will take you back down to my dual processors? Or perhaps I’ve left some things unaccounted for that make three useful? (There is no “true” answer here.) I somewhat hope that your distinction survives our scrutiny, since that should tend to give you more of an ownership position (beyond simply mentoring me).
Regarding my own associated ideas here, I’m not sure that we’ve yet discussed a distinction that I’ve invented called “sub-conscious.” It’s spoken with a slight pause in order to be orally distinguished from the standard “subconscious” idea. (I consider the subconscious term useful to represent things which aren’t fully acknowledged. You may have good/bad feelings about someone that you aren’t aware of, and so behave this way to that person without realizing it, for example.) I instead define “sub-conscious” as a degraded state of consciousness, like being asleep, hypnotized, drunk, and so on, and to the extent that the consciousness dynamic has been degraded. I don’t know why I haven’t come across an existing term for this idea.
I define consciousness to exist to the extent that the thought processor is functioning, which is to say, to the extent that something is “thinking”. But I do have a bit of a problem here. If it’s ever possible to not interpret inputs (affect, senses, and memory) or construct scenarios (and regardless of sleep degradation), then from this definition a person will not be conscious for that period. But I’d still like to differentiate a person who is receptive to conscious input from a person who has been perfectly sedated and thus has no potential to interpret any conscious inputs. One of these subjects is of course appropriate for surgery, while the other is not.
Another issue that I have here is pain itself. Clearly the non-conscious mind must produce these signals as an input for the conscious mind to experience, though there won’t be actual “pain” until consciously experienced. In the past I’ve said that it’s the interpretation of these signals that creates pain, similar to interpreting the image of a person, but that might not be the best approach. Perhaps it would be better to say that the non-conscious mind creates pain directly through these signals, even without “conscious interpretation,” thus mandating consciousness even without my former definition of “thought”. Then the thought processor has the potential to interpret this existing conscious punishment, which it naturally should attempt given that it f**king hurts!
There is also perfect numbness. If a person could not feel bad or good, but did still have the input of a sense, such as an image, what would happen? I suspect that because there would be nothing to drive the thought processor, the visual image would not actually be interpreted and thus there would be no consciousness here. But would there be any interpretation regardless of my speculation? Perhaps.
You’ve mentioned that no processing under sleep is conscious, though that may be ripe for revision. Dreams, for example, seem to be a product of the degraded conscious thought processor. Regardless it’s not yet clear to me what distinction you’re making between waking consciousness and introspection consciousness.
Thanks for the clarifications. Sounds like we’re on the same page as far as most of that goes.
I think Dennett and other illusionists would argue that “pain does not exist” is something of a caricature of their actual position, which would be better described as “pain and other experiences are not the fundamental thing they appear to be”. I don’t agree with them on everything, but I find their thinking far clearer than the majority of philosophers of mind, certainly more than Chalmers and the panpsychists.
Good point about dreams. That did occur to me as I typed that segment, but I omitted it for simplicity, and should have realized you’d call me on it. Yes, dreams do blur the line between the awake vs asleep processing.
And yet, there seems something different about the processing of the heart rate, non-conscious breathing, hormonal balances, and other autonomous processing from, say, the act of walking to the car while pondering something unrelated. Walking requires activation of the thalamocortical system that maintaining testosterone, estrogen, or adrenaline balances doesn’t.
But what seems to separate what we normally call conscious experience and action from unconscious walking, is the introspection mechanisms. It seems like things that are available for introspection, including dreams, are what we normally include in consciousness, and things unavailable to it aren’t. As we’ve discussed before, I think the things available to it are the simulations (whether they be of past or future action scenarios, or of immediate ones currently being undertaken). Indeed, I’d venture to say that introspection is a feedback mechanism to fine tune those simulations.
Of course, any divisions that we draw, whether they be into two, three, or more layers, are arbitrary to some degree, an attempt on our part to make sense of a system that is the most complicated thing in nature that we know about so far.
On building consciousness, I agree that it’s a pointless goal, and most actual AI researchers aren’t focusing on it (with a few notable exceptions). Most are focused on individual capabilities, such as being able to recognize particular types of patterns or to have the movement intelligence of the simplest land vertebrates.
You’re giving me exactly what I need — opportunities to test the usefulness of my ideas. If we can improve them then great, but I also appreciate that your speculation permits me to demonstrate how my models practically function.
In awake versus asleep processing, my “sub-conscious” concept has plenty to say. Alcohol slides a person into a sub-conscious state, in the sense that consciousness progressively becomes degraded the more that is consumed. Sigmund Freud surely found this true of his cocaine as well. Caffeine can fight off the sub-conscious state of drowsiness and sleep, but can impart a jittery sub-conscious state as well. Also the body produces all sorts of substances that can have various effects. Personally I’m a bit of a hot pepper junkie, given the associated adrenaline rush. Furthermore the medical field of psychiatry has quite recently transformed itself to provide almost entirely chemical-based treatment. Even when these substances have beneficial effects, they might also put a person into a somewhat sub-conscious state, or at least when taken in higher doses.
I mention all this because it seems useful to consider sleep as a degraded form of consciousness. Of course sleep seems necessary to conscious function at some point, and specifically for recuperation. (If insects “sleep”, then to me this at least points in the direction of conscious function.) Sleep seems to degrade rather than halt consciousness however. Furthermore the conscious processor seems to remain receptive to conscious input, unlike the effects of anesthesia or death.
Input receptivity still troubles me regarding my definitions. I’m good with saying that something that is presently thinking is conscious, but then when it isn’t thinking but is still able to accept conscious inputs, saying that it’s fully not conscious seems to go too far. If we did call subjects that aren’t interpreting inputs or constructing scenarios “not conscious”, then I think we’d at least have to give them an “input receptive versus unreceptive” notation. Perhaps “non-conscious R” versus “non-conscious U”.
Beyond my sub-conscious distinction, I think you’ll find my “learned line” useful. Yes, there is something very different between the non-conscious mind making the heart beat and it helping us walk, or drive, or properly move a vast array of muscles to speak. Heart function isn’t learned, but most of what we do is. The premise here is that the conscious mind is tiny, though it subcontracts out most of what it does to a vast non-conscious mind. I don’t know anything about the thalamocortical system, but I do know that the human begins conditioning its non-conscious mind for conscious function as soon as possible. Many other animals come far more programmed.
Regarding “illusionists”, educated people are supposed to already realize that human perceptions are just that: products of mind. We know that “colors” don’t exist as what we perceive, and on and on. But if I am interpreting the input of “pain”, the punishment that I experience cannot possibly be an aspect of reality that does not exist. I think, therefore I am. Everything beyond my thought itself may be a lie.
The thing about Daniel Dennett, is that he’s too damn clever to be trusted. He can build up an idea with layer upon layer of fabrication, in a way that makes people believe he must really know what he’s talking about. That’s not to mention his seductive or hypnotic “Garrison Keillor” voice.
One year into my blogging, and just over two years ago, a friend convinced me to read Dennett’s “Consciousness Explained”. It was a good educational exercise for me, since there were so many terms that I needed to get familiar with. My short assessment is that my own “non-conscious mind” idea, fully gets to what he alluded to with his “multiple drafts” idea, though my account is simple, and his seems more like a tool from which to entrance an audience. Of course that book was written thirty years ago, and “multiple drafts” remains just another piece of garbage floating in the consciousness industry’s wake.
Good point about degraded consciousness with sleep, alcohol, etc. There is a qualitative difference between wakefulness and being asleep though. The flow of the senses to the neocortex and motor output are shut down during sleep (normally). But I do think dreams are simulations with the access to the perception models compromised, probably because brain waves throughout the cerebrum aren’t as synchronized as they tend to be when awake.
I’m not sure I’d agree that anything thinking is conscious, but it depends on what we mean by “thinking”. If you mean conscious thought, then I agree, but it makes the statement somewhat tautological. But I then have to wonder what we call subconscious information processing.
Sorry for springing “thalamocortical” on you. It just means the thalamus, cingulate cortex, and neocortex, in other words, the cerebrum.
On the illusionists, I agree that people should understand that perceptions are mind dependent, but I think the audience the illusionists are speaking to don’t understand that yet. And I’ve encountered enough very articulate people here on the blog who don’t accept it yet, so it’s by no means universally agreed upon. In other words, I think the illusionists still have a valid point although I remain uncertain about the communication strategy of using the word “illusion.”
I actually don’t recommend ‘Consciousness Explained’ anymore. As you noted, it’s pretty old now and I’m not sure it even accurately reflects Dennett’s views at this point. I never did finish his latest book (the information is just too basic for me at this point), but I can’t recall him mentioning the multiple drafts theory in the last several years. Dennett’s main ongoing contribution, I think, is calling BS on the “hard problem”.
Theories of consciousness are indeed legion. I suspect none are going to be completely right, but I expect some to be a lot closer than others. I think people like Damasio, Goldberg, Graziano, and Dennett are a lot closer to the truth than Tononi, Koch, Chalmers, or Penrose, mainly because the former group isn’t looking for or depending on magic. Only time and lots of additional scientific work will tell.
So then how do I define “thought”? The term is listed in my diagram as “the conscious processor”, so yes as I define it, there is no thinking which isn’t also conscious. I can see how you might want to say that some of your thoughts aren’t conscious, but this is really just a matter of definition — it has no true answer. I’ve found it useful to define thought to be conscious exclusively however.
I theorize a unique conscious processor (thought) that performs two essential tasks. One of them is to interpret conscious inputs, and they come in the forms of affect (the motivation input), senses (the information input), and memories (the recall input). I consider the non-conscious mind to produce such inputs for the conscious mind to potentially interpret. If an input is produced but not consciously interpreted, then there will be consciousness, but no thought.
I consider the second task of the thought processor to construct scenarios, or simulations, regarding conscious existence. This will be the attempt to make sense of things in order to decide what to do. And why are inputs interpreted and scenarios constructed? Apparently the punishment and reward that’s associated with conscious existence, serves as motivation from which to do so. Otherwise I don’t believe that this sort of computer functions.
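To caricature this in computational terms (every name below is invented purely for illustration, and is not a claim about real neural machinery), the two tasks plus the value motivation might look like:

```python
# A deliberately crude caricature of the proposed conscious processor:
# (1) interpret the three inputs, (2) construct scenarios, and
# (3) choose whichever scenario promises the most present value.
def conscious_step(affect, senses, memory, construct_scenarios):
    interpretation = {"affect": affect, "senses": senses, "memory": memory}  # task 1
    scenarios = construct_scenarios(interpretation)                          # task 2
    return max(scenarios, key=lambda s: s["expected_value"])                 # the motivation

# Toy usage: a sound is heard, and two candidate responses are weighed
# by the punishment/reward they promise.
def explain_sound(interpretation):
    return [
        {"action": "ignore it",      "expected_value": 0.2},
        {"action": "investigate it", "expected_value": 0.7},
    ]

choice = conscious_step(affect=-0.1, senses="a sound", memory=[],
                        construct_scenarios=explain_sound)
```

Here `choice["action"]` comes out as “investigate it” simply because that scenario carries the higher value; the point is only that without the value term, nothing drives the selection at all.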
Subconscious information processing is a bit squishy as I see it. One interpretation is for this to exist as a compromised conscious acknowledgement. Someone might make you angry for example, though you may not explicitly acknowledge that you are angry. Nevertheless your feelings should affect your behavior, given that they are felt, and so may be termed to have a “subconscious” effect. I believe this definition is actually somewhat standard. Furthermore I don’t mind this term also being used to reference a blend of conscious and non-conscious processing. (Either way it’s different from my “sub-conscious” term, which references a degraded state of consciousness, as in sleep or chemical alteration.)
I don’t actually mind you throwing neurology terms in your responses to me, especially since you don’t seem to quiz me about the stuff, but rather present how you see things from your more neurological perspective. This is one of the many ways in which I consider your perspective valuable to me in particular.
I suppose I went a bit too hard on Dennett above. It was good for me to read his book as I mentioned, and in truth I didn’t generally consider what he said to be outright wrong. Instead it more made me cringe at how laborious the whole thing was, often I think because he didn’t realize how much more effective it is to acknowledge a vast non-conscious mind that complements a tiny conscious one. The more I learn about the ideas of the elite, the more convinced I become that my own ideas are superior. Thus it’s been frustrating that it has been so difficult to show others the nature of my ideas. You’re the first person who seems to truly be getting it. As you become progressively able to tell me what I think on a host of associated issues, you’ll have more ability to effectively edit these ideas where they seem to fail. This would be like a doubling of perspective.
By the way, overnight I’ve made a correction to something that had been bugging me for maybe a decade. I used to say that conscious interpretation was required for consciousness to exist, and so equate consciousness with thought. Thus the non-conscious mind might signal for a pain, for example, but the pain wouldn’t exist unless consciously interpreted by means of thought. But then how do I consciously take raw signals, and convert them into things like pain? I don’t! The non-conscious mind must be responsible for doing this, and so directly creates pain and all conscious inputs. Then my conscious mind will have the chance to make sense of these inputs. This doesn’t change much practically, but it has bugged me for quite a while.
My only issue with Dennett these days is that his target audience remains people who need to be convinced away from various forms of substance dualism. He’s never far from his New Atheist thought leader role. I’m long past that point, which makes his writing a bit redundant for me these days.
“You’re the first person who seems to truly be getting it.”
Thanks, but my views really are more or less a reflection of mainstream neuroscience, so I’m far from unique. I think we’ve discussed this before, but I don’t find your ideas to be the radical break with that mainstream you take them to be, at least in terms of consciousness. (Of course, this could still be due to me not understanding them sufficiently to see the distinctions.)
On the generation vs feeling of pain, that’s similar to a switch I made sometime last year, realizing that there are parts of the brain involved in generating experience, and parts involved in receiving those experiences, modules which contribute to the production of the Cartesian theater vs modules that are the audience for that production. Conscious pain, pleasure, etc, can’t happen without both of those portions.
I agree that perhaps “radical” is the wrong term to characterize my ideas regarding mind. (Though I see that you did leave open the thought that I might have radical ideas in other regards.) But if it’s true that my ideas here are not problematic for mainstream thinkers, then let’s take your thought further. Above I’ve provided a bare bones outline of “mind” that you’ve implied to be sensible (to the extent that you understand it, and there is quite a bit more that I can’t wait to acquaint you with!). So who beyond me has ever made these sorts of reductions, and hopefully even provided a tight associated diagram? If any of the big names were to come up with something like this (rather than the standard fare that you know so well), it would of course be scrutinized heavily. Furthermore if professionals turned out to be fine with such a model, just as you seem fine with your initial perception of what I’ve developed, then what would happen? What would happen if Wikipedia and other such references were to provide generally accepted positions regarding the concepts that I’ve addressed above, and so psychologists, psychiatrists, sociologists, and so on, had generally accepted positions here from which to do their work? Would this not be a revolution for these fields? Would this not be something on the order of how chemistry changed once an effective model of the atom became accepted?
Gosh I almost forgot that just like L.A. author Sam Harris, Dennett is one of “the four horsemen of New Atheism”! This grand marketing title conforms nicely with the picture that I’ve already painted of him — more showman than substance. I became a very strong atheist by about the age of 12, though I think the novelty of the whole thing wore off in college. Fortunately you seem to have gotten over that sort of thing as well.
On similarities with other thinkers, I mentioned in other threads Damasio’s biological value concept, Jaak Panksepp’s work, and you yourself saw resonance with F&M’s affect consciousness concept. I think much of evolutionary psychology is about studying the relationship between instincts and behavior, a relationship that is inherently about the things you discuss.
Studies of the divisions between consciousness and non-consciousness go back to the 19th century. Freud is a well known figure, but others had studied it before him. Modern scientists I’ve read on it include Michael Graziano, Michael Gazzaniga, and probably some others I can’t think of right now.
Does anyone have your exact conception? No, but then anyone who has someone else’s exact theory isn’t really thinking on their own but simply following someone else, at least until scientific evidence conclusively establishes one of those theories as more accurate.
I think the difference is in what you see as the normative consequences of these ideas. I know you say you’re not doing normative reasoning, but I submit that when you talk about it revolutionizing the things you list, it is inherently normative. The vast majority of people who explore these ideas are far more cautious.
On the New Atheists, I went through a period a few years ago where I felt their fervor, but it didn’t last. I do think they’ve done one important service, one that Dennett himself predicted years ago in The Atheism Tapes: that having a cadre of hard core atheists as the front part of a cultural shock wave would open up space behind them for more moderate nonbelievers to comfortably exist. People don’t like the New Atheists, but society overall seems much more accepting of moderate nonbelievers than it was in 2003.
Once again, I agree that perhaps “radical” is the wrong term to characterize my ideas regarding mind. As we’ve each observed over these months, there are some notable people out there who have models that conform with mine. My point however was that I don’t know of anyone who has made the sorts of reductions that I have, and certainly not organized them into such a tight package. Furthermore my point was that once the scientific community agrees upon reasonably effective models here, a revolution should occur in associated fields that will be about like the one that occurred in chemistry as the nature of the atom gained a consensus understanding.
I do see the potential that my own such ideas will reach consensus some day, but let’s forget about that for a moment. Since each of us believes that we aren’t talking about anything magic here, effective definitions for this stuff should be developed soon enough (given that science is still quite young). But surely to do so a person will need to acknowledge this potential. As you’ve just implied, few theorists today seem to have much vision. Though I believe that Sigmund Freud dug these fields into a hole that they still haven’t quite escaped, at least this was a man with vision!
In truth it was only once I became satisfied with my theory regarding the nature of good/bad existence that I decided to explore mind at all. I wanted to see if I could use my theory to develop an effective understanding of consciousness, given the naturalistic connection I see between all things real. This went far better than I’d imagined, but with one great problem. Over my three-plus years talking with people educated in these fields, I’ve been met with incredible displays of defensiveness. This is why I’ve been so fortunate to come across a person such as yourself!
By the way, any thoughts about Ned Block? His “phenomenal consciousness” might be interpreted as my “interpretation of inputs,” and his “access consciousness” might be interpreted as my “construction of scenarios.” Of course without motivation he’s left only with the model for a normal rather than conscious computer, but I’m pleased that he’s at least gotten that far. I’d be interested to know if you have any thoughts about him.
I haven’t read Block directly, just various summaries of his positions (which aren’t entirely consistent), so my reaction to it might not be well informed. As I understand it, p-con (phenomenal consciousness) is raw ineffable experience, where a-con (access consciousness) is the informational components. I think he’s done a service by explicitly stating what so many people seem to assume.
That said, while I agree with the distinction that p-con is ineffable and a-con can be discussed, I think the idea that p-con is something other than information processing is wrong. Within our introspective view, p-con is the most basic information being processed, where a-con is more composite information.
But p-con, “raw” experience, is itself composite information. It is constructed from lower level signalling by circuits whose processing is outside the purview of our introspection mechanisms. In other words, we have conscious access to the primitive components of a-con, but not to the more primitive components of p-con, but that doesn’t mean those primitives aren’t there.
And we don’t appear to have introspective access to the most primitive sensory information coming in. We only get that introspective access to layers where it’s adaptive for us to have it.
I don’t think I would equate p-con with interpretation of input and a-con with scenario simulations. I think both p-con and a-con are involved in both perception modeling and imaginative scenario simulations. When I imagine driving a red car, it seems like the redness of the car remains p-con level information.
I agree that Block’s distinction doesn’t seem to cover the emotional aspects of consciousness. Philosophers often ask, “Why does it feel like something to have experience X?” The reason is because our emotional circuits react to things, and the sensory perception and emotional “gut” reaction arrive within our consciousness in a unified fashion. But that unification is itself a construction.
I’ve decided that you’re right that it’s not useful to classify phenomenal consciousness as the interpretation of inputs, and access consciousness as the construction of scenarios. Ned Block gave an interview a couple of years ago at the now defunct Scientia Salon blog site. (Part one of two is found here: https://scientiasalon.wordpress.com/2015/05/18/ned-block-on-phenomenal-consciousness-part-i/) But given your above observation, I see that I muffed that one. Note that I’m only four days into repairing whatever damage was done through a former convention of mine, namely that conscious processing is required in order for there to be any conscious experience. But if p-con and a-con happen to be useful terms, and if it’s true that I’ve nevertheless developed an effective model of mental dynamics, then I should be able to reduce these terms into my own model, and so move things forward in general. Furthermore I’ll attempt to use my model to address the question that you’ve just presented: “Why does it feel like something to have experience X?” So let’s get to it!
It seems to me now that my motivation input to the conscious mind (“affect”) ought to address p-con, and my information input to it (“senses”) ought to address a-con. (This doesn’t get into a third input component which concerns past conscious processing, or “memory”.) If you recall, I divide inputs, like what is seen, into both an informational component that should be effable, and a punishment/reward component that should not be. A color does not just provide information to potentially use (as in the “stop” that a normal computer might implement), but can also have a punishment/reward dynamic that can’t otherwise be conveyed (as in the “beautiful” that a normal computer can’t access). A punishment/reward might be experienced directly, as in “pain”, or it might be experienced once consciously interpreted, such as realizing that you’ve lost a great deal of money. The defining element, however, is that there is a good-to-bad feeling that contributes to the personal value of existence. Furthermore notice that toe pain doesn’t simply provide p-con, but also a-con, in the sense that location information comes as well.
So Mary the scientist, who knows everything there is to know about the physical mind state that we call “color” even though she’s never experienced any, should gain this sort of punishment/reward once she’s permitted to actually see colors. Does this mean that she’s now learned something about reality which is not physical? No, it should simply mean that this particular human gains conscious experiences that she hadn’t previously received. Must she also experience all possible color scenarios in order to complete the theorized perfect education? Obtaining such an education is just as ridiculous as the thought experiment’s premise of “knowing all there is to know.” We seem to get quite caught up in various anthropocentric epistemological notions along the lines of: if a human can’t understand something, then that sort of thing must not be physical. But just like p-zombie thought experiments, this one seems to instead reflect failure in the modern field of epistemology. (I propose theory from which to improve that discipline as well!)
(You and I may suffer from confirmation bias regarding our physicalism (and I’m even this way in terms of quantum mechanics), but this shouldn’t mean that we’re wrong. Nevertheless we physicalists will need to get our positions together regarding consciousness, given that this topic harbors extremely fertile ground from which to nurture supernatural notions.)
I would say that it feels like something to have experience X because the type of computer that we’ve evolved to be is required to feel things in order for it to effectively function. If existence didn’t feel like anything, as I presume for the computer that I’m now typing on, then existence would instead be personally inconsequential, and so we wouldn’t consciously function. Yes, I do realize that this account can feel tautological, though I also believe that it would be useful for scientists to implement this particular definition regarding conscious existence.
What does it mean for existence to not be personally inconsequential, and thus for there to be consciousness? As I see it this means that existence can be good and can be bad for the subject. Thus I believe that science will need to formally come to terms with this aspect of our nature so that we can use these understandings to effectively explore ourselves. This is where my ideas actually began, which is to say, the radical (rather than just revolutionary) notion that science must formally identify what constitutes good/bad existence. It’s a position that I believe we’ll ultimately use to lead our lives, as well as structure our societies.
On confirmation bias, it’s a danger we always have to be cognizant of. My solution, once I’ve thought through a subject sufficiently, is to post my reasoning and see if anyone can find holes in it. It’s the only way I know to test and see if I have a blind spot.
On physicalism in particular, as I indicated in a post a while back, a large part of the problem is defining it. Often what we’re really talking about is whether the mind obeys the regularities we observe in nature, whether it is in fact a part of that nature.
The evidence for it being a natural system is so vast, and the counter-evidence so illusory, that I have to admit I’m starting to lose interest in that debate. My post asking what about subjective experience implies anything non-physical was largely a finalish invitation for anyone to point out what I might be missing with that assumption.
Of course, many people will continue believing in a non-physical mind. That’s going to be true no matter what we do. The emotional reasons guarantee it to some extent. For me, this is starting to feel somewhat like the God question, a subject I mostly lost interest in debating several years ago.