Steven Pinker: From neurons to consciousness

This lecture from Steven Pinker has been around for a while, but it seems to get at a question a few people have asked me recently: how does the information processing of neurons and synapses lead to conscious perception?  Pinker doesn’t answer this question comprehensively (that would require a vast series of lectures), but he answers facets of it to the extent that it’s possible to see how the rest of the answer might come together.

Be warned: this lecture is very dense.  If the concepts are entirely new to you, you might have to re-watch portions to fully grasp some of the points.  And the visual illusions he shows, unfortunately, don’t seem to come through, but the point they make does.

Of course, people who insist that there has to be something more than just the physical processing won’t be convinced.  But if you’re interested in what mainstream neuroscience knows about this stuff, it’s well worth a watch.


2 Responses to Steven Pinker: From neurons to consciousness

  1. In his lecture “From Neurons to Consciousness,” Steven Pinker describes how neuroscience is trying to bridge the gap between the phenomena we experience and the physical functioning of the brain. In these efforts I see bridge work from either side of the water. From the physical-function side he details how neurons implement “and”, “or”, and “not” logic operations, and so the human brain is presented as a kind of computer. Then from the conscious side of existence that we know, he explains various standard illusions by means of the also-computational “lateral inhibition”, “opponent processes”, and “habituation”. But even given these explanations, there is clearly still plenty of missing bridge left to complete.
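    Pinker’s point about neurons computing “and”, “or”, and “not” can be made concrete with a toy threshold unit in the spirit of a McCulloch–Pitts neuron. This is only an illustrative sketch, not anything from the lecture itself; the weights and thresholds are invented for the example.

    ```python
    # Toy illustration: a threshold "neuron" fires (outputs 1) when the weighted
    # sum of its inputs reaches a threshold. With suitable weights and thresholds,
    # such units behave like AND, OR, and NOT gates.

    def neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of inputs meets the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    def AND(a, b):
        return neuron([a, b], weights=[1, 1], threshold=2)  # both inputs required

    def OR(a, b):
        return neuron([a, b], weights=[1, 1], threshold=1)  # either input suffices

    def NOT(a):
        return neuron([a], weights=[-1], threshold=0)       # inhibition flips the signal

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
    ```

    Lateral inhibition works on a broadly similar principle, with neighboring units connected by negative (inhibitory) weights.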

    I believe that neuroscience and associated fields in general will require better architecture from which to truly progress. By this I mean, for example, that the Wikipedia consciousness page will need to provide generally accepted definitions and understandings rather than its current mishmash of speculation. The following is a broad overview of my own proposed architecture, and I’d love to go deeper if there are any questions.

    In the diagram above I partition “mind” into two distinct classifications. The main one, which I suspect is more than 99% of the total, is not conscious. It functions essentially as our computers do, and so algorithmically processes inputs that provide associated outputs. In his lecture Pinker gave us a wonderful anatomical demonstration of this by describing neurons and their networks. Apparently this normal type of computer wasn’t sufficient, however, since an auxiliary “conscious” computer was built as well. (If anyone is interested, I do have a theory regarding why non-conscious function alone was not sufficient.)

    Then moving over to the relatively small conscious computer (which functions through the non-conscious one), this seems to emerge at a higher level than those neuron-incited logic operations. I consider it crucial to note that this sort of computer instead functions by means of a punishment/reward dynamic. While existence seems to have no personal implications for anything else, dead or alive, existence can be good/bad for a functioning conscious computer. This element is classified in my diagram as “affect/utility/happiness.” Once this aspect of existence becomes accepted as constituting the welfare of any defined subject, I believe that it will formally be implemented. There is surely nothing that we are in greater need of than an effective ideology from which to lead our lives and structure our societies.

    I left the non-conscious side of the diagram essentially open, since I presume that this computer takes in countless forms of input, uses many processing instruments to algorithmically go through them, and then provides countless associated outputs. One thing that the non-conscious mind seems to control, for example, is the beating of a heart. Temperature, activity, chemical substances, nervousness, and so on are some of the many non-conscious inputs that alter heartbeat once processed.
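    As a purely illustrative sketch of that kind of algorithmic input-to-output processing, here is a toy mapping from a few inputs to a heart rate. The baseline, coefficients, and clamping range are invented for the example and are not physiology.

    ```python
    # Purely illustrative: a toy mapping from a handful of inputs to a heart rate,
    # standing in for non-conscious input -> output processing. All numbers are
    # invented for the example.

    def heart_rate(temperature_c, activity_level, stimulant_dose, nervousness):
        """Return a toy heart rate (beats per minute) from a few inputs.

        activity_level, stimulant_dose, and nervousness are assumed to be 0.0-1.0.
        """
        rate = 60.0                                    # invented resting baseline
        rate += 40.0 * activity_level                  # exertion raises the rate
        rate += 20.0 * stimulant_dose                  # e.g. caffeine
        rate += 15.0 * nervousness                     # anxiety raises it too
        rate += 2.0 * max(temperature_c - 37.0, 0.0)   # fever effect
        return min(max(rate, 40.0), 200.0)             # clamp to a plausible range

    print(heart_rate(temperature_c=37.0, activity_level=0.0,
                     stimulant_dose=0.0, nervousness=0.0))  # resting: 60.0
    print(heart_rate(temperature_c=38.5, activity_level=0.7,
                     stimulant_dose=0.3, nervousness=0.2))  # elevated
    ```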

    Fortunately I’m able to get far more specific regarding the conscious computer. I’ve already mentioned the “motivation input”, which is theorized to constitute all that’s personally good/bad to anything that exists. Then the “information input” addresses things like vision, hearing, and so on, but without any value element. Thus a bad taste provides two kinds of input: punishment, as well as information associated with the chemical signature of what’s tasted. The last type of input to the conscious processor exists as a degraded recording of past conscious processing. I won’t go into it unless questioned, though memory seems quite crucial for effective conscious function.

    I theorize the conscious processor to function in a specific way. It (1) interprets inputs, and (2) constructs scenarios, in the quest to (3) promote present personal value. A person might identify a sound, for example, construct a plausible scenario about why it was heard, and then go on to various responses given the quest to promote present happiness. The only non-thought form of conscious output that I’ve been able to identify is “muscle operation”.
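    To make that three-step loop concrete, here is a minimal sketch that assumes present value can be scored numerically. The input types follow the previous paragraph, but every name, scenario, and score is invented for illustration rather than being part of the model itself.

    ```python
    # Minimal sketch of the proposed conscious processor: it takes valence
    # ("motivation"), sense data ("information"), and memory inputs, interprets
    # them, constructs candidate scenarios, and picks the one expected to best
    # promote present value. All names and numbers are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class ConsciousInputs:
        valence: float                               # current punishment (-) / reward (+)
        information: dict                            # value-free sense data
        memory: list = field(default_factory=list)   # degraded records of past processing

    def interpret(inputs):
        """Step 1: identify what the information input likely is."""
        sound = inputs.information.get("sound")
        return "someone at the door" if sound == "knock" else "unknown"

    def construct_scenarios(interpretation):
        """Step 2: propose responses, each with a guessed effect on present value."""
        if interpretation == "someone at the door":
            return [("answer the door", +0.5), ("ignore it", -0.2)]
        return [("do nothing", 0.0)]

    def promote_present_value(scenarios):
        """Step 3: pick the scenario expected to feel best right now."""
        action, _ = max(scenarios, key=lambda s: s[1])
        return {"muscle_operation": action}          # the only non-thought output proposed

    inputs = ConsciousInputs(valence=0.0, information={"sound": "knock"})
    print(promote_present_value(construct_scenarios(interpret(inputs))))
    # -> {'muscle_operation': 'answer the door'}
    ```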

    Why would I assert that consciousness concerns present value exclusively, when we know that people also do things for their future welfare? I make this assertion because concern about the future seems to occur through the present reward of “hope”, and the present punishment of “worry”. Here existence has a temporal component going forward, while the provided “memory” input adds the past as well.

    I’ll also briefly mention the diagram’s “Learned Line”. You may recall Pinker asserting that neurons can develop very specific conditions under which to fire. Apparently a neuron was even found in one person that would only fire in response to an image of Jennifer Aniston (a response which surely was conditioned rather than inherent). Furthermore, I’m quite sure that I have an extensive set of neurons for identifying police cars, since while driving I seem to instantly recognize them and reasonable copies. I consider the conscious processor to be extremely limited, though a great deal of what consciousness takes credit for, like identifying people and police cars, seems to be farmed out to the non-conscious processor by means of this line. Why does it take so long to become a good driver? Because it takes years to get the neurons effectively conditioned for this so-called conscious task.
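    As a rough sketch of such a tuned, conditioned unit, here is a toy detector that only fires when an input feature vector closely matches a learned template. The features and threshold are invented, and real recognition circuitry is of course vastly more elaborate than a single template match.

    ```python
    # Toy "tuned unit": after conditioning on a template, it fires only when an
    # input feature vector closely matches that template, loosely analogous to a
    # neuron that responds selectively to police cars.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Invented feature order: [light bar, black-and-white paint, roof antenna, siren]
    POLICE_CAR_TEMPLATE = [1.0, 1.0, 1.0, 1.0]

    def police_car_unit(features, threshold=0.9):
        """Fire (return 1) only for inputs that closely match the learned template."""
        return 1 if cosine_similarity(features, POLICE_CAR_TEMPLATE) >= threshold else 0

    print(police_car_unit([1.0, 1.0, 0.9, 1.0]))  # marked patrol car -> 1
    print(police_car_unit([0.0, 0.2, 0.1, 0.0]))  # ordinary sedan    -> 0
    ```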

    Notice that by consciously opening and closing your hand, you’re not really doing the many amazing things that make this occur, but rather telling the non-conscious computer to take care of it for you. So while there is surely a vast supercomputer in our heads, it isn’t conscious, regardless of how smart we like to consider ourselves. I consider myself pretty clever for figuring out the mind architecture summarized above, though that’s mostly taking credit for the theorized feat of a machine that isn’t even conscious.


    • Thanks Eric. We’ve discussed many of your points before, so I’ll try to focus on areas we haven’t, or at least that I can’t recall us discussing.

      I wouldn’t judge the state of neuroscience by the Wikipedia consciousness page. I can’t say I’m intimately familiar with that article, but I know it tries to take a broad inclusive approach, which means a lot of the less scientific views probably get more coverage than either of us would care for. Still, if it’s the first thing someone has ever read on the subject, it’s hard to argue they shouldn’t get at least passing references to those viewpoints. I think many of those viewpoints are nonsense, but a new reader has the right to reach their own conclusions on them.

      We’ve discussed the differences in how we see the division between consciousness and non-consciousness; I see it as much blurrier than you do. I’m mentioning it here because my view might have evolved slightly since we last discussed it.

      My current thinking is that, for the demarcation between consciousness and non-consciousness, we can divide brain information processing into three broad categories: processing that takes place autonomously (heart rate, breathing, hormones, etc.), including when we’re asleep; processing that only takes place when we’re awake and interacting with the world; and processing accessible to introspection, which we are or can be “conscious” of. Crucially, if we can’t introspect something, are we conscious of it? Introspection seems to be most closely focused on whatever simulations we’re currently running.

      I do have one quibble about your comments on memory, particularly to the extent you were referring to episodic memory. Episodic memory is not a recording, but a reconstruction of a past event based on semantic memory points. In other words, it’s a simulation just like the simulations we run for potential future scenarios. That’s why it’s so unreliable, particularly for long-ago events.
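      To illustrate the distinction, here is a toy sketch of recall as reconstruction: only a few semantic points about an event are stored, and missing details get filled in from general expectations, which is exactly where the unreliability comes in. The stored points and schema are invented for the example.

      ```python
      # Toy illustration of episodic recall as reconstruction rather than playback:
      # sparse semantic "points" are stored, and gaps are filled from a generic
      # schema, so reconstructed details can be plausible but wrong.

      STORED_POINTS = {"event": "birthday dinner", "who": "Sam", "city": "Austin"}

      SCHEMA = {  # generic expectations used to fill gaps
          "birthday dinner": {"food": "cake", "weather": "warm"},
      }

      def recall_episode(points, schema):
          """Rebuild an 'episode' from sparse points plus schema-based gap filling."""
          episode = dict(points)
          for key, guess in schema.get(points["event"], {}).items():
              episode.setdefault(key, guess)  # filled-in details may be wrong
          return episode

      print(recall_episode(STORED_POINTS, SCHEMA))
      # The 'food' and 'weather' details were never stored; they were reconstructed.
      ```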

      On your diagram, I think we’ve discussed this before, and you seem to allude to it in your last paragraph, but just in case: it seems like muscle operation can also be an output of non-conscious processing, and I’m not just talking about the heart here. Think about all the times we’re physically doing something more or less on “automatic” while consciously pondering something completely different.

      As always, appreciate the discussion!

