Massimo on consciousness: no illusion, but also no spookiness

Massimo Pigliucci has a good article on consciousness at Aeon.  In it, he takes aim both at illusionists and at those who claim consciousness is outside the purview of science, although I’d say he’s more worked up about the illusionists.

However, rather than taking the typical path of strawmanning the claim, he deals with the actual argument, acknowledging what the illusionists are actually saying, that it isn’t consciousness overall they see as an illusion, but phenomenal consciousness in particular.

First, in discussing the views of Keith Frankish (probably the chief champion of illusionism today):

He begins by making a distinction between phenomenal consciousness and access consciousness. Phenomenal consciousness is what produces the subjective quality of experience, what philosophers call ‘qualia’

…By contrast, access consciousness makes it possible for us to perceive things in the first place. As Frankish puts it, access consciousness is what ‘makes sensory information accessible to the rest of the mind, and thus to “you” – the person constituted by these embodied mental systems’

He then presents a fork similar (although not identical) to the one I presented the other day.

Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science,

Actually I’m not sure I agree with the first part of the first sentence, that phenomenal consciousness is necessarily something separate and apart from access consciousness.  To me, phenomenal consciousness is access consciousness, just from the inside; that is, phenomenal consciousness is what it’s like to have access consciousness.

But anyway, Massimo largely agrees with the illusionists in terms of the underlying reality.  But he disagrees with calling phenomenal consciousness an illusion.  He describes the user interface metaphor often used by illusionists, but notes that actual user interfaces in computer systems are not illusions, but crucial causal mechanisms.  This pretty much matches my own view.

I do think illusionism is saying something important, but it would be stronger if it found another way to express it.  Michael Graziano, who has at times embraced the illusionist label but backed away from it in his more recent book, notes that when people see “illusion”, they equate it with “mirage”.  For the most hard-core illusionists, this is accurate, albeit only for phenomenal consciousness; others use “illusion” to mean “not what it appears to be.”  It seems like the word “illusion” shuts down consideration.

It’s why my own preferred language is to say that phenomenal consciousness exists, but only subjectively, as the internal perspective of access consciousness.  It’s the phenomenon to access consciousness’s noumenon.

I do have a couple of quibbles with the article.  First is this snippet:

but I think of consciousness as a weakly emergent phenomenon, not dissimilar from, say, the wetness of water (though a lot more complicated).

I’m glad Massimo stipulated weak emergence here.  And I agree that the right way to think about phenomenal consciousness is existing at a certain level of organization.  (And again, from a certain perspective.)

But I get nervous when people talk about consciousness and emergence.  The issue is that, of course, consciousness is emergent, but that in and of itself doesn’t really explain anything.  We know temperature is emergent from particle kinetics, but we do more than know that it emerges; we understand how it emerges.  I don’t think we should be satisfied with anything less for consciousness.

The involved neurons also need to be made of (and produce) the right stuff: it is not just how they are arranged in the brain that does the trick, it also takes certain specific physical and chemical properties that carbon-based cells have, silicon-based alternatives might or might not have (it’s an open empirical question), and cardboard, say, definitely doesn’t have.

Massimo has a history of taking biological naturalism type positions, so I’m happy that he at least acknowledges the possibility of machine consciousness here.  And I suspect his real targets are the panpsychists.  But I’m a functionalist and see functionality (or the lack of it) as sufficient to rule out those types of claims.  When people talk about particular substrates, I wish they’d discuss what specific functionality is only enabled by those substrates, and why.

But again, those are quibbles.

It follows that an explanation of phenomenal consciousness will come (if it will come – there is no assurance that, just because we want to know something, we will eventually figure out a way of actually knowing it) from neuroscience and evolutionary biology, once our understanding of the human brain will be comparable with our understanding of the inner workings of our own computers. We will then see clearly the connection between the underlying mechanisms and the user-friendly, causally efficacious representations (not illusions!) that allow us to efficiently work with computers and to survive and reproduce in our world as biological organisms.

Despite a caveat I’m not wild about, amen to the main point!


63 Responses to Massimo on consciousness: no illusion, but also no spookiness

  1. paultorek says:

    Even though I think of phenomenal consciousness as something the brain *does*, substrate might still be important, insofar as what something can do depends on what it is. The whole-person level of behavior is too coarse, in my opinion, to guarantee consciousness-as-we-know-it. On the other hand, the molecular level is too fine grained. If (and it’s a big if) you can get a silicon-based “neuron” to talk to the other ~10,000 neurons it connects to in the exact same way a carbon-based one would, you’re golden. Phenomenal qualities are how one brain sub-network talks to the others. But if you take out too many of the dance moves, it’s not the same dance, even if it sounds the same to people outside the dance hall.

    Liked by 1 person

    • So the idea is that consciousness requires a physical neural network. And even if we manage to produce the same outputs with an equivalent software neural network, it wouldn’t be conscious. It would be a behavioral zombie.

      Perhaps, but what in particular does this physics enable that is lost with the alternate physics that produces the same information flow? And how could we ever test this proposition?

      Like

      • Wyrd Smythe says:

        “And even if we manage to produce the same outputs…”

        I suspect that’s the kicker, getting the anticipated outputs. (I’ve argued you might not.) I think if you do get the right outputs over time (the Rich Turing Test), then you’ve proved computationalism correct.

        Like

        • I agree. If we’re getting the right outputs, it becomes hard to argue we don’t have the inner stuff, or at least something similar.

          If we don’t get the right outputs, then the game becomes playing around with the architecture to figure out exactly what about the original topology is making the difference.

          I do think it’s very feasible that the software version could function, but be too slow for anything useful. Performance issues could drive us toward a hardware architecture more similar to biological brains, although with the much higher speeds of computer chips, it might not have to be nearly as parallel as the biological versions. Maybe.

          Like

          • Wyrd Smythe says:

            Somewhere I read about near-field effects — a neuron firing can affect nearby neurons even though they aren’t connected. It seems increasingly clear the connectome alone isn’t sufficient (but is almost certainly necessary).

            Certainly the synapses matter. Things like glial cells or myelin sheathing appear to matter. Near-field effects may matter (they happen so I would assume they matter), which means spatial geometry matters.

            That’s a lot to simulate. (It’s apparently what’s happening in Stephenson’s Fall; or, Dodge in Hell. It’s a physics sim running on entire server farms of quantum computers.) The connectome alone is a ton of data.

            Liked by 1 person

          • James Cross says:

            Regarding myelin sheathing, I think dendrites, including the apical dendrites, have no sheathing; some axons lack sheathing as well, and even the axons that have it have discontinuous sheathing.

            There may be a bigger role for glial cells, since it was discovered they are likely not just insulators or structural elements but can have an electrical potential themselves.

            If the connectome model isn’t sufficient, then it would seem that computational models still have a lot of explaining to do in regard to how information actually flows in the brain.

            Like

          • I posted a while back about a study that had managed to demonstrate neighboring neurons communicating with fields, but only in an artificial environment. It remains a speculative idea for neurons in actual brains, where neurons are typically separated by glia.

            One of the things we have to be careful about, as amateurs reading studies or reports of studies, is the difficulty of fitting them into an overall context. It’s why I prefer getting my understanding from neuroscience books, which assimilate all that into a more coherent picture. They’re less up to date, but the authors usually have a good grasp on what are fringe results versus what has a lot of corroborating research. Lamentably, the books cost money (a lot of money for hardcore neuroscience ones), and take time to parse.

            Like

          • James Cross says:

            I have a comment under moderation with a lot of links to ephaptic coupling.

            Like

          • It got flagged because of the number of links.

            Despite the quantity, the recent neuroscience books I’ve read still don’t mention it, indicating it remains of questionable validity for most neuroscientists. Of course, that could change as more data is accumulated.

            Like

          • James Cross says:

            Yeah, I know why it got flagged. I have somebody (Wes) at my site who is always getting flagged for the same reason.

            The first article has Koch as a co-author and was published in Nature so the idea isn’t completely fringe.

            Like

          • And yet, Koch didn’t think it worth mentioning in his latest book, even when discussing local field potentials. He does mention it in his earlier book, and even cites that paper. I wonder what changed between 2012 and 2019.

            Like

          • James Cross says:

            Here’s another Koch paper from 2015.

            https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809

            “There has been a revived interest in the impact of electric fields on neurons and networks. Here, we discuss recent advances in our understanding of how endogenous and externally imposed electric fields impact brain function at different spatial (from synapses to single neurons and neural networks) and temporal scales (from milliseconds to seconds). How such ephaptic effects are mediated and manifested in the brain remains a mystery. We argue that it is both possible (based on available technologies) and worthwhile to vigorously pursue such research as it has significant implications on our understanding of brain processing and for translational neuroscience.”

            It looks like Koch has his name on about 30-40 papers a year. This is not easy to study since we are trying to measure weak EM fields at a cellular level, but any instruments we use to measure effects are likely to be intrusive to what we’re trying to measure.

            Like

          • If ephaptic coupling is a factor, I tend to think it will be in enhancing the peaks of waves. It might provide more oomph. The peak of the wave already makes neurons more likely to fire than at the trough. It might also make the trough less deep. Maybe.

            On Koch and papers, he’s the chief scientist for the Allen Institute, so I suspect he ends up reviewing the work of a lot of studies, which is probably why he shows up at the tail end of so many author lists.

            Like

          • James Cross says:

            “If ephaptic coupling is a factor…”

            Yes, certainly could be that. More neurons firing at the same time seems to be associated with conscious perception. There is an EM field feedback thing at work too. As more neurons fire, they trigger more neurons to fire, which also increases the strength of the EM field, which triggers more neurons to fire and could spread the firing to other parts of the brain. I think in my post I compared this to a tuning fork resonating because it is in close proximity to a vibrating one.

            Part of the theory would be that conscious perception occurs when the EM field reaches a certain threshold strength but that would also be likely the same time that a large number of neurons are firing in sync. So it could support either a theory that the neurons firing are responsible for consciousness or the EM field is. In a sense, the two may be opposite sides of the same argument.

            Like

          • When talking about consciousness, we always have to be careful to distinguish between arousal (wakefulness) and awareness. Higher frequency oscillations (gamma waves) are definitely associated with arousal. But as I understand it, even someone in a vegetative state can go through sleep/wake cycles, including transitioning between different wave frequencies, but with no awareness.

            Awareness seems to require more selective activation. It’s dependent on the overall oscillations, but as more of a prerequisite than a transmission mechanism.

            Like

          • James Cross says:

            That gets tricky because some evidence suggests that some in an apparent vegetative state are partially conscious.

            You can also bring in Koch’s consciousness meter, which he does discuss in his latest book, and which works by zapping the brain with a magnetic pulse and then analyzing the resulting EEG patterns.

            Like

          • The thing is, to detect the consciousness of those in an apparent vegetative state you need a brain scanner, one that will show differentiated activity compatible with what the patient is being asked to think about.

            There’s no doubt that a magnetic pulse can induce neural activity. The question is whether neurons themselves generate anything sufficient for that, and if so, whether it’s anything significant in terms of information processing.

            Like

      • paultorek says:

        By hypothesis, it’s different physics, so you can verify it by doing a physical analysis. Also, it’s *not* “the same information flow” except at a very coarse-grained, whole organism level. And *behaving* a certain way at the whole-organism level is not exhaustive of what I care about. I also care about *feeling* certain ways. And I resolve the semantic ambiguity of “feeling” by accepting that the meaning is spread across the many equally-good size/activity scales for that referent.

        Like

        • Right. But a system that behaves like the original would have to have similar intermediate processing states to that original. They may not be the same exact feeling states as the original, but they’d have to have their own version of those states. But from the new system’s perspective, it would look at the original and wonder if it had the same feeling states as it. Neither system would have access to the other’s internal experience. They could only extrapolate from their own experience what the other system was feeling.

          If it was a mind uploading situation, the copy’s consciousness could be utterly different from the original’s. But the copy would remember its experience as the original in terms of its current experience. It would never be able to know the difference. If the original system felt pain, but it only feels pain-alt, its memory of feeling pain as the original would be in terms of pain-alt. It would never be able to make a comparison, or even know the difference.

          Like

          • paultorek says:

            “a system that behaves like the original would have to have similar intermediate processing states to that original.”

            I don’t see why. Show me the math?

            “from the new system’s perspective, it would look at the original and wonder if it had the same feeling states as it.”

            Yes, exactly. Like my robot from a few posts back, who pities us poor humans that lack compsciousness, making do with mere consciousness instead.

            Like

          • Show the math? If I could do that I’d be having it peer reviewed. 🙂 No doubt all kinds of different internals are possible, but to get the same outputs given the same inputs, at least equivalent information flows have to happen. There are many ways to write a word processor, but all of them are going to involve holding a representation of the document in some manner.
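            To make the word-processor point concrete, here’s a toy sketch (invented classes, not anyone’s actual code): two “word processors” with different internals that expose the same operations and produce the same outputs, while each still holds some representation of the document.

            ```python
            # Toy sketch: two "word processors" with different internal representations
            # (one string vs. a list of lines) that behave identically from the outside.
            class StringDoc:
                def __init__(self):
                    self.text = ""              # whole document held as one string

                def append_line(self, line):
                    self.text += line + "\n"

                def render(self):
                    return self.text


            class LineListDoc:
                def __init__(self):
                    self.lines = []             # document held as a list of lines

                def append_line(self, line):
                    self.lines.append(line)

                def render(self):
                    return "".join(line + "\n" for line in self.lines)


            for Doc in (StringDoc, LineListDoc):
                doc = Doc()
                doc.append_line("Hello")
                doc.append_line("world")
                print(Doc.__name__, doc.render() == "Hello\nworld\n")   # both print True
            ```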

            I totally missed the compsciousness spelling before. Nice. I agree. Of course, whether compsciousness is just a variant of consciousness or another thing in and of itself is in the eye of the beholder.

            Like

          • paultorek says:

            I thought you were claiming more than that the input-output mapping, when treating the whole body as a black box, is the same. I thought you were implying that the top-level internal structures would have to be isomorphic too.

            Like

          • I don’t think it would have to be structural isomorphism, but there would have to be functional isomorphism, at least to some level of approximation/precision.

            Like

          • paultorek says:

            I am still unconvinced of the functional isomorphism implication. A Giant Look Up Table (GLUT) provides the same whole-organism input-output mapping, but strikes me as a very different functional arrangement.

            By the way, I just now got the email from Aeon offering a link to Massimo’s piece. It’s great, except for the way he sets up “the fundamental divide”. He leaves his own view out of the list of options in this “fundamental divide”! The situation in philosophy of mind is not quite as bleak as he paints it. Views like his and mine are not exactly rare, even if they don’t get as much attention as (duh!) they deserve.

            Like

          • The problem with GLUTs is that they rapidly grow out of hand. A GLUT large enough to do anything sophisticated quickly exceeds the size of the universe. Any possible system will have to use shortcuts and optimizations, with performance, capacity, and energy constraints causing a convergence to a limited number of options.
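            To put a rough number on that claim, here’s a back-of-the-envelope sketch (all figures are made-up illustrative assumptions): even a modest binary sensory history makes the table dwarf the roughly 10^80 atoms in the observable universe.

            ```python
            # Back-of-the-envelope sketch with made-up numbers: how many rows a GLUT
            # would need if it mapped a coarse binary sensory history directly to outputs.
            frame_bits = 1000        # hypothetical bits in one coarse sensory snapshot
            history_frames = 10      # hypothetical number of past frames the table keys on

            rows_needed = 2 ** (frame_bits * history_frames)
            atoms_in_observable_universe = 10 ** 80      # commonly cited rough estimate

            print(rows_needed > atoms_in_observable_universe)   # True
            print(f"rows needed ~ 10^{int(frame_bits * history_frames * 0.30103)}")  # ~10^3010
            ```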

            I can’t say I have a very positive opinion on most of the philosophy of mind. The field does occasionally produce possibly useful hypotheses. Higher order thought and predictive coding theories come to mind. But too much of it seems preoccupied with rationalizing why dualism or other cherished intuitions are true after all, dreaming up one impossible “thought experiment” after another as justification.

            Like

          • paultorek says:

            A GLUT is not a physical possibility, but it is a mathematical one. For a more realistic analogy, consider electric cars vs. internal combustion cars. Both types of car will get you to work and shopping, but they’re not functionally isomorphic under the hood. For one thing, electric cars often don’t need a transmission. Low-end torque is no problem for them. That example isn’t drawn from psychology or neurology, but I don’t see why it can’t be relevantly analogous.

            Like

          • And yet they’re functionally equivalent enough for us to regard both of them as cars. We can go around on this forever, but ultimately there’s no fact of the matter. Functional equivalence will always be a matter of interpretation. Whether another system with different internals that can do conscious things is conscious will ultimately be a matter of philosophy.

            Like

    • Paul,
      Lately some of us have been entertaining ourselves over at James Cross’s blog by considering the potential for phenomenal existence to be caused by neuron-based electromagnetic radiation. So here consciousness would not just be a product of generic information processing (and even by means of a Chinese room), but rather a dynamic which certain material-based physics produces by means of associated causal function.

      Observe that if certain EM fields happen to be responsible, then appropriate radiation from all over the brain might contribute to create a single conscious entity, or one experiencer each moment rather than all sorts. So here the ethereal properties of EM waves could provide a ready made solution for the combination problem. If you haven’t yet taken a look, you might enjoy it.
      https://broadspeculations.com/2019/12/01/em-fields-and-consciousness/

      Liked by 1 person

    • James Cross says:

      “If (and it’s a big if) you can get a silicon-based “neuron” to talk to the other ~10,000 neurons”

      I’ll take your ~10,000 to be a correct estimate. But here is where I have a problem that has lingered in the back of my thoughts for a while.

      If neurons have that many connections and that many connections are required for consciousness, then how can neural signals using solely chemicals and ion flows possibly work fast enough and in sync over the billions of neurons and 10K * billions of connections to produce any sort of unified experience or a functioning consciousness? It might be fast enough in silicon, but that wouldn’t explain how it works with chemistry.

      Like

    • James Cross says:

      Just to expand on this a little bit (and maybe Wyrd can do some good math with this).

      In the argument for perception in the frontal lobes

      Like

      • James Cross says:

        Let me try that again.

        To expand on this a little bit (and maybe Wyrd can do some better math with this than I can).

        In the argument for perception in the frontal lobes, visual perceptions form in the frontal lobe. I assume a significant amount of processing still takes place in the visual cortex. I think there are a few inches of brain between the visual cortex and the frontal lobe, so I would assume some number of neurons (100-200?) would need to fire almost in sync in the visual cortex and pass information through some number of connections (100?) to reach the frontal lobe, where the information would need to be passed around over another number of neurons (25?) to generate the perception. I think a synapse firing needs about 2-3 ms. Is there enough time for this to even be a possible way for information to flow in the brain?

        Like

        • James,
          You might be overestimating how fast the mind works. Consciousness in particular is pretty slow by nervous system standards. (Nonconscious processes actually happen much faster.)

          But action potentials reportedly propagate at somewhere between 1 and over 100 meters per second (depending on the axon thickness and amount of glial insulation). So I don’t think the distance between the parts of the brain is that much of an issue.

          What might be more of a factor is the number of chemical synapses in the circuit, since the conversion from electric to chemical and back to electrical signaling takes time.

          I’d also say that the notion that there is a full resolution image of the perception in the frontal lobes is one I’m personally skeptical about. That seems computationally wasteful. I suspect it’s more complicated. I would think the frontal lobes have action plans, with parameters set from the imagery in the visual cortex and higher resolution sensory regions. In other words, each representation higher in order needs to add value on the transformation from sensory input to motor output. But I’ll admit this is my own speculation.
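          A rough worked example of those numbers (every value here is an illustrative assumption, not a measurement): conduction delay across the brain is small compared to perceptual timescales, and the total only stays small if the number of chemical synapses in the route stays small, which is part of why hub structures matter.

          ```python
          # Rough latency estimate for a visual-cortex-to-frontal-lobe route.
          # Every number here is an illustrative assumption, not a measurement.
          path_length_m = 0.15         # assumed front-to-back distance through the brain
          conduction_speed_m_s = 10.0  # mid-range conduction speed (reported 1 to 100+ m/s)
          synapses_in_route = 5        # assumed number of chemical synapses along the way
          synaptic_delay_s = 0.002     # ~1-5 ms per chemical synapse

          conduction_s = path_length_m / conduction_speed_m_s
          synaptic_s = synapses_in_route * synaptic_delay_s
          total_ms = (conduction_s + synaptic_s) * 1000

          print(f"conduction ~{conduction_s * 1000:.0f} ms, "
                f"synapses ~{synaptic_s * 1000:.0f} ms, total ~{total_ms:.0f} ms")
          # conduction ~15 ms, synapses ~10 ms, total ~25 ms: well under typical
          # conscious-perception timescales, but only while the synaptic hops stay few.
          ```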

          Like

          • James Cross says:

            In the 1-100 meters per second, does that factor in that the signal(s) may need to pass through multiple synapses? I mean, it isn’t like there is a direct line from the neurons in the visual cortex to the neurons in the frontal cortex. Every neuron doesn’t have a direct connection to every other neuron. I am seeing 1-5 ms for a synapse to fire, but if the overall information needs to go through 100 synapses (of course being modified along the way), then we are in the 100-500 ms range.

            Like

          • I think this is where the importance of pyramidal neurons and the thalamus comes in. They serve as network hubs and reduce the number of hops between disparate regions.

            Like

          • James Cross says:

            I guess I’m lacking some basic understanding of how signals move about but it might not be just me. I’m not exactly finding clear explanations anywhere else either.

            There is plenty of stuff about how a single neuron fires. Most of it does not use pyramidal neurons as an example, however.

            To get from the visual cortex to the frontal cortex where do the signals propagate? Deep in the brain or on the surface of the cortex where the apical dendrites terminate? Some other way?

            The signals must go through multiple logic gates (if this is how it works) to get routed to the right spot, so there must be processing along the way.

            Like

          • From what I’ve read, most of the information between cortical regions flows through the thalamus, although there are also a lot of cortex-to-cortex connections (the axon drops out of the cortex but runs to another cortical region), and a lot of information flows through regions like the corpus callosum. There’s also a lot of indirect information that flows through other subcortical regions such as the basal ganglia, amygdala, etc.

            There is information that flows laterally through the cortex itself, but that really is one region affecting adjacent regions. I wouldn’t think it’s a meaningful pathway between, say, the occipital lobe and the frontopolar prefrontal cortex.

            That said, don’t trust anyone who confidently tells you they understand how all this flows yet. A lot is known about the connections, but there are still major gaps in what they mean.

            Like

  2. PJMartin says:

    Thinking in terms of how to engineer something phenomenally conscious, I’d want to pay more attention to that part of our mind that views and does something with the ‘illusion’ or the ‘user interface’. In effect then we can design this thing (the subset of mind that is the conscious entity that is waiting for some content) so that ‘what it is like’ for it to have certain conscious content presented to it translates directly into specific features of its architecture. We should then be able to study it and see why it would feel like it does in a given circumstance. The feeling is real, but it is subjective to that entity and played out in what it does over time. For me that feeling is bound up with the relation between external senses, internal representations, positive or negative valence, potential attention/action sets and predicted outcomes, both sensory and valenced. (Sometimes wish I could insert a picture here!)

    Like

    • James Cross says:

      In general I like the user interface analogy but I think it breaks down at some point. I think the thing that does something with the user interface is part of the interface. It isn’t something different. If there is an illusory aspect to consciousness, it is the illusion of something apart from the interface.

      Like

    • I agree. I think there’s a lot to be said for looking at the boundary between sensory and action-oriented processing. Of course, that happens at multiple levels: innate reflexive reactions, habitual reactions. But the level we’re interested in is the planning one, or at least where planning may potentially happen.

      On inserting a picture: if you have it on the web somewhere and paste in a URL to a gif or jpg, WordPress usually renders whatever the image is. (Although not in the email notifications, just on the website.)

      Like

    • [been tryin to decide where to jump in … choosing here]

      My goal here is to explain my current best understanding of how this whole thing could be engineered. Take it as a thought experiment.

      First, I have to explain that, thanks to a tweet from Keith Frankish (the illusionist), I have become aware of Ruth Millikan’s most recent book: Beyond Concepts. I only read Part I so far, but that’s been enough. Millikan’s work is about semantics, and more specifically, how natural systems develop semantics, which she refers to as biosemantics and we here have occasionally called teleonomic semantics.

      The goal of Part I of her book is to explain unitrackers and unicepts. A unitracker is a mechanism [!!!] that tracks one thing, which she usually calls the target. It can get input from all over: multiple senses, memories, etc. She avoids using the term “concept” because that term carries certain baggage in at least one of her fields (cognitive linguistics?), but using a naive, popular understanding, the “target” being tracked is essentially one concept (possibly a unicept, but see below). So there might be a unitracker for “cat”, and one for “tiger”, and one for “that tiger there”. Unitrackers can be essentially permanent (cat), temporary (that cat there), can change (that cat there is named “Biscuit”), can go away (“no, I don’t remember a cat named Biscuit”). Unitrackers can track abstractions, like the president of the U.S.

      A unicept (coined by Millikan to get away from baggage mentioned above) is the target of a unitracker that can get into consciousness, i.e., a target you can become aware of. Millikan never really talks about consciousness, but this is my understanding of the difference between targets which are unicepts and targets which are not unicepts.

      Millikan does not speculate as to the neural mechanisms of unitrackers, but that doesn’t mean I can’t. I’ll be blunt and brief so that I can put it into the context of this discussion. I speculate the unit of the unitracker is the cortical column (more or less). Thus, the whole cortex is pretty much unitrackers. Some of these unitrackers track direct sensory inputs, so for vision they track whatever is coming over the optic nerve. Because some processing has happened in the retina, these aren’t necessarily just pixels. Some unitrackers use these primary trackers to track low level visual features, like edges, boundaries, etc. And then more unitrackers use these secondary trackers, and so on.

      Note: activation of specific unitrackers can affect the input unitrackers. So if the “cat” unitracker is already activated, it may influence its input unitrackers, like “eyes”, “fur”, etc. You can consider that predictive feedback.

      So how do we get consciousness? We (some of us?) have talked about multidimensional vectors, and how they can be used to combine (and subtract) concepts. So, king – man + woman = queen. What if you had a reasonably small set of neurons, say, 500-1000, that could instantiate such vectors and the functional vector operations. Chris Eliasmith has demonstrated computer simulated biologically plausible neurons that can do this. What if any given unitracker sent exactly one axon into this vector network such that, by itself, it induced a unique vector. That unique vector would count as a representation of the target of the unitracker. Thus the vector network would have the capacity to represent each unitracker that sent in an axon. Now what about the output of this vector network? What if the output went broadly out to other unitrackers. These unitrackers could use this as input with regards to their individual targets. So if the vector network was showing “tiger”, the “cat” unitracker might become more activated, but the “sand” or “Trump” or “flying” unitrackers not so much.
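      Here’s a toy sketch of that vector-network idea (my names and numbers are invented, and the real proposal would be neural, not numpy): each unitracker owns a fixed random high-dimensional code, the workspace state is the superposition of the active codes, and every unitracker reads the workspace back with a dot product.

      ```python
      import numpy as np

      # Toy sketch of the vector network as a shared workspace. Each unitracker owns a
      # fixed random code vector; the workspace state is the sum of the active codes;
      # every unitracker "reads" the workspace by dotting it with its own code.
      rng = np.random.default_rng(0)
      dim = 512
      names = ["cat", "tiger", "stripes", "fur", "sand", "flying"]
      codes = {name: rng.standard_normal(dim) / np.sqrt(dim) for name in names}

      # Instantiate "tiger" and "stripes" in the workspace (superposition of their codes).
      workspace = codes["tiger"] + codes["stripes"]

      for name, code in codes.items():
          print(f"{name:8s} activation ~ {code @ workspace:+.2f}")
      # "tiger" and "stripes" read back near +1; the rest hover near 0. Making "cat"
      # respond to "tiger" would need learned connections between unitrackers, which
      # this toy leaves out.
      ```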

      So unicepts are the targets of those unitrackers which can be instantiated in a vector network. The instantiation of a unicept in the vector network is an experience. The qualia associated with this experience is the unicept, i.e., the target of the unitracker.

      Note that this vector network is effectively a global workspace. All of the unicepts can be considered to be competing to be instantiated. Whatever controls which unicepts are in fact instantiated is essentially controlling attention.

      Note also there is no single “viewer” of this workspace. Instead, there is an audience of unitrackers.

      Note also that there are a wide variety of unitrackers. Some can be goals (get beer from fridge), some can be action plans (move body to fridge, open door, grab beer), some can be physical objects (fridge, beer), some can be words (“fridge”, “beer”).

      So is this plausible? Supported by evidence?

      *
      [of course, the Prefrontal Cortex is the home of goal-type unitrackers]

      Liked by 1 person

      • Much of it sounds plausible to me, although I’d have to read more to understand the exact value in the specific terminology. My issue with semiotics has been not that I think it’s wrong, but that I’m not sure what value its blizzard of definitions brings.

        The part I’d be most skeptical of is equating a unitracker with a cortical column. I doubt the mapping would be that clean. I tend to think a unitracker would be a vast hierarchy of activations, converging on a small area (Damasio’s convergence-divergence zones, or CDZs), and diverging from it during imaginative or predictive retroactivation. The CDZ would likely be the unique core of the unitracker.

        I’ll have to give some thought to your description of the global workspace, but I think the idea that there is no single viewer is right, and matches the understanding of most GWT advocates.

        Like

        • I’m not sure identification of individual cortical columns is “clean”, but like I said, seems plausible.

          Damasio’s convergence zones could simply be unitrackers of parts converging on unitrackers of wholes.

          Keep in mind the number of unitrackers is, well, large. Consider you have a separate one for each word that you know, plus each phrase you commonly use, plus each person, place, or thing you know, etc.

          *

          Like

          • Oh, I fully understood that the number of unitrackers is large. To me, the term simply refers to concepts, and all their combinations, permutations, composites, hierarchies, and associations. It’s every thing we can perceive, imagine, believe, or plan.

            Like

        • But I’m not sure you’re keeping in mind that a unitracker is a mechanism, a physically isolatable (in theory) unit. Presumably a set of neurons, for each one.

          *

          Like

        • The alternative to unitrackers, where a single set of neurons is dedicated for tracking just one concept, is multitrackers, where a set of neurons can track multiple concepts, as in the vector-network (global workspace).

          So back to testable conjecture: I conjecture that the architecture of the cortex is most suitable for unitrackers whereas the architecture of some part of some un-named subcortical structure (that may or may not rhyme with Palamus) is suitable for multitrackers.

          *

          Like

      • PJMartin says:

        JamesOfSeattle: The terminology of ‘unitrackers’ is new to me but the concept fits with part of what is needed in an implementable account, and I like that it is getting us more in the direction of control systems, which is where we need to be in my view.

        Some thoughts building on this:
        – A unitracker is modelling what is persistent and useful in input data (or data in the next layer down even if we are not at the sensory level) and mapping that to useful discriminated output categories. The nature of this mapping is predictive and for a purpose.
        – Attention applies all the way down the stack, so that what a unitracker is pointed at to get its data (eg the pixels) is what it is attending to. The parameters of the unitracker are precisely the current values of its attention pointers.
        – We have mechanisms for initiating and terminating unitrackers.
        – Action plans can amount to joining up an action-controlling unitracker to a sensory unitracker, as in linking control of hand to sight of the beer. This does require a distinct mechanism for generating possible connections between unitrackers and instantiating the best one (eg reach with right or left hand, reach for beer or TV remote).
        – Valence is needed to drive the whole process forward, namely a measure of good and bad for the organism that determines what to pay attention to (that with highest positive or negative valence, ie highest salience) and what action is optimal (expected to result in highest future valence).
        – Self awareness requires that one of the things we abstract and track is ourselves and our relationship to the world.
        – …and just to push this further, what is special about maths and logic is that it is to do with the algebra of combining unitrackers, rather than being out in the world.

        Like

        • Peter, some thoughts on your thoughts …

          As you may know, my paradigm for mechanisms is Input—>[mechanism]—>Output, so when you say “mapping” I think “output”.

          I would say mapping (output) is always for a purpose, but not always predictive, except in the sense of “I predict this output will be a valuable thing to do”. So there are unitrackers for action plans, like “reach and grab”, and the outputs of those unitrackers are signals for reaching and grabbing, probably going to one unitracker for reaching and another for grabbing.

          I’m not sure that “attention” applies in the way you propose. To me, attention implies alternatives. I don’t see a unitracker paying attention to some inputs and ignoring others. I just see it adding up its inputs and deciding how activated it should be.

          With regard to choosing between unitrackers (right hand v. left hand) I anticipate it works as a competition, with both unitrackers being activated while other goal-type unitrackers exert suppression of one or the other.

          Regarding valence, I’m not sure that it is involved as much as you (and Eric) seem to think. I think it is mostly involved in the creation, modification, and destruction of unitrackers, as opposed to their operation. I.e., “good feelings” tend to create them while “bad feelings” tend to destroy them? [Haven’t thought much about this. Will now. Thx]

          Finally, unitrackers for “self”? Yup. Pure abstractions? Yup.

          *

          Like

          • PJMartin says:

            JamesOfSeattle: Generally in agreement and where there’s a difference it’s probably just in terminology or different things that we are putting a box around. To follow up on a couple points:
            – Regarding unitrackers implementing attention, I agree there are higher level choices to be made on what to pay attention to, but this is what I have in mind regarding attention at the unitracker level: Imagine watching coloured shapes appearing and moving across a screen. A unitracker would track an object. What that means in practice, in terms of implementing it as a tracker, is that it is watching the current position of the object plus the places it could go next and is set up to detect if it moved and update the tracked position. In that sense it is paying attention to the right sensory information based on predicting where the object might be next. A similar idea would apply to tracking the colour of the object if that is changing, or the shape. In each case the unitracker is tracking the position, colour and shape parameters as they change, and paying attention to just those pixels that will tell it if there is a change it needs to track. Those attentional pointers are then exactly the parameters that you would read off if you want to know the location, colour and shape of that tracked object. I have implemented this in software and so had to think about how it would work at the practical level.
            – Regarding valence, this seems fundamental to me because without it the brain can do anything at all or nothing – it would have no idea what to attend to, what to do or what to learn. It sets the direction of travel for everything that the mind is doing.

            Like

        • Peter, the way I see it you are giving too much agency to the unitracker.

          Let’s take the example of two objects bouncing around a screen: a red circle and a blue square. Some unitrackers get activated off the bat:
          Objects, red, blue, (white, or whatever the background is)
          After some eye movements and time for movement, additional unitrackers will be activated:
          red+Circle+object+Moving, blue+Square+Object+Moving,
          As the objects persist for a certain time, more get added (without losing the old ones)
          (red+circle+object)+trajectory1, (blue+square+object)+trajectory2

          So over time the trajectories will change, and the unitracker of (red+circle+object)+trajectory1 will not be stable, but the unitracker for red+circle+object will be stable. I don’t think it’s correct to say that the unitracker is paying attention to its trajectory. I think it is better to say other unitrackers are paying attention to their inputs, one input possibly being the vector network showing red+circle+object+trajectory3. A unitracker for blue+circle+object might also be watching its inputs. If the circle changes from red to blue, the red+circle+object unitracker will stop being active and the blue+circle+object one will become active. However, the circle+object unitracker will remain active.
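          For what it’s worth, here’s a toy sketch of that scene (a simplification with nothing neural about it): unitrackers keyed on feature combinations, each active whenever some object in the current percept carries all of its features.

          ```python
          # Toy sketch of the scene above: each unitracker is keyed on a combination of
          # features and is active whenever some object in the percept has all of them.
          percept = [
              {"red", "circle", "object", "moving"},    # the red circle
              {"blue", "square", "object", "moving"},   # the blue square
          ]

          unitrackers = [
              {"red", "circle", "object"},
              {"blue", "square", "object"},
              {"blue", "circle", "object"},             # nothing in this scene matches
              {"circle", "object"},
          ]

          for features in unitrackers:
              active = any(features <= obj for obj in percept)
              print(sorted(features), "->", "active" if active else "quiet")
          # If the circle later turns blue, red+circle+object goes quiet,
          # blue+circle+object activates, and circle+object stays active throughout.
          ```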

          Does this make sense?

          *

          Like

          • PJMartin says:

            JamesOfSeattle: Yes, I’m broadly OK with what you are saying, in that once initiated unitrackers can be fairly dumb and autonomous; and also that they are hierarchically connected, and only really get interesting at higher levels in the stack.

            Regarding the relationship to attention, when I implemented this in software I found the place at which attentional selection comes into play is in assigning a unitracker to a newly detected input. Once assigned it can track and terminate itself autonomously and in parallel with all other unitrackers. However, when a first detection is made and a unitracker has to be assigned, there is a competition for scarce resources (available unitrackers to take on what may be multiple new detections) and serial rather than parallel selection of the most salient new detection, then second most salient, and so on, until time in that cognitive cycle runs out. Interestingly this process seems to correspond to the gamma brainwave frequency and to there being about 8 different slots in working memory, in ways that drop naturally out of implementing it. There are about 8 opportunities to serially map unitrackers to newly detected inputs before time runs out and things move on.

            Valence comes in when assigning unitrackers, because salience (absolute value of valence) needs to be maximised. You have to have some criterion, even down to the lowest levels of processing, to decide what matters.

            For a really uniform model, the same concept can be extended up the stack to taking action, because action can also be controlled by a unitracker that is selectively connected back to the unitrackers of a sensed object – if you like, my hand controlling unitracker is paying attention to (taking its input from) the beer unitracker.

            I would say though that the above mentioned attention is subconscious. The attention that we are conscious of is when we deliberately intervene and override the above process, by taking the above process as input and modifying it as a mental action. This is where unitrackers that represent ourselves and our relationship to the world come into play, in representing ourselves to ourselves, and letting us selectively intervene in what are otherwise subconscious attentional processes.
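            In pseudocode-ish terms, the serial assignment step described above might look something like this (a toy sketch with invented labels and valences, not the actual implementation; the per-cycle budget of 8 comes from the discussion above):

            ```python
            # Toy sketch of the serial assignment step: within one cognitive cycle there
            # is only time to bind a handful of new detections to free unitrackers, most
            # salient first. The budget of 8 and the valence values are illustrative only.
            ASSIGNMENTS_PER_CYCLE = 8

            def assign_trackers(new_detections, free_trackers):
                """new_detections: list of (label, valence). Returns {label: tracker_id}."""
                assigned = {}
                # Salience is the absolute value of valence; handle the most salient first.
                for label, valence in sorted(new_detections, key=lambda d: -abs(d[1])):
                    if len(assigned) >= ASSIGNMENTS_PER_CYCLE or not free_trackers:
                        break                   # out of time (or trackers) this cycle
                    assigned[label] = free_trackers.pop()
                return assigned

            detections = [("movement_left", +0.2), ("loud_bang", -0.9), ("blue_square", +0.4)]
            print(assign_trackers(detections, free_trackers=list(range(20))))
            # loud_bang is bound first, then blue_square, then movement_left.
            ```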

            Like

        • Peter, that all sounds right (and awesome!). Is this written up anywhere? How much of this is in your book?

          So now I have questions like: how do you manage valence? Is that hard-coded? Can you establish goal-tracking unitrackers which influence valence, and therefore high-level attention?

          *

          Like

          • PJMartin says:

            JamesOfSeattle: Yes, that is written up in my book (although not expressed as unitrackers, which terminology is new to me), though I am wary of mentioning it here as I know that advertising is frowned upon. Happy to send a free PDF to anyone on here who lets me know a personal email address, as the discussions here are stimulating and appreciated.

            The fundamental valence calculation (indicating pleasure or pain) has to be calculated in a way that is hardwired to the needs of the organism and genetically determined. The relation of this to particular situations and attention/action options can then be learned through experience, and quite simple learning tables, automatically updated, suffice to achieve this. I have implemented this learning too in software, in a very simple example, and it is striking how quickly such learning can go from a random starting state to useful behaviours, in respect of both attention and action.
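            As a toy illustration of the kind of learning table described above (an invented example, not the book’s actual code): a lookup from (situation, action) to estimated valence, nudged toward each observed outcome, is enough to bias behaviour after only a handful of experiences.

            ```python
            from collections import defaultdict

            # Toy sketch of a simple learning table: (situation, action) -> estimated
            # valence, nudged toward each observed outcome. Values are invented.
            valence_table = defaultdict(float)
            learning_rate = 0.2

            def update(situation, action, observed_valence):
                key = (situation, action)
                valence_table[key] += learning_rate * (observed_valence - valence_table[key])

            def choose(situation, actions):
                # Pick the action currently expected to yield the highest valence.
                return max(actions, key=lambda a: valence_table[(situation, a)])

            # A few experiences are enough to move behaviour away from a random start:
            update("see_food", "approach", +1.0)
            update("see_food", "ignore", 0.0)
            print(choose("see_food", ["approach", "ignore"]))    # approach
            ```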

            I have not yet implemented high-level goal-tracking unitrackers in software, and will have more time for software experiments from April. I would see these as having explicit access to the subconscious processing, and it is this explicit access to tracked sensory data, attention options, action options and valences that gives rise to qualia and the phenomenal nature of consciousness.

            I looked again at the notes of your architecture, and I think we are very similar, just labelling things a bit differently in some cases, and being more focused on different subsets of the whole.

            Like

          • Peter,
            You’re good. I’m fine with you discussing your book, particularly if someone asks about it, as long as it’s part of the discussion, and not an outright sales pitch.

            Like

      • James Cross says:

        “I speculate the unit of the unitracker is the cortical column (more or less). Thus, the whole cortex is pretty much unitrackers.”

        So it would follow that if a large portion of cortex was removed then we would lose all the unitrackers found in that part of the cortex. This doesn’t seem to be the way the brain works. Many large sections can often be removed (although some are very critical for consciousness in general or particular capabilities) but most functionality is preserved. Memories and capabilities seem to be distributed throughout the brain with remaining sections able to plug the gaps.

        Sometimes people have a frontal lobe removed to treat epilepsy that can’t be controlled with medication. Results vary but some have no meaningful decline.

        “Forty-eight percent of patients did not demonstrate meaningful postoperative declines in cognition and an additional 42% demonstrated decline in 1 or 2 cognitive domains.”

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5317375/

        Like

        • A lot depends on whether the removals are just in one hemisphere versus bilateral. Someone with a functional corpus callosum can lose large segments of one hemisphere and often only see modest cognitive decline, although it usually leads to sensory or motor issues on the opposite side of the body.

          Like

        • Also, there is a large amount of plasticity. Unitrackers are created, modified, and removed all the time. In people who are blind because their eyes are damaged, the unitrackers normally reserved for vision get repurposed.

          Memories are distributed throughout the brain because the unitrackers for any given memory are usually multi-modal, involving objects, sights, sounds, people, etc., and so the associated unitrackers are distributed throughout the brain. I expect the activation of one episodic memory involves the activation of all the associated unitrackers.

          *

          Like

  3. Interesting article. You might want to check out Bret Weinstein’s latest podcast with Sam Harris if you haven’t already. They discuss determinism, the Many-Worlds theory and free will. Lots to do with consciousness. You can probably skip the first 20 minutes or so to get right into the action. Cheers.

    Liked by 1 person

    • Thanks for the recommendation! The only Bret Weinstein podcast with Sam Harris I can find is this one from 2017: https://samharris.org/podcasts/109-biology-culture/
      Is it the one you mean?

      Liked by 1 person

      • No, it’s the link I have below. Weinstein’s interpretation of the Many-Worlds theory and determinism aligns with mine, although he’s probably more scathing/dismissive than I’d be about it. The problem with Harris in these discussions is that he is the ULTIMATE rationalist, much like what happened in the first Jordan Peterson discussion on his podcast, ‘What is truth?’, which I still consider one of the greatest philosophical debates I have ever heard despite how disdained it is in social media circles.
        Weinstein keeps mentioning ‘Darwinian fitness’ in his remarks in the podcast below, which I also cannot detach from when I think of these topics, and as I argued in my Copenhagen article. Cheers.

        Like
