The response schema

Several months ago Michael Graziano and colleagues attempted a synthesis of three families of scientific theories of consciousness: global workspace theory (GWT), higher order theory (HOT), and his own attention schema theory (AST).

A quick (crudely simplistic) reminder: GWT posits that content becomes conscious when it is globally broadcast throughout the brain, HOT when a higher order representation is formed of a first order representation, and AST when the content becomes the focus of attention and is included in a model of the brain’s attentional state (the attention schema) for purposes of guiding it.

Graziano equates the global workspace with the culmination of attentional processing, and puts forth the attention schema as an example of a higher order representation, essentially merging GWT and HOT with AST as the binding, and suggesting that the synthesis of these theories approaches a standard model of consciousness.  (A play on words designed to resonate with the standard model of particle physics.)

Graziano’s synthesis has generated a lot of commentary.  In fact, there appears to be an issue of Cognitive Neuropsychology featuring the responses.  (Unfortunately it’s paywalled, although it appears that the first page of every response is public.)  I already highlighted the most prominent response in my post on issues with higher order theories, the one by David Rosenthal, the originator of HOT, who argues that Graziano gets HOT wrong, which appears to be the prevailing sentiment among HOT advocates.

But this post is about Keith Frankish’s response.  Frankish, who is the leading voice of illusionism today, makes the point that, from his perspective, theories of consciousness often have one of two failings.  They either aim too low, explaining just the information processing (a dig perhaps at pure GWT) or too high in attempting to explain phenomenal consciousness as if it actually exists, and he tags HOTs as being in this latter category.

His preferred target is to explain our intuitions about phenomenal consciousness, why we think we have it.  (I actually think explaining why we think we have phenomenal consciousness is explaining phenomenal consciousness, but that’s just my terminological nit with illusionism.)  Frankish thinks that AST gets this just right.

But he sees it as incomplete.  What he sees missing is very similar to the issue I noted in my own post on Graziano’s synthesis: the affective or feeling component.  My own wording at the time was that there should be higher order representations of reflexive reactions.  But I’m going to quote Frankish’s description, because I think it gets at things I’ve struggled to articulate.  (Note: “iconsciousness” is Graziano’s term for access consciousness, as opposed to “mconsciousness” for phenomenal consciousness.):

Suppose that as well as an attention schema, the brain also constructs a response schema—a simplified model of the responses primed by iconsciousness.  When perceptual information enters the global workspace, it becomes available to a range of consumer systems—for memory, decision making, speech, emotional regulation, motor control, and so on. These generate responses of various kinds and strengths, which may themselves enter the global workspace and compete for control of motor systems. Across the suite of consumer systems, a complex multi-dimensional pattern of reactive dispositions will be generated. Now suppose that the brain constructs a simplified, schematic model of this complex pattern. This model, the response schema, might represent the reactive pattern as a multi-dimensional solid whose axes correspond to various dimensions of response (approach vs retreat, fight vs yield, arousal vs depression, and so on). Attaching information from the model to the associated perceptual state will have the effect of representing each perceptual episode as having a distinctive but unstructured property which corresponds to the global impact of the stimulus on the subject. If this model also guides our introspective beliefs and reports, then we shall tend to judge and say that our experiences possess an indefinable but potent subjective quality. In the case of pain, for example, attended signals from nociceptors prime a complex set of strong aversive reactions, which the response schema models as a distinctive, negatively valenced global state, which is in turn reported as an ineffable awfulness.

Now, Frankish is an illusionist.  For him, this response schema provides the illusion of phenomenal experience.  My attitude is that it provides part of the content of that experience, which is then incorporated into the experience by the reaction of all the disparate specialty systems, but again that’s terminological.  The idea is that the response schema adds the feeling to the sensory information in the global workspace and becomes part of the overall experience.  It’s why “it feels like something” to process particular sensory or imaginative content.
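The kind of compression Frankish describes is easy to sketch. Here is a toy illustration, entirely my own: the axis names follow Frankish’s examples, but the function, the dictionary structure, and the numbers are invented. The idea is that a large set of fine-grained reactive dispositions collapses onto a few coarse axes, and only that coarse summary gets attached to the perceptual state.

```python
# Toy sketch of a "response schema": compress many reactive
# dispositions into a few coarse, signed axes.  The axis names follow
# Frankish's examples; everything else is invented for illustration.

def response_schema(dispositions):
    """Collapse fine-grained reactive dispositions (name -> strength)
    into a coarse, low-dimensional summary: the 'schema'."""
    axes = {
        "approach_vs_retreat": ("approach", "retreat"),
        "fight_vs_yield": ("fight", "yield"),
        "arousal_vs_depression": ("arousal", "depression"),
    }
    # Each axis value is the net pull between its two opposed responses.
    return {
        axis: dispositions.get(pos, 0.0) - dispositions.get(neg, 0.0)
        for axis, (pos, neg) in axes.items()
    }

# Attended nociceptor signals priming strong aversive reactions would
# be summarized as a strongly negative, high-arousal global state:
pain_like = {"retreat": 0.9, "yield": 0.4, "arousal": 0.8}
print(response_schema(pain_like))
```

The only point of the sketch is that what gets passed upward is the compact summary, not the full pattern of dispositions, which on Frankish’s account is why the resulting quality seems potent but unstructured.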

This seems similar to Joseph LeDoux’s fear schema.  LeDoux’s conception is embedded in an overall HOT framework, whereas Frankish’s is more at home in GWT, but they seem in the same conceptual family, a representation, a schema of lower level reactive processes, used by higher order processes to decide which reflexive reactions to allow and which to inhibit.  It’s the intersection between that lower level and higher level processing that we usually refer to as feelings.

Of course, there is more involved in feelings than just these factors.  For instance, those lower level reflexive reactions also produce physiological changes via unconscious motor signals and hormone releases which alter heart rate, breathing, muscle tension, etc, all of which reverberate back to the brain as interoceptive information, which in turn is incorporated into the response schema, the overall affect, the conscious feeling of the response.  There are also a host of other inputs, such as memory associations.

And it isn’t always the lower level responses causing the higher level response schema to fire.  Sometimes the response schema fires from those other inputs, such as memory associations, which in turn trigger the lower level reactions.  In other words, the activation can go both ways.

So, if this is correct, then the response schema is the higher order description of lower level reflexive reactions.  It is an affect, a conscious feeling (or at least a major component of it).  Admittedly, the idea that a feeling is a data model is extremely counter-intuitive.  But as David Chalmers once noted, any actual explanation of consciousness, other than a magical one, is going to be counter-intuitive.

Similar to the attention schema, the existence of something like the response schema (or more likely: response schemata) seems inevitable: the attention schema for top down control of attention, and the response schema for deciding which reflexes to override, that is, action planning.  The only question is whether these are oversimplifications of much more complex realities, and what else might be necessary to complete the picture.

Unless of course I’m missing something.

43 thoughts on “The response schema”

  1. Yes, this combination of models is close to what I have started to implement, and has most of the elements I would consider essential: brain-wide coherence, attention, valence, a control-oriented view of linking senses to motor action, and the mind looking at and modifying itself using all these mechanisms, which originally evolved to interact with the outside world.


  2. That quote is just a series of “mays”, “supposes”, “mights”, and “ifs”, with no solid facts to connect to any of them to explain how it works.

    Where and how does the brain construct a simplified, schematic model of this complex pattern?

    What is a “solid” in the phrase “multi-dimensional solid whose axes correspond to various dimensions of response”? And where and how does it form?


    1. Frankish is clear in the paper that he’s speculating, hence the language you seem to be having issues with.

      If you’re actually interested, the usual locations for these kinds of models are thought to be the prefrontal cortex and anterior cingulate. Joseph LeDoux actually develops this idea a bit in his papers. Here’s an example. (In this case, focused on fear, since that’s his area of expertise.)

      Click to access LeDoux%20Pine%20Two%20Systsem.pdf


      1. Yeah, I’m familiar with speculation, hence my blog Broad Speculations. I deliberately called it that so I didn’t feel quite so restrained by needing to prove everything or stick to exactly what we know. I use that sort of language all of the time so it did jump out at me when I spotted it. Unfortunately, I’m afraid far too much of many of these theories is little more than thinly cloaked speculation.

        Regarding locations – how exactly does the model exist in those areas? Are you just saying the processing is there? What is the rest of the brain doing when the processing occurs or the model forms? Are there necessary things going on elsewhere to make the model exist? Where is the boundary between the model and everything else?

        I need to spend some more time with LeDoux but I note this statement:

        “Importantly, as also noted above, in such studies, the amygdala is engaged even when people are not consciously aware of the threat and do not report feeling fear”.

        Were the people who did not consciously report awareness of a threat afraid or not? Is this another instance of the reporting problem? The model in the PFS is the reporting model not the fear model. This is where boundaries and interactions become important. I wouldn’t be surprised that the PFS would be involved but I also wouldn’t be surprised to see the amygdala involved also.


      2. I meant PFC above. But to add to my thoughts.

        If I perceive a threat, I would expect the PFC to be engaged in understanding and planning alternatives to the threat.

        Let me provide a more extended quote from LeDoux:

        “Importantly, as also noted above, in such studies, the amygdala is engaged even when people are not consciously aware of the threat and do not report feeling fear. This has two important implications. First, amygdala processing is dissociable from conscious awareness of threat. Second, conscious awareness of threats comes about in the same way as the conscious awareness of nonemotional stimuli.

        We thus propose that emotional and nonemotional states of consciousness both emerge from cortical consciousness networks.”

        I can see how the first implication possibly makes sense: amygdala processing is dissociable from conscious awareness of threat.

        The second implication does not pertain directly to emotional states but relates to perception of threats.

        The proposed conclusion doesn’t follow. Cortical consciousness networks could be involved in analyzing and considering alternatives to cope with the threat without being responsible directly for the state of consciousness, just as I might do the same in dealing with a threat in a video game without having a state of fear.


        1. You asked a lot of questions James, so this may be a bit long.

          “Regarding locations – how exactly does the model exist in those areas? Are you just say the processing is there?”

          You know how neural networks work. Both the processing and the data are there. Much of the data might represent James of S’ semantic pointers.

          “What is the rest of brain doing when the processing occurs or the model forms?”

          To the extent each region is excited, it’s doing its own special thing. The contest is always going on for attention and awareness, for fame in the brain.

          “Are there necessary things going on elsewhere to make the model exist?”

          Sure. If the response is in relation to a sensory perception, those regions may still be processing that sensory input. If it’s about something imagined, the sensory regions are still heavily recruited to provide the imagery. In any case, the subcortical regions (amygdala, hippocampus, hypothalamus, etc) are each reacting to whatever imagery is coming in. All of which are interconnected with the frontal regions, and so provide inputs into the various response models.

          “Where is boundary between the model and everything else?”

          Aside from the physical regions? I’d say it exists between the reflexive reactions and the simulations that run to decide which reflexes to allow or inhibit.

          “Is this another instance of the reporting problem? The model in the PFS is the reporting model not the fear model. This is where boundaries and interactions become important. I wouldn’t be surprised that the PFS would be involved but I also wouldn’t be surprised to see the amygdala involved also.”

          This is an area where I’m not sure there is a fact of the matter. If processing exists but isn’t available for report, what does it mean to say it is conscious? If it sometimes can become available for report, we might call it “preconscious”. But what if processing in a region, by itself, is never sufficient for report? It seems strange to call that “conscious”.

          “Cortical consciousness networks could be involved in analyzing and considering alternatives to cope with the threat without being responsible directly for the state of consciousness”

          It’s always possible to add additional postulates. But we know a functional cortex is necessary for consciousness, at least in normally developed humans, and conscious content depends on the processing in the various cortical regions. Which is to say, it’s all neutral toward your preferred theory. It doesn’t support it, but it also doesn’t challenge it, at least other than not needing it.


          1. Thanks for spending so much time elaborating.

            The reporting problem I’m referring to relates to the issue of untangling brain activity from conscious experience from brain activity involved in reporting the experience. Similar to the issue I raise of untangling activity in the experience of fear from the activity involved in coping with a threat.


          2. That reporting problem, I think, only comes up in a scenario where someone does report being conscious of something. The question then becomes which brain activity relates to conscious awareness of the content vs which is related to carrying out the reporting.

            But in a healthy subject, if the processing is not reported, the takeaway is that they’re not conscious of it.  (Unless the subject is simply choosing not to report, always possible in individual cases, but unlikely en masse.)


          3. That is the reporting problem I’m talking about. But the problem extends to trying to pinpoint the locus of feelings too since it isn’t easy to untangle brain activity in cognitive actions in coping with threats (something we would expect to find in the PFC) from subjective feelings of fear (which might be in the amygdala).

            Both the amygdala and the hippocampus have structures with pyramidal neurons similar to the cortex, so it is hard to see why consciousness could not be present in them also. Still, I keep going back to the question of what the difference is between the “conscious” circuits and the “unconscious” ones, particularly since some of the ones believed to be “unconscious” have the same sort of structures and neurons that the “conscious” ones do. If they are not conscious, why not?

            At any rate, I’m not really sure affect and feelings are quite the same as visual perception. Sensory perceptions form and dissolve on a quick time scale but feelings tend to be longer lasting. Our cognitive attention to feelings and their effects may be much like sensory perception but the feelings tend to persist in the background of attention even while momentary awareness of them comes and goes. This is true even with pain where a momentary distraction can divert the attention of a child who has skinned her knee until the pain returns after the distraction is over.


          4. “But the problem extends to trying to pinpoint the locus of feelings too since it isn’t easy to untangle brain activity in cognitive actions in coping with threats (something we would expect to find in the PFC) from subjective feelings of fear (which might be in the amygdala).”

            A couple of points. One is that I think it’s a mistake to think in terms of there being a consciousness finish line somewhere. What we decide was conscious seems to be a retrospective judgment, based on whether it eventually ended up affecting behavior (such as report). And second, I think subjective awareness and cognitive utilization of content are difficult to untangle because they’re the same thing. Our feeling of fear or pain is inherently part of the process of deliberating how to respond to it.

            “Both the amygdala and the hippocampus have similar structures with pyramidal neurons to the cortex so it is hard to see why consciousness could not be present in them also.”

            I would note that empirical findings always trump theoretical considerations. The studies LeDoux cites as dissociating amygdala response from conscious perception are important on this front.

            But I don’t think it’s about structures or pyramidal neurons in and of themselves, but about what they connect to, their causal interaction with the rest of the system. Under all the major scientific theories of consciousness (GWT, HOT, IIT), connectivity and effects throughout the network are key: global ignition in GWT, availability to be the subject of a higher order representation in HOT, or causal effects throughout the system in IIT.

            “At any rate, I’m not really sure affect and feelings are quite the same as visual perception.”

            I agree. A lot of people consider affects to be a part of sensory consciousness, but I don’t think that’s right. Affects happen in the motorium, not the sensorium. They are part of the action response part of the brain, not the sensory processing part*. A feeling is a perception of reflexive or habitual responses to a situation, one that provides input into the upper level motorium, along with sensory information, for simulations and deciding which lower level impulses to allow or inhibit.

            * Granted, this division could be seen as artificial, since motor impulses and affects influence sensory perceptions.


          5. Regarding a finish line, it seems you have been lobbying hard for a finish line in the frontal cortex. I have been thinking in terms of a more whole brain approach.

            I could easily envision neurons in the PFC, the visual cortex, amygdala, and hippocampus all firing in sync to produce the experience of fear and coping with a threat.

            Of course, the significance of pyramidal neurons is they predominate in the cortex in the billions. So I would presume they are doing a good bit of work of consciousness unless consciousness is mostly unrelated to neurons. Since they are also in the amygdala and hippocampus, we would want to understand if there are differences in what they do there vs other parts of the brain.

            The amygdala and hippocampus are pretty connected to other parts of the brain too. So my hunch is that it isn’t the either/or situation LeDoux seems to make it.




          6. Again, I don’t think there is a finish line. My current view is that something like GWT, and probably more specifically GNW, is true, which means that to be conscious, content must be widely propagated and processed throughout the thalamo-cortical system. Just about any content in the thalamo-cortical system might win the competition and become conscious. There are no conscious vs unconscious regions. There’s only what comes to dominate the system.

            But the competition isn’t an even one. Some regions have far more connectivity than others. And the evidence seems to indicate that subcortical regions (and some peripheral cortical ones) contribute little if any directly to the content that can win the contest. (Baars does see the hippocampus as a possible exception.)

            Based on their intros, neither of those papers contradicts LeDoux’s argument about the amygdala. I will note that as a HOT advocate, LeDoux does think that nothing (not even sensory content) is conscious until it reaches the PFC, which I think is a weaker position.


          7. I guess I was trying to reconcile no finish line with your interest in the Templeton contest and various posts you’ve made about prefrontal activity proving or not proving something in relation to the contest.

            “Some regions have far more connectivity than others.”

            Okay. So it does seem you are pointing to the cortex for the direct contribution to consciousness. I guess I was thinking that was what you meant by finish line.

            Solms also views the cortex as the place where the representations of consciousness form, but holds that it is actually unconscious itself. It is more like an Etch A Sketch.

            The papers were references to the widespread connectivity of the amygdala with the rest of the brain and the argument that connectivity is the critical factor. But maybe it is the kind of connectivity. If only we knew what the difference was.


          8. Well, the various theories make varying predictions, and it’s always possible I’m wrong. I’m interested in the Templeton contest because it’s probing those differences. GWT / GNW predict that consciousness is a cortex wide phenomenon. IIT predicts it’s clustered in the back of the cortex (the posterior hot zone). HOT predicts it would be clustered in the PFC. So establishing which activity correlates with conscious perception may break some logjams (maybe).

            The amygdala is heavily connected with cortical and subcortical regions. It might be simply a factor of the raw number of axon connections involved. The cortex has about 16 billion neurons. All the subcortical regions together add up to a billion at most, with the amygdala itself having about 24 million (12 million per hemisphere). It may simply need cortical partners to break in to the workspace (or to be noticed by higher order processes, or do whatever IIT would need it to do).

            Solms, as I understand it, is in the brainstem consciousness camp. The issue I have with that camp is it requires a conception of consciousness that is too reductionist. Remember the hierarchy:
            1. Reflexes and fixed action patterns
            2. Sensory perceptions
            3. Instrumental learning
            4. Imaginative deliberation
            5. Introspection
            Brainstem consciousness gets us 2, and a limited version of 2 at that, orientation and reacting to movement, but no discrimination or time sequenced predictions.


          9. I think Solms has a somewhat unique view but maybe makes some kind of sense when viewed as whole. This is how I think of it. Not sure if correct or not.

            If you think of consciousness as something like a flashlight shining into the dark, then the flashlight is in the brainstem but what is being illuminated is in the cortex. So representations form in the cortex and at any given point in time there may be many different representations capable of becoming conscious. But it is the flashlight that brings some selection of them into consciousness. This is why he says that the representations in the cortex are unconscious themselves. It is the “light” shining on them which brings them into consciousness.

            That there may be some coordinating mechanism lower in the brain possibly makes a lot of sense from several standpoints. For one thing, the senses go through the thalamus and a key part of consciousness is related to that. For another, the whole lower part has a long evolutionary history so any kind of primitive consciousness would have needed a coordinating/selecting mechanism to prioritize which aspects of potential consciousness need focus at a given point in time. This could have originated in the brainstem, which still maintains some degree of control in more complex organisms with additional refinements to the coordination evolving higher up in the brain as larger and more complex brains evolved.


          10. Interestingly enough, the idea that the brainstem is conscious, but all the content is generated and routed down there, is conceptually similar to GWT. The main difference is it either leaves all of the consumption of that content in a region with very little substrate for doing that consumption, or it posits that the content is subsequently routed back up to the forebrain for the consumption.

            In fact, in an early version of GWT, Baars actually posited that the workspace might be the RAS and thalamus. I think everyone flirts with the idea of consciousness existing in the basal regions of the brain. I know I went through my own period thinking that made a lot of sense. But the data forces most of us to move on.

            In terms of the senses, most of them go through the brainstem, but not all. Only something like 10% of the retinal axons go there, and none from color sensitive cone cells. Most of the rest go directly to the thalamus and then visual cortex. And smell goes directly from the olfactory bulb to the frontal lobes with a few connections to the thalamus.

            None of this is to say that the brainstem doesn’t play a crucial supporting role, with the RAS providing an on/off switch function, and other regions providing basal reflexive reactions and fixed action patterns.


          11. I’m not really saying Solms thinks the content is routed down to the brainstem. He may think that but it actually didn’t occur to me when I explained his thinking. I was thinking that the content remained in the cortex. It is just that the action of the ARAS/RAS is what makes it conscious.

            According to Wikipedia on reticular activating system, the ARAS has extensive connections to the cortex and there are some links to studies on connections to the PFC.

            There is also an interesting study on Gamma wave activity in the RAS.


            Also, I know the senses except olfactory go through the thalamus, but the ARAS is closely tied to the thalamus. Another name for it is the extrathalamic control modulatory system. The ARAS is part of the reticular formation and is mostly composed of various nuclei in the thalamus and a number of dopaminergic, noradrenergic, serotonergic, histaminergic, cholinergic, and glutamatergic brain nuclei.

            The RAS/ARAS seems to be a system that crosses various structures.


          12. If Solms is just saying that the RAS wakes the forebrain up and keeps it awake, then I think that’s widely accepted. (I was actually relieved to learn this a while back. I had been worried that it was possible for a disconnected forebrain to still be conscious, but without a RAS connection, it only fires at alpha wave frequencies, equivalent to deep dreamless sleep.)


    2. The “solid” mentioned above could be replaced by “vector”. Then you’re talking about semantic pointers, which have been demonstrated in biologically plausible neural networks by Eliasmith.

      Also, a “simplified, schematic model of this complex pattern” might also be called a unitracker, a mechanism which recognizes specific input as indicating the complex pattern and generates appropriate output.



      1. Vectors and unitrackers may be useful abstract concepts (although I doubt it) but I am looking for an answer in neurons, dendrites, etc – the things we can actually see and measure.


        1. Did I not mention biologically plausible neural networks? [Glances upthread] Oh look, I did.

          Also, a unitracker, i.e., a mechanism which recognizes a specific input as indicating a complex pattern, could also be referred to as a “pattern recognition” mechanism. To date, the best “pattern recognition mechanisms” seem to be neural networks.



          1. I thought your reference to neural networks was just of the software kind. The problem I have with computationalism is the delineation of the boundary between computations that are unconscious and those that are conscious. Maybe you can illuminate for me when the network goes conscious? Most pattern recognition is unconscious. Some predictions would be helpful.


        2. To put this in the terms you are using, a single computation in itself is not conscious or unconscious. In my current understanding, a conscious process would require at least two computations: one which creates a representation, and one which responds to the representation. To say a system is conscious is to say that the system is capable of generating representations and responding to them.

          Now just these two computations alone might be a conscious process, but it would not be a very interesting one because it doesn’t have the diversity of our consciousness. Things get interesting when you combine these kinds of processes in various ways. There can be subsystems within a greater system. These subsystems may be creating representations which are available to some other subsystems but not to all of the subsystems. So a unitracker would be a system which takes representations from one or more (almost certainly more) computations (possibly other unitrackers) and generates an output representation.

          Things get more interesting when you have a certain kind of mechanism which can take representations from multiple sources (unitrackers), either individually or in combination, and generate a unique representation for that unique combination of inputs. That is what a semantic pointer or a global workspace does. That unique representation then becomes (globally) available as input to further computations.

          So I’m saying a generic conscious process looks like:

          Inputs —> [mechanism] —> representation —> [mechanism] —> response,

          and the conscious processes we care most about looks like:

          Inputs (from various unitrackers) —> [global workspace] —> representation —> [multiple mechanisms] —> multiple responses, including reporting, systemic effects via hormones, etc.

          So to get back to your question, there may be multiple conscious type processes that lead into the unitrackers described above, and they would be conscious relative to the unitracker, but the unitracker has exactly one response: represent its target. These conscious processes, however, are not available to the mechanism we care most about, the global workspace, and so would be unconscious with respect to the system which responds to the representations coming from the global workspace. This system which involves the global workspace (or semantic pointers) has been referred to as the autobiographical self.
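          As a toy sketch of the pipeline described above (my own construction; every function name and pattern here is invented for illustration, and a real unitracker would obviously be vastly more complex), the flow from unitrackers through a workspace to consumer systems might look like:

```python
# Toy sketch: unitrackers feed a workspace, whose bound representation
# is broadcast to consumer systems.  All names here are illustrative.

def make_unitracker(target):
    """A unitracker: emits a single representation when its target
    pattern appears in the input, and stays silent otherwise."""
    def track(inputs):
        return target if target in inputs else None
    return track

def global_workspace(unitracker_outputs):
    """Bind the currently active unitracker outputs into one unique
    combined representation, made globally available (or None)."""
    active = tuple(sorted(o for o in unitracker_outputs if o is not None))
    return active or None

def broadcast(representation, consumers):
    """Each consumer system generates its own response to the globally
    available representation."""
    if representation is None:
        return []
    return [consumer(representation) for consumer in consumers]

trackers = [make_unitracker("face"), make_unitracker("snake")]
consumers = [lambda r: ("report", r), lambda r: ("motor", r)]

percept = {"snake", "grass"}
rep = global_workspace(t(percept) for t in trackers)
responses = broadcast(rep, consumers)
print(rep, responses)
```

          The sketch only captures the topology: processing upstream of the workspace stays invisible to the consumers, which is the sense in which it would be “unconscious with respect to” the system responding to the workspace’s output.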

          So how might this work with physical neurons? Here are my current brazen predictions:
          1. The unit of the unitracker is the cortical column. The neocortex is essentially a vast array of unitrackers. Some of these unitrackers get inputs from sensory organs. Some get inputs only from nearby (and not so nearby) unitrackers.
          2. The global workspace, or semantic pointers, is in the thalamus. Each cortical column/unitracker sends one axon to the thalamus. Actually, there probably isn’t only one workspace in the thalamus. There’s probably one workspace in the thalamus for each modality in the cortex.

          Admittedly, the global workspace theorists, and Mike, put the workspace in the cortex, and more specifically (for the theorists anyway), in the neurons in layers 2 and 3, possibly with some involvement of layer 5. Each cortical column, and so unitracker, would include a local piece of the workspace, for what that’s worth. I accept that as a reasonable possibility, but I’m going with the thalamus until I have a reason not to.

          So how to test? First find a reportable unitracker/cortical column, such as the Jennifer Aniston (JA) column. I predict blocking the signal to the thalamus will block JA from entering consciousness. I predict stimulating the signal to the thalamus, even while blocking the layer 2 and 3 neurons, will put JA into consciousness. If Mike and the GNW theorists [a band name if I’ve ever heard one] are correct, the opposite would be seen.

          How’s that?



          1. “The unit of the unitracker is the cortical column. ”

            Seems unlikely to me, since as I understand it neurons tend to work in larger groups.

            “The global workspace, or semantic pointers, is in the thalamus.”

            That’s interesting and I hadn’t thought of that. I would think the cortex.

            I would be surprised if there was a single Jennifer Aniston (JA) column. I expect memory will be widely distributed and redundant, maybe more in the manner of RAID striping. It might also be somewhat like source code repositories, where differences are stored rather than complete copies – probably the opposite of a unitracker.
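            To illustrate the RAID analogy (just the storage idea, not a claim about neurons): in RAID-5-style striping, data is split across several units plus a parity unit, so any single lost unit can be rebuilt from the rest. A minimal Python sketch:

```python
from functools import reduce

def stripe_with_parity(data: bytes, n: int):
    """Split data into n equal stripes plus one XOR parity stripe (RAID-5 style)."""
    assert len(data) % n == 0, "pad data to a multiple of n for this sketch"
    size = len(data) // n
    stripes = [data[i * size:(i + 1) * size] for i in range(n)]
    # Parity byte = XOR of the corresponding byte in every stripe
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))
    return stripes, parity

def recover(stripes, parity, lost):
    """Rebuild a lost stripe by XOR-ing the parity with the surviving stripes."""
    survivors = [s for i, s in enumerate(stripes) if i != lost] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

stripes, parity = stripe_with_parity(b"JENNIFER", 4)
rebuilt = recover(stripes, parity, 2)  # lose the third stripe, then rebuild it
# rebuilt == b"IF" == stripes[2]
```

            The point of the analogy: no single unit holds the whole memory, which is roughly the opposite of a one-column-per-concept unitracker.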


          2. BTW, the Jennifer Aniston (JA) column, I assume, would be a composite of other unitrackers with pointers to the others. Have you tried to figure out how many columns you would actually need to point to from the JA column?

            I’m wondering if you would discover that your brain, with one column per unitracker, only has a capacity for Friends and not much else.


      2. With the “solid” remark, I think Frankish was talking about the phenomenal aspect of what the schema was providing, the resulting content that makes it back into the workspace. Of course, that doesn’t mean vectors and semantic pointers aren’t part of the underlying implementation.

        If I’m understanding unitrackers correctly (as roughly synonymous with “concept”), I would think the schema overall converges on a unitracker, but one composed of a vast number of unitrackers.


        1. I’m not sure I understand the term “schema” here, so I’m pretty much equating it with “model”, which I pretty much equate with “unitracker”. So the attention schema becomes a unitracker whose target pattern is the combination of inputs which have currently achieved the status of being attended, i.e., have been broadcast from the global workspace. Again, the attention schema is a model of the content of the global workspace.

          As you say, the inputs to the global workspace are the outputs of various unitrackers, including sensory unitrackers as well as the “response” schema/model/unitracker(s).

          [conjecture much?]


          1. In IT, the schema usually refers to the structure of a database. So I’ve always taken “schema” to refer to a model, an outline, a structure of data, which is much sparser than the actual data. In that sense, “schema” might not be the right word, at least for what I conceive the response models to be about. I think Frankish used it so it would line up with the attention schema, but “model” or “representation” might be more appropriate.

            [conjecture lot!] 🙂


  3. I very much like your blog, but consciousness is only rarely discussed as also what it is: the source of our anxiety. Anxiety is the motor/motive for purely human activity. Consciousness helps us solve problems at the price of forcing us to live in a world with “problems.” Animals have never sniffed, much less experienced, a “problem” in their self-identical lives. Whatever consciousness is must also include this. So, I find it odd that it is always considered so wonderful.


    1. Thanks! Definitely while consciousness provides all our joys and pleasures, it also provides our pain, sorrow, and suffering. It’s actually not clear pleasure has meaning without the existence of its opposite, suffering. Indeed, the purpose of consciousness could be said to enable us to maximize pleasure and minimize pain, impulses which evolved to coincide with adaptive and maladaptive conditions.

      I think that’s why happiness is so fleeting. Its evolutionary purpose as a motivational mechanism requires that it be so.


    2. Obnubilation,
      I think you’re right that people in general consider consciousness some kind of wonderful gift, and yet forget that it can also be amazingly horrible. I personally suspect that the pain and suffering which exists in our world far outweighs the pleasures that are experienced. Thus if consciousness had never evolved, this might have ultimately been far better for life in general.


      Indeed, the purpose of consciousness could be said to enable us to maximize pleasure and minimize pain, impulses which evolved to coincide with adaptive and maladaptive conditions.

      To be a bit pedantic though, the purpose of consciousness should be “to promote survival”, though our purpose given this teleological dynamic should be “to feel as good as we can, for as long as we can”. I do appreciate your utilitarian theme however.

      I think that’s why happiness is so fleeting. Its evolutionary purpose as a motivational mechanism requires that it be so.

      Yes, and economists refer to this dynamic as “the law of diminishing marginal utility”. There is plenty, I think, that psychologists could learn from economists if they’d take a cue from our hard sciences and so let their soft science go amoral.
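      A quick numerical illustration of diminishing marginal utility, using the standard concave (logarithmic) utility function economists often assume:

```python
import math

def utility(wealth):
    """Concave utility: each additional unit of wealth adds less utility."""
    return math.log(wealth)

# The same one-unit gain is worth far more at low wealth
gain_poor = utility(11) - utility(10)    # ~0.0953
gain_rich = utility(101) - utility(100)  # ~0.00995
```

      The extra unit matters roughly ten times more to the poorer agent, which is the formal version of why each additional increment of a good thing yields less and less.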


      1. Eric,
        Given the track record of economics, it seems strange to see it put forth as the harder science in comparison to psychology. I think both fields have their rigorous empirical wings, and both their political ones. Unfortunately, the political ones get a lot more attention.

        But we also have to admit that the empirical branches will never produce theories with predictions as accurate as physics. To criticize them for this is to fail to understand the unique difficulties of their subject matter.


        1. Mike,
          I’m not quite referring to economics as a “hard science”, though I do consider it one of the hardest of our mental and behavioral fields. The reason for its hardness, I think, is that it’s far enough off the center of our function (the center being psychology), that the social tool of morality hasn’t overly compromised it. Regardless of all the political hype that we observe associated with how best to run an economy and such, the field itself has solid micro, and even macro, theory from which to describe economic behavior. This is a science which is based upon utility maximization. The field of psychology hasn’t yet been able to follow its lead, I think, because its central position would cause it to more overtly conflict with the social tool of morality. (In essence this tool encourages us to spread the belief that it’s good for a person to do good for others.)

          For an example of this social tool at work, in academia no less: according to professor Eric Schwitzgebel, one of the main reasons for giving business and law students ethics courses is to make them “ethical” in the real world. See my comment, but the point is that indoctrinating people with untrue notions about our nature also conflicts with the goal of understanding our nature.


          1. Eric,
            I agree that economics has a rigorous aspect (although it’s not called “the dismal science” for no reason). But my point is, so does psychology. If I recall correctly, you majored in economics, so you’re familiar with its more scientific aspects.

            You seem less familiar with psychology. If you read Steven Pinker’s How the Mind Works, much of which is at the psychological level, I think you’ll get an idea of that core empirical aspect. For the empirical approach used in the social branches of the field, check out Big Gods by Ara Norenzayan and/or The Righteous Mind by Jonathan Haidt. (That last directly addresses morality, including incidentally, the issues with homo economicus.)


        2. Mike,
          It is not my point that the science of psychology isn’t empirically rigorous enough. (Though a case might be made for this given its enormous reproducibility crisis. And note that while this crisis infects the vast majority of behavioral sciences, it’s not being observed in the field of economics. Note that the 1971 Stanford prison experiment is still being used to introduce younger students to psychology, even though in recent years it’s been exposed as a scripted mess of pseudoscientific nonsense. But this is still not my point. Yes, psychology can be, and often is, explored in a rigorous way.)

          My point is more that psychology, unlike economics, does not yet have basic theory from which to assess our nature. From Freud, to behaviorism, to funky modern notions like Lisa Barrett’s theory of constructed emotion, all fail. And why not let parsimony work its magic? Why not adopt the theory that has succeeded in economics for the fundamental field of psychology? I suspect because it conflicts with the social tool of morality in a far more personal way.

          I’d check out Haidt’s problems with “homo economicus” right now, though I really should do a few other things. You could summarize the argument if you like however, or I can take a look at it later today.


          1. Eric,
            Haidt’s criticism is that homo economicus is not predictive of how humans actually behave. He cites empirical studies that contradict it, and makes a point by asking, assuming you knew there would be no consequences for you, how much money would you require to cause pain to a child you don’t know, or harm a cute baby animal. If we are truly homo economicus, it shouldn’t take much. But most of us either couldn’t do it, or would require a lot of money.

            This short summary doesn’t do justice to his points. And it’s been years since I read it, so I’m not prepared to debate it. If you’re interested, I recommend checking out his material.


        3. Mike,
          I googled around on homo economicus and noticed that this seems to be exactly what I was referring to: the social tool of morality influencing us to dispute that we’re all self-interested products of our circumstances. People seem to believe what they want to believe far more strongly than what the evidence suggests they should believe.

          Apparently a common way to contradict homo economicus, and certainly in Haidt’s case, is to presume (or strawman) that economic theory holds that money equals happiness. It doesn’t of course. In the economist’s eyes money is but a potential means to such an end. Do they believe that rich people are thus happy while poor people are thus unhappy? That’s ridiculous. But do they believe that money translates to power, and that people tend to use power in the quest to make themselves more happy (whether idiotically or not)? Yep.

          In asking people, even anonymously, how much money it would cost for them to do horrible things to others, note that there is a sympathy dynamic to contend with. It naturally hurts us to cause suffering when we’re forced to experience those effects (except for certain people with presumed defects in this regard). So costs here remain in line with homo economicus, since hurting others will also tend to hurt us through this mechanism. But does evidence suggest that people will tend to do more and more horrible things for more and more money when they’re less and less impacted by the suffering associated with their choices? Of course it does. Such evidence is what Haidt and others will tend not to discuss, given that it contrasts with their agenda. (Ironically, they also want to feel as good as they can!) In the end we seem to all be self-interested products of our circumstances. Apparently the field of psychology suffers tremendously given that it’s constrained by the social tool of morality. Our soft mental and behavioral sciences remain in their early days, though I seek something better.


          1. I’ll go along with the “gaining money” theory far more readily than standard moral notions, James. But I also consider it to exist as a relatively rough heuristic. The “money” part, whether the gaining or losing of it, only tends to correlate with happiness given that it may be used in “powerful” ways to promote one’s perceived interests.

            For example, up in your county there’s Bill Gates, who spent his career fostering a monopoly from which to hurt computer innovation to his own monetary benefit. I’m sure that he had fun while it lasted, but note that on the flip side he’s also been committed to giving away his vast wealth, because it presumably makes him feel good to feel like he’s helping humanity in this way. So these contrary money dynamics demonstrate that it’s not about the gaining of money itself, but rather about feeling good rather than bad regardless of the money. Wealth seems to exist as a potential means rather than an end.

