Assembly Calculus: the missing link between mind and brain?

Read any mainstream neuroscience book and you’ll typically find an admission that, while a lot is known about the operations of neurons and synapses, about high level signalling patterns in various brain regions, and about how sensory processing happens in some regions (such as the early visual centers), how it all comes together for cognition remains a mystery.

In an interesting paper, a team of computer scientists and a neuroscientist have come up with a proposed formalism to bridge the gap.  From the paper’s abstract:

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.

They identify a number of operations between assemblies: projection, association, pattern completion, reciprocal projection, and merge.

Projection is one assembly in a particular region exciting another assembly in another region, which if sustained can lead to the formation of a new assembly.  Or, in cases of pattern completion, exciting a small part of an existing assembly can lead to the entire assembly firing.  Reciprocal projection is when there are connections back to the originating assembly, leading to recurrent firing patterns that cause the assemblies to reinforce each other.
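To make projection concrete, here’s a minimal toy sketch in Python (my own illustration with made-up parameters, not the authors’ code): a stimulus assembly repeatedly fires into a randomly connected target area, only the top-k most excited neurons fire each round (a stand-in for inhibition, the paper’s “k-cap”), and Hebbian plasticity strengthens edges into the winners until a stable assembly forms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.10  # neurons per area, cap size, edge prob, plasticity (all illustrative)

# Random directed connectivity: stimulus area -> target area, plus recurrent edges in the target
W_stim = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

# A fixed k-neuron stimulus assembly
stimulus = np.zeros(n)
stimulus[rng.choice(n, k, replace=False)] = 1.0

winners = np.zeros(n)
history = []
for t in range(20):
    # Total synaptic input to each target neuron
    inputs = W_stim.T @ stimulus + W_rec.T @ winners
    # k-cap: only the k most excited neurons fire (stand-in for inhibition)
    new_winners = np.zeros(n)
    new_winners[np.argsort(inputs)[-k:]] = 1.0
    # Hebbian plasticity: multiply weights on pre-active -> post-active edges by (1 + beta)
    W_stim[np.outer(stimulus, new_winners).astype(bool)] *= 1 + beta
    W_rec[np.outer(winners, new_winners).astype(bool)] *= 1 + beta
    history.append(new_winners)
    winners = new_winners

# With sustained firing plus plasticity, the winner set settles into a stable assembly
overlap = int((history[-1] * history[-2]).sum())
print(f"overlap between the last two rounds: {overlap}/{k}")
```

The point is just the shape of the mechanism: random connectivity, a winners-take-all cap, and multiplicative Hebbian updates are enough for a stable assembly to emerge in the target area.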

Association is when connections between assemblies make one more likely to fire when the other one is.  Merge is the formation of a new linkage assembly that is activated when two or more other assemblies are firing.  (An example might be an assembly for a dog firing when an assembly for a four legged animal and another for barking converge.)
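A similar toy sketch for merge (again my own hedged illustration, not from the paper): two parent assemblies, say “four-legged” and “barks”, project into a shared downstream area, where the k-cap and Hebbian plasticity carve out a new linkage assembly for “dog”.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p, beta = 1000, 50, 0.05, 0.10  # illustrative parameters

# Random connectivity from each parent's area into a shared downstream area
W_a = (rng.random((n, n)) < p).astype(float)
W_b = (rng.random((n, n)) < p).astype(float)

def random_assembly():
    v = np.zeros(n)
    v[rng.choice(n, k, replace=False)] = 1.0
    return v

four_legged, barks = random_assembly(), random_assembly()

winners = np.zeros(n)
stable = 0
for t in range(15):
    # Downstream neurons are driven by both parents firing at once
    inputs = W_a.T @ four_legged + W_b.T @ barks
    new_winners = np.zeros(n)
    new_winners[np.argsort(inputs)[-k:]] = 1.0  # k-cap
    # Hebbian plasticity entrenches whichever neurons won
    W_a[np.outer(four_legged, new_winners).astype(bool)] *= 1 + beta
    W_b[np.outer(barks, new_winners).astype(bool)] *= 1 + beta
    stable = int((new_winners * winners).sum())
    winners = new_winners

dog = winners  # the merged "dog" assembly, tuned to the combined input
print(f"winners unchanged over the last round: {stable}/{k}")
```

The merged assembly is just the stable winner set under the parents’ combined drive; in the paper’s full version the projections are reciprocal, so the linkage assembly can also reactivate its parents.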

They also throw in some more basic operations, such as read, fire, inhibit, and disinhibit, along with control operations like if and repeat.  The assembly operations above can actually be reduced to sequences of these more basic operations.  They admit some of them may not map precisely to biological functionality, so they’re placeholders for now.

The result is a sort of programming language.  An example they provide in the discussion section, for language processing, looks like this:

find-verb(Im,MTL,x),
find-subj(Im,MTL,y),
find-obj(Im,MTL,z);
do in parallel:
    reciprocal.project(x,WVb,x′),
    reciprocal.project(y,WSubj,y′),
    reciprocal.project(z,WObj,z′);
merge(x′,z′,Broca44,p);
merge(y′,p,Broca45,s).

In their model, they make a lot of simplifying assumptions, such as all neurons in a region being randomly connected, although they think their model will adapt appropriately to non-random connectivity.  But a lot of the assumptions make me wonder how well this will all map to biological realities.  If I had to guess, this initial version will need a lot of work.
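For a sense of what that assumption amounts to, here’s a small sketch (parameters are mine, purely illustrative) contrasting the paper’s uniform random connectivity with one crude non-random alternative, where connection probability falls off with distance between neurons laid out on a ring:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 0.05  # illustrative values

# The paper's simplifying assumption: uniform (Erdős–Rényi) random connectivity
W_uniform = (rng.random((n, n)) < p).astype(float)

# One crude non-random alternative: neurons on a ring, with connection
# probability falling off with distance (purely for contrast)
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, n - dist)                   # wrap-around ring distance
p_local = np.clip(p * (n / 10) / (dist + 1), 0, 1)  # nearby neurons far more likely
W_local = (rng.random((n, n)) < p_local).astype(float)

print(f"uniform density: {W_uniform.mean():.3f}, local density: {W_local.mean():.3f}")
```

Both graphs can have similar overall density, but the second concentrates edges locally, which is closer to how cortex is actually wired; the authors’ claim is that their operations should still go through under that kind of structure.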

On the other hand, I find the authors’ ambition refreshing, and hope it breaks a psychological barrier of sorts.  And many of the operations, based on my own neuroscience reading, seem pretty plausible.  As the model is tested empirically, hopefully it can be adjusted over time.  The final results may look substantially different.  But it’s a start.

If it does work, the benefits for artificial intelligence may be immense.  Only time (and additional research) will tell.

24 thoughts on “Assembly Calculus: the missing link between mind and brain?”

  1. You write, “In their model, they make a lot of simplifying assumptions, such as all neurons in a region being randomly connected, although they think their model will adjust appropriately to non-random connectivity.” Why would the assumption that “all neurons in a region [were] randomly connected” simplify their model more so than the assumption of “non-random connectivity”? Maybe the connectivity starts out randomly but then different neurons adopt different roles or functions, shifting the connectivity more to the non-random end of the scale. Non-random connectivity seems more economical than random connectivity.


    1. I think starting out randomly and then developing connections is what the model assumes. (Although I wasn’t quite clear on that. I might be reading too much into the paper.) But we know the mind is not a blank slate. We have innate abilities, such as the ability to recognize a face right out of the womb. We also fear snake shapes innately. And babies know to open their mouth when held up by their palms (for breast feeding).

      Some of it, such as the reaction to snake shapes and the suckling reflex, is almost certainly subcortical, but from what I understand, the face one isn’t. That said, it might be that much of the cortex starts off much more evenly (that is, randomly) connected than the subcortical regions.

      Anyway, assuming randomness at the beginning probably simplified the model. The biological version will almost certainly be more complicated.


  2. Sounds a little like a Rube Goldberg Theory of Mind. 🙂

    The authors seem to focus on the higher, more human-only sort of brain functions – language, planning, etc – but then periodically wander back to the more general term “cognition,” so I’m not sure whether they are primarily focused on language or on broader aspects of cognition that would apply to much simpler organisms like spiders and insects. If they are aiming for a more general theory, it might be useful to focus on smaller organisms. Language is so uniquely human, and so uniquely confined to very specific areas of the brain, that it is hard to see how a theory primarily focused on it would apply to cognition more broadly. It seems like a cart-before-horse thing: they are trying to explain one of the most complicated things a brain does and leverage the explanation to explain cognition in general.

    That neurons work in assemblies is right. Most of the connections of neurons are local and this is probably because it is faster to work locally and the brain would fill up with connections if every neuron was connected to every other neuron. This likely is an evolutionary solution to communication in larger brains. How would the theory apply to cognition in smaller brains with smaller numbers of neurons?


    1. The example is on language, but that’s really in the discussion section of the paper, toward the end. I shared that example because it was the most comprehensive one. I think the authors are aiming at cognition in general, not just human cognition. It’s just that the examples are human centered.

      My thinking is that something like assemblies (which are known by a variety of other names, such as Damasio’s convergence-divergence zones) are used in all vertebrate nervous systems. I haven’t studied invertebrate ones enough to be sure, but I’d expect them there too. There just aren’t nearly as many, and in very simple creatures, the assemblies themselves might be simpler. But I’ll admit on this I’m speculating.


      1. I’m still working on the paper, so I may have more to say, but my impression is that the “assemblies” they’re talking about are units which are essentially similar or identical in structure/organization until various operations (inputs) cause them to reorganize in various ways, mostly by changing their interactions. I think this type of structure is unlikely to be found in low-cognitive-level animals. It’s found in human cortex, and so presumably in all mammalian cortex, and I’ll guess in whatever structure makes birds like corvids fairly intelligent.

        I’m wondering/expecting that the units of assemblies will be essentially cortical columns.

        *
        [preview of where my head’s at: assemblies = unitrackers]


        1. They admit that the initial random connectivity is an assumption. I’m not sure how true it is in any biological system, although as I mentioned somewhere else on this thread, there may be sections of cortex where it’s true. But they assert that their model also works in a system that doesn’t start out random. Which definitely seems like the case in subcortical regions.

          I figured you’d map the assemblies to unit trackers. I would have mentioned them when I mentioned convergence-divergence zones, but my brain couldn’t retrieve the term at the time.


          1. Depends, of course, on how you mean “Consciousness”. For me, Consciousness does not require unitrackers (pattern recognition units), but unitrackers do play a prominent role in human consciousness. And yes I’m thinking digitally, and serially, and parallel-ly, and analog-ly because I’m thinking computationally. I have my reasons.

            *


      1. Prefrontal synthesis is an already existing theory that involves merging of multiple neural assemblies. In particular it seems to be heavily involved in language ability.


        1. Ah, I see what you’re thinking. My understanding of PFS is broader than just language; more about goal directed use of imagination. In that sense, I think that type of functionality is widespread. Although how widespread is a contentious issue.


          1. It is broader than language but has been specifically tied to language.

            “There is evidence that a deficit in PFS in humans presents as language which is “impoverished and show[s] an apparent diminution of the capacity to ‘propositionize’. The length and complexity of sentences are reduced. There is a dearth of dependent clauses and, more generally, an underutilization of what Chomsky characterizes as the potential for recursiveness of language.”

            https://en.wikipedia.org/wiki/Prefrontal_synthesis#cite_note-20


          2. Link should be https://en.wikipedia.org/wiki/Prefrontal_synthesis

            BTW, there are some examples in the book Arthropod Brains of some fairly complex planning. It cites an example of a tarantula wasp digging a hole, then returning with a locust, and apparently measuring the size of its prey by walking back and forth over it to determine how much it needed to enlarge the hole for the prey to fit.


  3. “The result is a sort of programming language.”

    Sort of. Or what people who actually develop programming languages call “wishful thinking.” 😀

    This idea goes back to languages like Prolog and Lisp. Back then, this dream of writing code for general intelligence resulted in the first AI Winter. What put AI back on the table as a serious contender was deep learning neural nets, a very different approach.

    Maybe neurophysiology has advanced enough for this old approach to work, but so far it sounds like a lot of guesswork and hand-waving. There’s a lot of vague and qualifying language in the bit you quoted.


    1. This is a team of computer scientists and one neuroscientist. If they are weak on one side of this, it’s more likely to be on the neurological one.

      The language is meant to be an abstraction of neural processing, not something in lieu of it. I don’t know if it will succeed, but I’m glad someone is trying. Even if they do succeed, this almost certainly is far from the final form. But if it is possible, the only way to find out is to try it and continue iterating until success or sustained failure.

      That quote is the abstract, which is highly condensed by design. The paper obviously goes into far more detail.


  4. All in all, I think this paper is promising. Nice catch, Mike.

    I like it because I can map all of the features to my understanding of how things work. You can see where you’re going to get integration of information, predictive processing, working memory, and of course, unitrackers.

    I was gratified to find the language aspect and how it fits.

    Things they haven’t put in yet would include the pruning of neurons during development (but I guess that’s just the neurons that never get picked for an assembly) and the global workspace (which I suspect will involve “read” and “write” operations).

    I expect there will be significant changes in function depending on the plasticity value, and maybe that will be mapped to specific areas.

    Looking forward to seeing where this goes.

    *
    [and how I might apply it to my robot, which I finally started messing with]


    1. Thanks James. Glad you found it useful.

      Interestingly, this being more of a computer science paper, it seems to have been completely under the radar of our usual network of neuroscientists, philosophers, and psychologists. I was lucky one of my feed filters picked it out of the EurekAlerts.

      A lot of neuroscience papers today include neural circuit diagrams. Makes you wonder whether at some point in the future cognitive science papers might include code similar to this for mapping cognition to the neural level.


  5. I think they want to say that, if we want to understand the brain computationally, the elements of analysis should be assemblies, not neurons. Totally agree, and further, I think the elements should be pieces of information. But our brain encodes information in assemblies (certainly not at the level of individual neurons), so to understand the brain, we’d better work at the assembly level.


    1. Definitely the idea is to find a way to look at the brain’s operations at a higher abstraction level than neurons. This is a very early effort, and I anticipate the final model might turn out very different, but it has to start somewhere, and the effort may help in piercing some conceptual barriers.

