Read any mainstream neuroscience book, and one of the things you'll typically see is an admission that much of cognition remains a mystery. A lot is known about the operations of neurons and synapses, a lot about high-level signaling patterns in various brain regions, and a good amount about how sensory processing happens in some regions (such as the early visual centers), but how it all comes together for cognition is still unclear.
In an interesting paper, a team of computer scientists and a neuroscientist have come up with a proposed formalism to bridge the gap. From the paper’s abstract:
Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
They identify a number of operations between assemblies: projection, association, pattern completion, reciprocal projection, and merge.
Projection is one assembly in a particular region exciting another assembly in a different region, which, if sustained, can lead to the formation of a new assembly. Pattern completion is when exciting a small part of an existing assembly leads to the entire assembly firing. Reciprocal projection is when there are connections back to the originating assembly, leading to recurrent firing patterns in which the two assemblies reinforce each other.
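Projection can be sketched in a few lines of NumPy, assuming the ingredients the paper's abstract names: randomly connected populations, inhibition modeled as winner-take-all (only the k most-excited neurons fire), and Hebbian plasticity modeled as multiplicative reinforcement. The parameter values here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50        # neurons per area; assembly size (the "cap")
p, beta = 0.05, 0.10   # connection probability; Hebbian plasticity rate

# Random synapses from area A to area B, and recurrent synapses within B
W_ab = (rng.random((n, n)) < p).astype(float)
W_bb = (rng.random((n, n)) < p).astype(float)

# A fixed assembly (set of k firing neurons) in area A
assembly_a = rng.choice(n, size=k, replace=False)

winners = np.array([], dtype=int)
for _ in range(20):
    # Total synaptic drive into each neuron of B
    drive = W_ab[:, assembly_a].sum(axis=1)
    if winners.size:
        drive += W_bb[:, winners].sum(axis=1)
    new_winners = np.argsort(drive)[-k:]   # inhibition: only top-k fire
    # Hebbian update: strengthen the synapses onto the neurons that fired
    W_ab[np.ix_(new_winners, assembly_a)] *= 1 + beta
    if winners.size:
        W_bb[np.ix_(new_winners, winners)] *= 1 + beta
    winners = new_winners
# "winners" is now a new assembly in B that A's assembly reliably evokes
```

Because the reinforced synapses keep pulling the same neurons above the inhibition threshold, the winner set stabilizes over rounds; that stabilized set is the new assembly, and the same mechanism is what makes pattern completion work when only part of it is stimulated.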
Association is when connections between assemblies make one more likely to fire when the other one does. Merge is the formation of a new linkage assembly that is activated when two or more other assemblies are firing. (An example might be an assembly for a dog firing when an assembly for a four-legged animal and another for barking converge.)
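The dog example can be sketched with the same toy machinery as above (random connectivity, top-k inhibition, multiplicative Hebbian updates); the area names and parameters are illustrative, and this one-way version shows only the convergent half of merge, not the reciprocal projections back to the parent assemblies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 50
p, beta = 0.05, 0.10

W_ab = (rng.random((n, n)) < p).astype(float)  # parent area A -> area B
W_cb = (rng.random((n, n)) < p).astype(float)  # parent area C -> area B

four_legged = rng.choice(n, size=k, replace=False)  # assembly in A
barking = rng.choice(n, size=k, replace=False)      # assembly in C

dog = np.array([], dtype=int)
for _ in range(15):
    # The linkage assembly is driven by BOTH parents firing together
    drive = W_ab[:, four_legged].sum(axis=1) + W_cb[:, barking].sum(axis=1)
    dog = np.argsort(drive)[-k:]                  # inhibition: top-k fire
    W_ab[np.ix_(dog, four_legged)] *= 1 + beta    # Hebbian reinforcement
    W_cb[np.ix_(dog, barking)] *= 1 + beta
```

After the updates, the "dog" neurons have strengthened synapses from both parents, so they are the ones that win when the two parent assemblies fire at the same time.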
They also throw in some more basic operations, such as read, fire, inhibit, and disinhibit, along with control operations like if and repeat. The assembly operations above can actually be reduced to sequences of these more basic operations. They admit that some of them may not map precisely to biological functionality, so for now they serve as placeholders.
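The reduction of a high-level operation to a sequence of basic operations can be illustrated with a toy control layer. This is purely a sketch of the idea; the method names loosely follow the paper's vocabulary but are not its actual interface:

```python
class Brain:
    """Toy control layer: assembly ops as sequences of basic ops."""

    def __init__(self):
        self.inhibited = set()  # areas currently inhibited
        self.log = []           # record of basic operations executed

    def disinhibit(self, area):
        self.inhibited.discard(area)
        self.log.append(("disinhibit", area))

    def inhibit(self, area):
        self.inhibited.add(area)
        self.log.append(("inhibit", area))

    def fire(self, area):
        if area not in self.inhibited:  # "if" control operation
            self.log.append(("fire", area))

    def project(self, src, dst, rounds=3):
        # project = disinhibit target, repeat co-firing, re-inhibit
        self.disinhibit(dst)
        for _ in range(rounds):         # "repeat" control operation
            self.fire(src)
            self.fire(dst)
        self.inhibit(dst)

b = Brain()
b.inhibit("B")
b.project("A", "B")
```

Here the high-level `project` is nothing but a fixed sequence of disinhibit, fire, and inhibit steps wrapped in a repeat loop, which is the flavor of reduction the authors describe.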
The result is a sort of programming language. An example they provide in the discussion section, for language processing, looks like this:
do in parallel:
In their model, they make a lot of simplifying assumptions, such as all neurons in a region being randomly connected, although they think their model will adjust appropriately to non-random connectivity. But a lot of the assumptions make me wonder how well this will all map to biological realities. If I had to guess, I'd say this initial version will need a lot of work.
On the other hand, I find the ambition of the authors refreshing, and I hope it breaks a psychological barrier of sorts. Many of the operations, based on my own neuroscience reading, seem pretty plausible. As the model is tested empirically, hopefully it can be adjusted over time. The final result may look substantially different. But it's a start.
If it does work, the benefits for artificial intelligence may be immense. Only time (and additional research) will tell.