Biological computation and the nature of software

A new paper has been getting some attention. It makes the case for biological computation. (This is a link to a summary, but there’s a link to the actual paper at the bottom of that article.)

The authors characterize the debate between computational functionalism and biological naturalism as one between camps that are hopelessly dug in. They propose that the brain does do computation, but of a very different kind from the type done in the device you’re reading this on, a kind they call “biological computation.”

The differences are that biological computation is a hybrid of digital (discrete) and analog (continuous) computing, that there is no clean division between software and hardware, or between algorithms and implementation, and that metabolism and energy constraints shape the processing that happens. They sum it up as: in the brain, the algorithm is the substrate.

The authors argue that to build artificially conscious systems, it may be necessary to go with a different physical ontology, one that is closer to the way biology works.

Let me start by saying that this paper is a big improvement over the usual arguments about the distinctions between computers and biology. The authors are making a real effort to identify what supposedly makes biology unique. Most of what they’re saying already accords with my own understanding of how the brain works, and what’s different about its computation. There are a few points where they try to pass off speculation as established fact, but those are nits.

That said, I think they oversell some of their points. For example, the distinction between analog and digital is often less than it appears. We listen to music and watch movies all the time in digital formats that were originally recorded in analog. Yes, something can be lost in the translation from continuous to discrete signaling, but an analog system always has variance noise: variations in a system’s processing, both relative to other systems of the same type and between runs of the same system. The trick is for the translation to keep the quantization noise, the distortion from moving to a discrete format, below the variance noise in the original.
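
To make that concrete, here’s a minimal sketch in Python (my own illustration, not anything from the paper): a noisy “analog” signal is quantized to 16 bits, and the quantization noise lands far below the variance noise the signal already carried.

```python
import numpy as np

rng = np.random.default_rng(0)

# An "analog" signal: a 440 Hz tone, with variance noise added to stand
# in for the run-to-run jitter inherent to any analog system.
t = np.linspace(0, 0.01, 4800)
clean = np.sin(2 * np.pi * 440 * t)
analog = clean + rng.normal(scale=0.01, size=t.shape)  # variance noise

# Quantize to 16 bits, as a CD would.
levels = 2 ** 15  # 16-bit signed: 32768 steps per unit of amplitude
digital = np.round(analog * levels) / levels

print(f"variance noise:     {np.std(analog - clean):.6f}")
print(f"quantization noise: {np.std(digital - analog):.6f}")
# The quantization noise comes out roughly a thousand times smaller
# than the noise floor the analog signal already had.
```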

Another is the aspect they call scale inseparability, the idea that the brain doesn’t use the layers of abstraction that technology does. Those layers exist in technology to make systems easier for engineers to understand and maintain. Evolution doesn’t care about understanding, so it isn’t a factor in how biological systems are organized. The authors use this to imply that the software / hardware divide may be something the technology side has to give up, that the algorithm may need to be in the substrate as it is with biology.

I think this represents confusion about what software actually is. We usually talk about software as a set of instructions that a processor follows. In most cases, it’s convenient to think about it that way. But at a more physical level, it makes more sense to think of software as a configuration of hardware. So when software is running on hardware, the algorithm is always the substrate.
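
A toy illustration of what I mean, using a made-up two-field instruction format: the “software” below is nothing but a configuration of the machine’s memory, and the “hardware” is just the loop that steps through it.

```python
# A toy stored-program machine. The "software" is just values sitting
# in the memory that the "hardware" (the loop below) steps through.
def run(memory):
    """Fetch-decode-execute over (op, operand) pairs."""
    pc, acc = 0, 0
    while memory[pc][0] != "halt":
        op, operand = memory[pc]
        if op == "load":
            acc = operand
        elif op == "add":
            acc += operand
        pc += 1
    return acc

# Reconfiguring this list physically reconfigures what the machine
# does, no soldering required.
program = [("load", 2), ("add", 3), ("halt", 0)]
print(run(program))  # 5
```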

The real distinction here is that technological computers are designed to be reconfigured on the fly. This is actually an amazing achievement when you stop and think about it. I often see articles marveling at the brain’s plasticity, its ability to rewire itself. But your computer’s memory can undergo wholesale reconfiguration on demand by loading a new software package, something brains can’t do, at least not quickly.

Of course, this comes with vulnerabilities brains are far less susceptible to. One reason computers can be hacked is this ability to massively reconfigure. Not that brains are completely immune. Ant brains can be hacked by a fungal infection, and cat owners can be infected with a parasite that makes them like their cats more. And that’s aside from the ability of advertisers and propagandists to hijack our brains’ reasoning to introduce notions we might otherwise resist. But it’s a harder thing to do effectively in biological systems.

What’s important to realize is that anything that can be done in hardware can, in principle, be done in software, at least once a minimal general computing platform is in place. You can run software that emulates other hardware platforms so you can run their software. It is true that doing it in hardware is often far more efficient in terms of performance and energy, but that comes with reduced flexibility. It’s why we now run word processors on our general-purpose computers instead of the old word processing machines that once existed.
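
As an illustration of the principle (my own toy example, not any real emulator): here the behavior of a 4-bit ripple-carry adder, the kind of circuit normally etched into silicon, is reproduced gate by gate in software.

```python
def full_adder(a, b, carry_in):
    """One full-adder cell, built from the same boolean gates hardware uses."""
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

def add4(x, y):
    """Ripple four full adders together, exactly as the circuit would."""
    carry, result = 0, 0
    for bit in range(4):
        s, carry = full_adder((x >> bit) & 1, (y >> bit) & 1, carry)
        result |= s << bit
    return result, carry  # the final carry doubles as an overflow flag

print(add4(0b0101, 0b0011))  # (8, 0): 5 + 3 = 8, no overflow
```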

So I don’t think the fact that current AI runs on software neural networks is, in and of itself, a showstopper. Another difference is that the brain operates with massive parallelization, far more than any current technological system. Software systems can still perform something like the brain’s processing because they operate millions of times faster, and the addition of GPUs, designed with parallelization in mind, helps a great deal.
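
A rough sketch of that tradeoff (illustrative numbers and a made-up update rule, nothing from the paper): one tick of a million-unit software “layer,” done as a fast sweep rather than a million simultaneous physical events.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1_000_000                          # a million software "neurons"
rates = rng.random(n)                  # current firing rates, 0..1
weights = rng.normal(scale=0.1, size=n)

def step(rates, weights, drive=0.5):
    """One leaky update toward a weighted input; a crude stand-in for
    what biology would do as simultaneous physical events."""
    inputs = drive + weights * rates
    return np.clip(rates + 0.1 * (inputs - rates), 0.0, 1.0)

rates = step(rates, weights)           # one serial-but-fast sweep
print(rates[:3])
```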

But that, I think, gets to a valid concern the authors raise about energy constraints. Discrete processing, and doing things in software instead of hardware, come at a cost in energy and performance. This is something I do think AI researchers should be paying more attention to. All we need to do to understand how far current AI is from animal intelligence, much less the human level, is look at the vast amounts of data and energy it requires to do what it does. Datacenters are sucking the power grid dry to meet their energy demands. All of which speaks to how crude the technology remains in comparison to biological intelligence.
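
Some rough arithmetic makes the gap vivid (ballpark public figures, not numbers from the paper):

```python
# Back-of-envelope only; all figures are rough, commonly cited estimates.
brain_watts = 20          # human brain's typical power draw
gpu_watts = 700           # one modern datacenter GPU running flat out
cluster_gpus = 10_000     # a plausible large training cluster

cluster_watts = gpu_watts * cluster_gpus    # 7 megawatts
print(cluster_watts / brain_watts)          # ~350,000 brains' worth of power
```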

But this energy constraint issue is broader than just trying to reproduce biological processes. I think it’s a problem for all technological computing. And it will likely eventually result in architecture changes. Understanding how biology does it may be important, but I tend to doubt the solution will be doing it exactly like those systems.

And this gets to a sentiment I detect in the paper and the write-ups about it: the idea that consciousness is a ghost in the machine, one we need to find the magic ingredients for so we can generate it. I think this is fundamentally the wrong way to think about it. Neuroscientist Hakwan Lau, I think in a Bluesky post, sums up the issue: why do we think this might be true for consciousness when it isn’t for so many other things the body does, like motor control?

All that said, I do like the term “biological computation.” It admits that the computation in brains is different while still acknowledging the important ways it’s the same. I suspect that won’t be enough for those strongly convinced computationalism is wrong, but it still feels like useful progress.

What do you think about the points the authors make? Or my take on them? Are they right that a new hardware architecture is required? Or would even that be enough? Does the “biological computation” term strike the right balance?

5 thoughts on “Biological computation and the nature of software”

  1. The fundamental difference between AI and the human brain is that neurons only fire when relevant, not continuously like those energy-sucking machines. So first it is key to understand the mathematics of a two-dimensional Fourier transform – that is, what AI basically does. Powerful but stupid. Of course it can do good things if used right.
    However, I strongly oppose the idea of my brain computing something. I have a balance system that lets me move, underlaid by a dynamic strengthening function that adjusts my body’s strength to sustain that movement. Given that freedom, I decide to move in a convenient way – and that includes my personal relationships (social adhesion).
    Why should a number cruncher do this?
    Remember that “green IT” means slender programs, not firing those datacenters with hydrogen. It took 360 bytes of tubed RAM to fly to the moon…

    1. Neurons actually are continuously firing. When we read about something causing a neuron to fire, it would be more accurate to say the stimulus increases the neuron’s firing frequency. And when we read about a neuron being inhibited, that means it’s being caused to fire less frequently.

      The background frequency does vary a lot depending on whether the neuron is part of the current coalition of neural circuits in control. And of course the background rate is higher when we’re awake. But it’s never entirely absent. It’s one of the reasons the brain is the most energy-hungry organ in the body.
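
      A toy Poisson model makes the point (the rates here are made up for illustration): inputs shift the frequency around a nonzero background rather than switching the neuron on and off.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def spikes_per_second(rate_hz, duration_s=1.0):
          """Sample a spike count from a Poisson process at the given rate."""
          return rng.poisson(rate_hz * duration_s)

      background = 5.0                             # spontaneous rate, spikes/s
      print(spikes_per_second(background))         # "at rest" it still fires
      print(spikes_per_second(background + 20))    # "excited": fires more often
      print(spikes_per_second(background - 4))     # "inhibited": less often, not silent
      ```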

      I would say deciding to do anything is a logical process. (We might not think of the decision as logical, but that’s logic at a different level of description.) And logical processing is computation.

      Based on what I’ve seen, the Apollo guidance computer had 4K of memory (in current terminology). The cheapest digital watch would definitely have more now. The old Atari 2600, I think, started with about that much less than a decade later.

  2. I think the significant difference between digital and analog processing is that digital operations resolve to a distinct 1 or 0, Yes or No (or some other finite number of discrete outcomes), while analog operations can resolve to a virtual infinity of outcomes, within the resolution of the physical world itself. With respect to the description or emulation of reality, there might even be some connection with Zeno’s paradoxes.

    The difference between software and hardware is more blurred. Our current AI technology relies largely on graphics processors, which were originally designed to perform massively parallel processing for ray-tracing, edge recognition, and other calculating operations required by advanced gaming systems. It’s possible to implement this in software on general-purpose computers, but it runs much faster on “dedicated” hardware, which is devoted to specific computational tasks. The tasks themselves can be tackled either way; to this extent software and hardware can be understood as doing the same thing, and therefore functionally equivalent.

    In the case of biological computation, a hardware implementation would no doubt provide faster results, but the necessary algorithms could in principle be implemented either way. This seems to point toward a mathematical or structural reality independent of either expression.

  3. Yep, I’m very much on board with this approach.

    “distinction between analog and digital is often less than it appears”

    The point may be valid, but your example of music and movies is weak. Digital music and movies don’t use dynamic programs; they are simply translations of static data into sound and images.

    “it makes more sense to think of software as a configuration of hardware”

    I think that is a stretch. More importantly, it misses the point that in biological systems, changing the “implementation” changes the “computation,” because the two are tightly intertwined. If the hardware of a computer changed significantly, the software likely wouldn’t run on it without at least being recompiled for the new platform or run under an emulator. Software never changes the basic hardware, but the brain is constantly changing itself because it is implemented in its hardware.

  4. This one’s near and dear to my heart. I think you’re spot on, but I want to add a comment or two.

    First, I also like seeing the move toward understanding that the brain is computing/processing information. But it’s a simple truism that any computation, including analog, can be performed by a Turing machine to any degree of accuracy short of perfection, and given the fact of noise in the brain, the computations don’t need to be that close to perfection.
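
    A quick sketch of that truism in Python (my own example, using simple Euler integration): discretizing the continuous process dx/dt = -x with smaller and smaller steps drives the digital answer arbitrarily close to the analog one.

    ```python
    import math

    # Euler integration of dx/dt = -x from x(0) = 1 over one second.
    # Shrinking the step size shrinks the error, so the discrete version
    # approaches the continuous one to any accuracy short of perfection.
    def euler(steps, x0=1.0, t_end=1.0):
        x, dt = x0, t_end / steps
        for _ in range(steps):
            x += dt * (-x)
        return x

    exact = math.exp(-1.0)
    for steps in (10, 100, 1000, 10000):
        print(steps, abs(euler(steps) - exact))  # error falls with each refinement
    ```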

    Second, I think the major significance of the hardware/software dichotomy comes from the fact that there is a software/informational perspective, subjectivity, which is independent of the hardware and so multiply realizable. When we talk about consciousness, it is this perspective we’re talking about. Yes, the perspective is determined by the given hardware, but it’s the informational perspective that gets us to “aboutness” and “qualia”.

    Third, I think the major flaw in the paper is the same flaw in panpsychism, OrchOR, and many other theories, which is “we don’t understand consciousness … so … maybe it’s over here [waves hand]”.

    There is one good point to be made from biological computationalism, which is that the exact digital/analog computations being performed in a brain are so nuanced that duplication, as a practical matter, would be impossible. I think this is the correct reason why there will be no “mind uploading”. But if consciousness is a particular class of computation, like, say, I don’t know, pattern recognition, then creating consciousness in machines would be fairly straightforward and possibly already done.

    *

    [note: I’ve learned to write my response in the notes app and then copy paste here. So much better.]
