A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor.
The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. But can those data and tools ever yield true understanding? This episode features neuroscientist and computer scientist Eric Jonas, discussing his provocative paper titled “Could a Neuroscientist Understand a Microprocessor?” in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience’s tools to a system that humans fully understand (because we built it from scratch), he was able to reveal how surprisingly uninformative those tools actually are.
More specifically, Jonas looked at how selectively removing one transistor at a time (effectively creating a one-transistor-sized lesion) affected the behavior of three video games: Space Invaders, Donkey Kong, and Pitfall. The idea was to see how informative a technique often used in neuroscience, correlating a lesion with a change in behavior, would be for understanding how the chip generated game behavior.
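The shape of the experiment is simple enough to sketch in a few lines. The code below is my own illustrative pseudocode-made-runnable, not the actual simulator or API from the study: it knocks out one transistor at a time, tries each game, and records which games fail. The toy `run_game` stand-in bakes in the kind of result Jonas describes, with some transistors needed by every game and one needed only by Donkey Kong.

```python
# Hypothetical sketch of the lesion-style experiment (all names are
# illustrative, not from the actual study): disable one transistor at
# a time and record which games no longer run.

def lesion_experiment(num_transistors, games, run_game):
    """For each transistor, disable it, try each game, record failures."""
    results = {}
    for t in range(num_transistors):
        broken = {game for game in games if not run_game(game, disabled={t})}
        results[t] = broken
    return results

# Toy stand-in for the chip simulator: transistors 0-4 are needed by
# every game; transistor 5 only by Donkey Kong, 6 only by Space
# Invaders, 7 only by Pitfall. Transistors 8-9 affect nothing.
def run_game(game, disabled):
    shared = {0, 1, 2, 3, 4}
    game_specific = {"Donkey Kong": {5}, "Space Invaders": {6}, "Pitfall": {7}}
    needed = shared | game_specific[game]
    return not (needed & disabled)

results = lesion_experiment(
    10, ["Donkey Kong", "Space Invaders", "Pitfall"], run_game)
print(results[5])  # {'Donkey Kong'} -- the "Donkey Kong transistor"
print(results[0])  # all three games fail
print(results[9])  # set() -- no observable effect
```

In the toy version, the lesion-to-behavior correlation looks crisp, which is exactly the trap: the "Donkey Kong transistor" label says almost nothing about how the chip actually works.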
As it turned out, not very informative. From the transcript:
But we can then look on the other side and say: which transistors were necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors actually are necessary for any game at all. If you break that, then just no game is played. And half the transistors if you get rid of them, it doesn’t appear to have any impact on the game at all.
There’s just this very small set, let’s say 10% or so, that are … less than that, 3% or so … that are kind of video game specific. So there’s this group of transistors that if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist you’d say, “Yes! These are the Donkey Kong transistors. This is the one that results in Mario having this aggression type impulse to fight with this ape.”
Jonas makes an important point, one that just about any reputable neuroscientist would agree with: neuroscience is far from having a comprehensive understanding of how brains generate behavior. His actual views are quite nuanced. Still, I think many people are overselling the results of this experiment. There's a sentiment that all the neuroscience work currently being done is worthless, which I think is wrong.
The issue, which Jonas accepts but then largely dismisses, lies in the differences between what we think we know about how brains work and how computer chips work, specifically the hardware / software divide. When we run software on a computer, we're actually using layered machinery. On one level is the hardware, but on another level, often just as sophisticated, if not more so, is the software.
To illustrate this, consider the two images below. The first is the architecture of the old Intel 80386DX processor. The second is the architecture of one of the most complicated software systems ever built: Windows NT. (Click on either image to see it in more detail, but don't worry about understanding the actual architectures. I'm not going down the computer science rabbit hole here.)
The thing to understand is that the second system is built completely on the first. If it occurred in nature, we’d probably consider the second system to be emergent from the first. In other words, the second system is entirely a category of actions of the first system. The second system is what the first system does (or more accurately, a subset of what it can do).
This works because the first system is a general purpose computing machine. Windows is just one example of vast ephemeral machines built on top of general computing ones. Implementing these vast software machines is possible because the general computing machine is very fast, roughly a million times faster than biological nervous systems. This is why virtually all artificial neural networks, until recently, were implemented as software, not in hardware (as they are in living systems).
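The layering idea can be made concrete with a toy example of my own (not something from the episode): a tiny stack-based interpreter. The interpreter loop plays the role of the general-purpose hardware; any program it runs is an ephemeral machine that exists only as a pattern of the loop's actions.

```python
# A toy general-purpose machine: a minimal stack-based interpreter.
# The loop below is the "hardware"; each program list is a "software
# machine" that exists only as a pattern of what the loop does.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Two different "machines" running on identical hardware:
adder = [("push", 2), ("push", 3), ("add",)]
scaler = [("push", 2), ("push", 3), ("add",), ("push", 10), ("mul",)]

print(run(adder))   # 5
print(run(scaler))  # 50
```

Nothing in the interpreter's structure tells you which of these two machines it is running at any moment, which is the core of the hardware / software problem for lesion-style analysis.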
However, a performance optimization that always exists for engineers who control both the hardware and software of a system is to implement functionality in hardware. Doing so often improves performance substantially, since it moves that functionality down to a more primal layer. This is why researchers are now starting to implement neural networks at the hardware level. (We don't implement everything in hardware because doing so would require a lot more hardware.)
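To see what "implemented as software" means here, consider a single artificial neuron. What biology does in physical hardware (membranes, synapses) is typically emulated as a few lines of arithmetic on a fast general-purpose machine. This is just a generic textbook neuron, a minimal sketch rather than any particular research system:

```python
# A software neuron: a weighted sum of inputs followed by a sigmoid
# squashing function. In a biological system this computation is done
# by the physical substrate itself; here it is done by arithmetic
# running on general-purpose hardware.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

out = neuron([1.0, 0.0, 1.0], [0.5, -0.5, 0.25], bias=-0.5)
print(round(out, 3))  # 0.562
```

Moving this computation into dedicated circuitry is exactly the kind of hardware optimization described above: the function stays the same, but it becomes a property of the substrate rather than of a program running on it.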
Now, imagine that the only hardware an engineer had was a million times slower than current commercial systems. The engineer, tasked with creating the same overall systems, would be forced to optimize heavily by moving substantial functionality into the hardware. Much more of the system’s behavior would then be modules in the actual hardware, rather than modules in a higher level of abstraction.
In other words, we would expect that more of a brain's functionality would be in its physical substrate, rather than in some higher abstraction of its behavior. As it turns out, that's what the empirical evidence of the last century and a half of neurological case studies shows. (The current wave of fMRI studies is only confirming this, and doing so with more granularity.)
Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer. Strictly speaking, he’s right. But the evidence we have from neuroscience doesn’t match the evidence he obtained by lesioning a 6502 processor. In the case of brains, lesioning a specific region very often leads to specific function loss. If the brain were a general purpose computing system, we would expect results similar to those with the 6502, but we don’t get them.
Incidentally, lesioning a 6502 to see the effect it has on, say, Donkey Kong, is a mismatch between abstraction layers. Doing so seems more equivalent to lesioning my brain to see what effect it has on my ability to play Donkey Kong, rather than my overall mental capabilities. I suspect half the lesions might completely destroy my ability to play any video games, and many others would have no effect at all, similar to the results Jonas got.
Lesioning the 6502 to see what deficits arise in its general computing functionality would be a much more relevant study. This recognizes that the 6502 is a general computing machine, and should be tested as one, just as testing for brain lesions recognizes that a brain is ultimately a movement decision machine, not a general purpose computing one. (The brain is still a computational system, just not a general purpose one designed to load arbitrary software.)
All of which is to say, while I think Jonas’ point about neuroscience being very far from a full understanding of the brain is definitely true, that doesn’t mean the more limited levels of understanding it is currently garnering are useless. There’s a danger in being too rigid or binary in our use of the word “understanding”. Pointing out how limited that understanding is may have some cautionary value, but it ultimately does little to move the science forward.
What do you think? Am I just rationalizing the difference between brains and computer chips (as some proponents of this experiment argue)? Is there evidence for a vast software layer in the brain? Or is there some other aspect of this that I’m missing?