The Scientist has an interesting article up reporting on the progress that’s being made in neuromorphic hardware.
But the fact that computers “think” very differently than our brains do actually gives them an advantage when it comes to tasks like number crunching, while making them decidedly primitive in other areas, such as understanding human speech or learning from experience. If scientists want to simulate a brain that can match human intelligence, let alone eclipse it, they may have to start with better building blocks—computer chips inspired by our brains.
So-called neuromorphic chips replicate the architecture of the brain—that is, they talk to each other using “neuronal spikes” akin to a neuron’s action potential. This spiking behavior allows the chips to consume very little power, and to remain efficient even when tiled together into very large-scale systems.
Traditionally, artificial neural networks have been implemented in software. While this captures algorithms that may resemble those in biological nervous systems, it forgoes the advantages of their physical implementation. Essentially it’s emulating that hardware (wetware?), which in computing has always come with a performance hit, with the magnitude of the hit usually corresponding to just how different the architectures are. And modern chips and nervous systems are very different.
There’s a lot of mystique associated with neural networks. But it’s worth remembering that a neural network is basically a crowdsourcing strategy. Instead of one sophisticated, high-performing processor, or a few of them, like the ones in modern commercial computers, the strategy involves large numbers, millions or billions, of relatively simple processors: the neurons.
Each neuron sums its inputs, both positive and negative (excitatory and inhibitory), and fires when a threshold is reached, providing input to its downstream neurons. Synapses, the connections between neurons, strengthen or weaken depending on usage, changing the overall flow of information.
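To make that concrete, here’s a minimal sketch of the neuron model just described: sum the weighted inputs, fire when a threshold is crossed, and nudge the synaptic weights based on usage. The class name, threshold, learning rate, and the crude Hebbian-style update rule are all my own illustrative choices, not anything from the article or from real neuromorphic hardware.

```python
class SpikingNeuron:
    """Toy neuron: weighted sum of inputs, fire on threshold, adapt synapses."""

    def __init__(self, weights, threshold=1.0, learning_rate=0.05):
        self.weights = list(weights)      # synaptic strengths: + excitatory, - inhibitory
        self.threshold = threshold
        self.learning_rate = learning_rate

    def step(self, inputs):
        """Sum weighted inputs; return 1 (a spike) if the threshold is reached."""
        total = sum(w * x for w, x in zip(self.weights, inputs))
        fired = 1 if total >= self.threshold else 0
        self._update_synapses(inputs, fired)
        return fired

    def _update_synapses(self, inputs, fired):
        # Hebbian-flavored rule (illustrative): synapses active when the
        # neuron fires strengthen; otherwise they decay slightly toward zero.
        for i, x in enumerate(inputs):
            if fired and x:
                self.weights[i] += self.learning_rate * x
            else:
                self.weights[i] *= 0.99

# Two coincident excitatory inputs push the sum (1.2) past the threshold,
# so the neuron spikes; a lone inhibitory input does not.
n = SpikingNeuron([0.6, 0.6, -0.4])
spike = n.step([1, 1, 0])
quiet = n.step([0, 0, 1])
```

A real spiking network would also model membrane decay and spike timing, but the crowdsourcing point stands: each unit is trivially simple, and the interesting behavior comes from wiring huge numbers of them together.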
Of course, biological neurons are cells, which come with all the complexity associated with cellular processes. But we shouldn’t be surprised that evolution solved its computing and communication needs with cells, since in complex life it solves everything that way.
Neuromorphic computing is moving the actual hardware closer to the structure used in nervous systems. I’d always known about the performance advantages that might bring, but apparently a lot of the power efficiency of the brain (which operates on about 20 watts) comes down to its analog features, and neuromorphic computing, by adopting hybrid analog-digital structures, appears to be reaping many of those benefits.
The article also discusses various attempts that are underway to run simulations of the brain, although at present they’re simulating simplified versions of it. But combined with computational neuroscience, this approach may yield theoretical insights into actual biological brains.
I’ve written before about Moore’s Law petering out, and that continued progress in computing will require innovative architectural changes. I find it heartening that this kind of research is happening. Too much of the industry seems caught up in the quantum computing hype, but this line of inquiry may yield results much sooner.