In the fictional far future of the classic science fiction novel “Dune”, computers are taboo across all human cultures, the result of an ancient jihad that produced the religious commandment: “Thou shalt not make a machine in the likeness of a human mind.” As a result of this commandment, computers, robots, and artificial intelligence of any type are completely absent from the setting. (If you’ve never read Dune, I highly recommend it. Although I’m about to disagree with something in it, there are many reasons for its status as a landmark classic in science fiction.)
I’ve been thinking about this fictional taboo recently, because I think it highlights a common misunderstanding of the relationship between brains and computers. I happen to think that the computational theory of mind is sound (something I realize some of you disagree with), but that doesn’t mean I think the brain is a general purpose computing device.
Of course, the brain is definitely not a digital computer. Its architecture is decidedly analog. Transistors in computer chips are designed to be in one of two voltage states, which are interpreted as discrete 1s or 0s. Synapses, by contrast, vary smoothly in strength. There are many other differences, but for purposes of this post, the one I want to highlight is that brains can’t load arbitrary software. They evolved to handle certain tasks, and can’t be repurposed the way a general purpose computer can.
Brains are famously malleable and adaptable, of course, but aside from the fact that this adaptation is slow, it has inherent limits. No one has taken, say, a mouse brain and made it run accounting or navigation applications. Brains evolved to be the central command center of an animal, to increase that animal’s ability to find food and mates and avoid predators. Brains are better thought of as what the information technology industry calls appliances: information processing systems narrowly designed for certain purposes, for running certain types of applications. (A good example is the router most of us have at the center of our home’s wireless network.)
This fits with the data from evolutionary psychology and animal behavior research showing that we are not born blank slates. We come into the world with an enormous amount of cognitive “pre-wiring”: instinct, evolved programming, or whatever we want to call it. Certainly brains can learn, within the capacities of the particular species, and some of that programming can be modified or resisted by learning. But much of it can’t be. Much of it is central to what a mind does.
In other words, if the computational theory of mind is sound, then a mind is not just a computing system, it’s a specific application (or perhaps more accurately, a set of integrated applications) of a computing system. This is the main reason why the idea of a computer “waking up” into a conscious state at some level of computing capacity is infeasible. It’s a bit like saying that a computer might “wake up” to be a game system, or a tax filing application. None of these applications will come into being unless someone engineers them.
Consider that the laptop I’m typing this on has more processing power than the brains of many insects. Yet my laptop has shown no emergent insect-like behavior. Why? Because insect brains evolved for very specific purposes. My laptop didn’t. And it won’t behave like an insect unless someone painstakingly programs it to.
Why hasn’t anyone done this programming yet? Because no one knows how, yet. We can’t program a computer to act like an ant, or a bee. (At least not accurately.) To do it, we’d need at least a moderately comprehensive understanding of how ant or bee minds work, and we don’t have that yet. We certainly don’t have it for more complex animals such as mice, dogs, or humans. And, based on statements from neuroscientists in the trenches of scientific research, we’re probably decades, if not centuries, away from that understanding.
But, many will say, no one engineered humans or other animals. They simply evolved. If it happened with them, why couldn’t it happen with artificial intelligence, if we set up the right environment?
In answer, I think we have to be aware of two broad facts. One is that it took billions of years of evolution to produce animal minds, and half a billion years of additional evolution to produce human minds, and it’s far from clear that they were inevitable. People have attempted to evolve digital animals, but from what I’ve read, nothing approaching intelligence has resulted, at least not yet. And that leads to the second broad fact: we don’t really know what led to the evolution of intelligence, either broadly in the form of animal brains, or more specifically human level intelligence, which means we don’t know how to set up the right environment.
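To make concrete what “evolving digital animals” involves, here is a toy sketch of the kind of genetic algorithm such experiments build on. Everything in it (the genome length, the selection scheme, the fitness function) is my own illustrative choice, not taken from any real digital-evolution platform. The point the sketch makes is the one above: evolution in software only climbs toward whatever the fitness function rewards, and for intelligence, nobody knows what that function, or environment, should be.

```python
import random

GENOME_LEN = 32     # illustrative parameters, not from any real system
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02  # per-bit chance of flipping

def fitness(genome):
    # Stand-in objective: count of 1-bits. Trivial to state for bitstrings;
    # no one knows the analogous objective that yields a mind.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]  # truncation selection: keep best half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(fitness(g) for g in pop)

best = evolve()  # climbs toward GENOME_LEN, the only goal we knew how to state
```

The mechanics of variation and selection are the easy part; the hard, unsolved part is specifying an environment in which something like a mind is what gets selected for.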
(Note that if the evolution approach did somehow succeed in generating intelligence, then the dangers many people fear would probably be valid. Which, in my mind, is a good reason not to do it this way. It seems unethical and dangerous, and not likely to generate usable technology even if it worked.)
None of this is to say that computers won’t continue to increase in capacity and capabilities. I know I’m definitely looking forward to my self driving car and more intelligent home appliances. But I have no illusions that they will have minds, because we won’t know how to build those for a while yet. And we’re about as likely to accidentally make one as we are to accidentally make a game console.
(And if the doubters of the computational theory of mind are right, then that only seems to increase how far away we are from developing a technological mind.)
All of which is to say that the Dune universe didn’t really need to be devoid of computers to honor its taboo against machines-in-the-likeness-of-a-human-mind. It could have gotten along quite well by just mandating that no one ever develop a software mind. Not that the distinction between computers and minds was as clear in the 1960s, when Frank Herbert was writing his famous novel. But today, when many people are decrying the dangers of artificial intelligence, it’s a distinction worth being aware of.