Peter Kassan has an article at Skeptic Magazine which sets out to disprove the simulation hypothesis, the idea that we’re all living in a computer simulation.
I personally find arguing about the simulation hypothesis unproductive. Short of the simulation owner deciding to jump in and contact us, we can’t prove the hypothesis. Even if the simulation has flaws we could find and perceive, we can never know whether we are looking at an actual flaw or just something we don’t understand. For example, is quantum wave-particle duality a flaw in the simulation, or just a puzzling aspect of nature?
Nor can we disprove the simulation. There’s simply no way to prove to a determined skeptic that the world is real. And if we are in a simulation, it appears to exact unpleasant consequences for not taking it seriously. It effectively is our reality. And we have little choice but to play the game.
But this post isn’t about the simulation hypothesis. It’s about the central argument Kassan makes against it, that there can’t be a consciousness inside a computer system. The argument Kassan uses to make this case is one I’m increasingly encountering in online conversations, involving assertions about the nature of information.
The argument goes something like this. Information is only information because we interpret it to be information. With no one to do that interpretation, the patterns we refer to as information are just patterns, structures, configurations, with no inherent meaning. Consequently, the physical machinations of computers are information processing only because of our interpretations of what we put into them, what they do with it, and what they produce. However, brains do their work regardless of the interpretation, so they can’t be processing information, and information processing can’t lead to consciousness.
To be fair, this brief summary of the argument may not do it justice. If you want to see the case made by someone who buys it, I recommend reading Kassan’s piece.
That said, I think the argument fails for at least two reasons.
The first is that it depends on a particularly narrow conception of information. There are numerous definitions of information out there. But for purposes of this post, we don’t need to settle on any one specific definition. We just need to discuss an implied aspect of all of them, that information must be for something.
The people making the argument are right about one thing. Information, in and of itself, is not inherently information. To be information, something must make use of it. But the assertion is that this role of making use of information can only be fulfilled by a conscious agent. No conscious agent involved, then no information. The problem is that this ignores the non-conscious systems that make use of information.
For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information. But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA’s configuration. These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA’s configuration. Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.
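The cell’s use of DNA can be sketched in code. This is only an illustrative toy, not a biochemistry model: the template strand is invented for the example, and the codon table is a tiny subset of the real standard genetic code. The point is that nothing in the sequence is “information” until a mechanism, conscious or not, responds to its configuration.

```python
# Toy sketch of non-conscious "information use": transcription, then translation.
# The DNA sequence is invented; the codon table is a small real subset of the
# standard genetic code.

# Transcription: mRNA is built complementary to the DNA template strand.
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Translation: mRNA codon -> amino acid (subset of the standard code).
CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # stop codon
}

def transcribe(dna_template: str) -> str:
    """Build an mRNA strand complementary to the DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in dna_template)

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases at a time, as a ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

template = "TACAAACCGATT"      # invented template strand
mrna = transcribe(template)    # -> "AUGUUUGGCUAA"
print(translate(mrna))         # -> ['Met', 'Phe', 'Gly']
```

Neither function “interprets” anything; each just reacts to configuration, which is exactly the cell’s situation.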
Another example is a particular type of molecule that is allowed entry through the cell’s membrane. There’s nothing about that molecule in and of itself that makes it information. But if the chemical properties of the molecule cause the cell to change its development or behavior, then we often talk about the molecule, perhaps a hormone, being a chemical “signal”. It’s the cell’s response to the molecule that makes it information.
But even in computer technology, there are often transient pieces of information that no conscious observer interprets. The device you’re reading this on likely has a MAC address which it uses to communicate on your local network. It probably contacted a DHCP server to get a dynamically assigned IP address so it can communicate on the internet. It had to contact a domain name server to get the IP address for this website. The apps on it likely all have their own internal system identifiers. None of these things are anything you likely know or think about, but they’re vital for the device to do its job. Many of the dynamically assigned items will come into and go out of existence without any conscious observer ever interpreting them. Yet it seems perverse to say that these aren’t information.
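You can surface a couple of these normally invisible identifiers yourself. A minimal sketch using Python’s standard library: `uuid.getnode()` returns the machine’s MAC address as an integer (or a random stand-in if none is available), and `socket.gethostbyname()` performs the same kind of name-to-address lookup a DNS resolver does. The specific values printed are, of course, particular to whatever machine runs it.

```python
# Peek at identifiers a device uses without any conscious interpretation.
import socket
import uuid

# The MAC address: 48 bits that mean nothing in themselves, but which the
# local network uses to route frames to this specific device.
mac = uuid.getnode()
mac_str = ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
print("MAC address:", mac_str)

# A name-to-address lookup: "localhost" is only an address because resolvers
# and network stacks respond to it. It conventionally maps to loopback.
print("localhost resolves to:", socket.gethostbyname("localhost"))
```

The MAC address and the loopback mapping do their causal work whether or not anyone ever looks at them.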
Of course, we could fall back to the etymology of “information” and insist on defining it only as something that inputs Platonic forms into a conscious mind (in-form). But then we’ve created a need to come up with a new word for the patterns, such as DNA or transient IP addresses, that have causal effects on non-conscious systems. Maybe we could call such patterns “causalation”. Which means we could talk about brains being causalation processing systems. Of course, computers would also be causalation processing systems, which just brings us right back to the original bone of contention.
And that in turn brings us to the second reason the argument fails. Every information processing system is a physical system, and can be described in purely physical terms. Consider the following description.
A system is constantly propagating energy, at small but consistent levels, through portions of its structure. The speed and direction of the energy flows are altered by aspects of the structure. But many of those structural aspects themselves are altered by the energy flow, creating a complex synergy between energy and structure. The overall dynamic is altered by energy from the environment, and alters the environment by the energy it emits. Interactions with the environment often happen through intermediate systems that modulate and moderate the inbound energy patterns to a level consistent with the central system, and magnify the causal effects of the emitted energy.
This description can pertain to both computers and central nervous systems. The energy in commercial computers is electricity, the modifiable aspects of the structure are transistor voltage states, and the intermediate systems are I/O devices such as keyboards, monitors, and printers. The energy in nervous systems is electrochemical action potentials, the aspects of modifiable structure are the synapses between neurons, and the intermediate systems are the peripheral nervous system and musculature.
(It’s worth noting that computers can also be built in other ways. For example, they can be built with mechanical switches, where the energy is mechanical force and the modifiable aspects are the opening and closing switches. A computer could, in principle, also be built with hydraulic plumbing controlling the flow of liquids. In his science fiction novel, The Three-Body Problem, Cixin Liu describes an alien computer implemented with a vast army of soldiers, with each soldier acting as a switch, raising or lowering their arms following simple rules based on what the soldiers next to them did.)
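Substrate independence is easy to demonstrate in a toy model. In the sketch below (an illustration I’m adding, not something from Liu’s novel), each “soldier” follows a single NAND rule: arms down only if both neighbors have arms up. Since NAND is functionally complete, wiring such rule-followers together computes anything a digital circuit can; here they form a half-adder, and nothing about the logic cares whether the switch is a transistor, a valve, or a person.

```python
# Toy substrate-independence demo: rule-following "soldiers" as NAND gates,
# composed into a half-adder.

def soldier_nand(a: int, b: int) -> int:
    """One rule-follower: arms up (1) unless both neighbors have arms up."""
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Sum and carry of two bits, built entirely from NAND soldiers."""
    n1 = soldier_nand(a, b)
    total = soldier_nand(soldier_nand(a, n1), soldier_nand(b, n1))  # XOR via NAND
    carry = soldier_nand(n1, n1)                                    # AND via NAND
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]}, carry={half_adder(a, b)[1]}")
# e.g. 1 + 1 -> sum=0, carry=1
```

The soldiers have no idea they’re adding; the addition exists in the causal organization, not in any interpretation by the parts.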
It’s the similarities between how these physical systems work that make it easy for neuroscientists to talk in terms of neural circuits and neural computation, and to see the brain as an information processing organ. Engaging in linguistic jiu jitsu over the definition of “information” (or “computation” as often happens in similar arguments) doesn’t change these similarities.
Not that there aren’t major differences between a commercial digital computer and an organic brain. (Although the differences between technology and biology are constantly decreasing.) The issue isn’t whether brains are computers in the narrow modern sense, but whether they are computational information processing systems.
So, am I being too dismissive of this interpretation argument? Or are there similar arguments that may make a better case? How do you define “information”?