The upcoming movie, Transcendence, looks like it will be interesting, but the trailer includes common assumptions about the singularity that I’m not sure are justified.
To be sure, these assumptions are held by a lot of singularity believers. Below I offer some reasons why they shouldn’t be taken as self-evident.
Assumption 1: There is almost infinite room to improve on human intelligence.
There could well be, but I’ve also read studies indicating that the human brain may be near an evolutionary optimum given the laws of physics. Machine intelligence may be able to go far past organic intelligence, or it may face many of the same tradeoffs in processing speed, heat dissipation, energy consumption, and other factors.
A lot of this assumption rests on a projection of Moore’s law, the steadily increasing power of computer processing chips. However, Moore’s law is not an unlimited proposition. It’s an S-curve: a period of rapid growth that eventually levels out, and we don’t know where on that curve we are yet. The ability to pack more transistors onto silicon chips is nearing its end, by 2020 at the latest. Quantum computing may give it a new lease on life, but eventually we will hit the laws of physics and reach the top of the S-curve.
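Part of why it’s so hard to tell where we are on that curve: the early portion of an S-curve is nearly indistinguishable from pure exponential growth. A toy sketch of a logistic (S-shaped) curve, with made-up parameters chosen purely for illustration, makes the point:

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic (S-curve) growth: looks exponential early, then levels off
    as it approaches the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Far below the midpoint, each step multiplies the value by roughly the
# same factor (here about e, since rate=1) -- just like an exponential.
early = [logistic(t, ceiling=1e6, rate=1.0, midpoint=20) for t in (0, 1, 2, 3)]
ratios = [b / a for a, b in zip(early, early[1:])]

# Well past the midpoint, growth has essentially stalled at the ceiling.
late = [logistic(t, ceiling=1e6, rate=1.0, midpoint=20) for t in (40, 41, 42)]
```

An observer sampling only the early points would see steady multiplicative growth and have no way to know a ceiling exists, which is exactly the position we’re in with Moore’s law.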
But, some singularity believers will say, an AI could be networked across several nodes. A networked machine intelligence could certainly be larger than any currently existing organic intelligence, but we don’t really have a good idea of what the tradeoffs for such an intelligence might be. It might be that once a networked intelligence gets too large and too complicated, its processing might slow down, its capacity for coordinated action might be compromised, and maintaining a unified self could become problematic.
All that said, I personally suspect that human minds can be improved on significantly, but not to the astronomical levels often assumed.
Consider the technology of flight: although we fairly quickly surpassed birds in velocity and altitude, the cruising speed of a common airliner today is still less than ten times that of a falcon. Certainly we have the technology to go much faster, but it’s rarely worth the cost, at least with today’s technology.
I suspect AIs will be similar: a significant but not infinite improvement, limited by trade-offs and costs. The idea of godlike AIs causing universal transcendence may be wishful thinking.
Assumption 2: There will be unlimited processing capacity.
Dreams of a post-scarcity society have been around for a while; the singularity just moves them into virtual computer environments. Like the assumption of near-infinite increases in intelligence, the assumption of unlimited processing capacity may be overly optimistic.
The idea here is that we will all upload our minds into shared computer environments, and then have the capacity to do whatever we want, spawn as many copies of ourselves as we’d like, explore any simulations we’d like, etc.
The problem is that there’s only so much raw material for making hardware. (Not to say that there isn’t a lot of it out there.) There’s also only so much power available to fuel that hardware. We’ll certainly have a lot more capacity than today, but I see no real evidence that it will be unlimited.
That means resource scarcity will still be an issue, which in turn implies that economic systems for allocating those resources, competition for acquiring them, and many of the related ancient ills will all likely still be around.
Assumption 3: Everyone will prefer living in a shared computer environment.
Perhaps, but it’s worth thinking about the disadvantages of living in such an environment, aside from the issues of not having unlimited processing capacity.
Withdrawing from the world may leave us blind to outside threats such as natural disasters or rivals from another environment. If resources aren’t unlimited, there’s no reason to suppose that war or crime will go away. For survival purposes, at least some portion of a shared environment would have to remain outward looking.
We’d also be at risk of losing our individual identity in such an environment. Once we’ve uploaded ourselves, there’s no limit on what we could change about ourselves, or what changes could be imposed. If our survival instinct is removed, there’s nothing stopping us from making our knowledge available to the collective, and then ceasing independent execution, ceasing independent existence.
Many people, aware of this possibility, might resist the collective environments, opting for their own hardware, their own body. Doing so would also provide independent mobility and agency in the world, a freedom that we might dearly miss in a collective environment, particularly if survival requires keeping track of, and responding to, what’s going on in the real world.
A very strange world
None of this is to say that a post-singularity world wouldn’t be unimaginably strange or that it might not provide solutions to many age old problems. Only that the laws of nature will provide some constraints on that strangeness.
Much of the thinking around the singularity borders on a semi-religious conception of a technological rapture: an event that will reset all of the world’s problems and usher in a new utopia, usually twenty years from whenever it is being discussed.
Either that, or it borders on apocalyptic thinking, with many concerned about what AIs might do to us: that humans might find ourselves obsolete and in danger of extinction or enslavement. I’ve already written about my views on this, but to summarize, I’m not particularly worried about it.
It would require that those AIs have something like our survival instinct, an impulse for self-preservation and preservation of kin that we have only because of billions of years of evolution. We’d have to program that instinct into them, and if we can do that, we can also program in an aversion to harming humans.
I think we should hold a healthy degree of skepticism for both utopic and apocalyptic visions of the singularity.
The future will be strange, and is impossible to predict with any accuracy. But so was the future for medieval scholars, or for stone age foragers. Today’s world would be largely incomprehensible to them, and to the extent that it was understandable, it would seem largely like a utopia. Probably, if we could see it, the world of 2100 would be like that for us.