Singularity assumptions that should be questioned

The upcoming movie, Transcendence, looks like it will be interesting, but the trailer includes common assumptions about the singularity that I’m not sure are justified.

To be sure, these assumptions are held by a lot of singularity believers.  Below I offer some reasons why they shouldn’t be taken as self-evident.

Assumption 1: There is almost infinite room to improve on human intelligence.

There could well be, but I’ve also read some studies indicating that the human brain may be at an evolutionarily optimal state given the laws of physics.  Machine intelligence may be able to go far past organic intelligence, or it may find itself faced with many of the same types of tradeoffs in processing speed, heat dissipation, energy consumption, and other factors.

A lot of this assumption is based on a projection of Moore’s law, the increasing power of computer processing chips.  However, Moore’s law is not an unlimited proposition.  It follows an S-curve: a period of rapid growth that will eventually level out, and we don’t know where on that curve we are yet.  The ability to pack more transistors onto silicon chips is nearing its end, by 2020 at the latest.  Quantum computing may give it a new lease on life, but eventually we will hit the laws of physics and reach the top of the S-curve.
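This is part of what makes the projection so hard to judge: early on, an S-curve is nearly indistinguishable from pure exponential growth.  A minimal sketch of the point, using a logistic function with arbitrary illustrative parameters (none of these numbers are claims about where computing actually sits on the curve):

```python
import math

def logistic(t, carrying_capacity, growth_rate, midpoint):
    """Logistic (S-curve) growth: looks exponential early, then levels out."""
    return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

K, r, t0 = 1.0, 0.5, 40.0  # arbitrary illustrative parameters

# Well before the midpoint, each 5-step interval multiplies the value by
# almost exactly e^(r*5) -- the same ratio a pure exponential would give.
# So data from the early portion alone can't tell you where the
# leveling-off begins.
early_ratio = logistic(10, K, r, t0) / logistic(5, K, r, t0)

# Long after the midpoint, growth has saturated at the carrying capacity.
late_value = logistic(100, K, r, t0)
```

The takeaway is just that observing decades of exponential-looking growth doesn’t, by itself, rule out being on the lower half of an S-curve.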

But, some singularity believers will say, an AI could be networked across several nodes.  A networked machine intelligence could certainly be larger than any currently existing organic intelligence, but we don’t really have a good idea of what the tradeoffs for such an intelligence might be.  Once a networked intelligence gets too large or too complicated, its mental processing might slow down, its capacity for coordinated action might become compromised, and its ability to maintain a unified self could conceivably become problematic.

All that said, I personally suspect that human minds can be improved on significantly, but not to the astronomical levels often assumed.

Consider the technology of flight: although we fairly quickly surpassed birds in velocity and altitude, the cruising speed of a common airliner today is still less than ten times that of a falcon.  We certainly have the technology to go much faster, but it’s rarely worth the cost, at least with today’s technology.

I suspect AIs will be similar: a significant but not infinite improvement, limited by trade-offs and costs.  The idea of godlike AIs causing universal transcendence may be wishful thinking.

Assumption 2:  There will be unlimited processing capacity.  

Dreams of a post-scarcity society have been around a while.  The singularity just moves them into virtual computer environments.  Like the assumption of near infinite increases in intelligence, the assumption of unlimited processing capacity may be overly optimistic.

The idea here is that we will all upload our minds into shared computer environments, and then have the capacity to do whatever we want, spawn as many copies of ourselves as we’d like, explore any simulations we’d like, etc.

The problem is that there’s only so much raw material for making hardware.  (Not to say that there isn’t a lot of it out there.)  There’s also only so much power available to fuel that hardware.  We’ll certainly have a lot more capacity than today, but I see no real evidence that it will be unlimited.

That means resource scarcity will still be an issue, which implies that economic systems for allocating those resources, competition for acquiring them, and many of the related ancient ills will all likely still be around.

Assumption 3:  Everyone will prefer living in a shared computer environment.  

Perhaps, but it’s worth thinking about the disadvantages of living in such an environment, aside from the issues of not having unlimited processing capacity.

Withdrawing from the world may leave us blind to outside threats such as natural disasters or rivals from another environment.  If resources aren’t unlimited, there’s no reason to suppose that war or criminals will go away.  For survival purposes, at least some portion of a shared environment would have to be outward looking.

We’d also be at risk of losing our individual identity in such an environment.  Once we’ve uploaded ourselves, there’s no limit on what we could change about ourselves, or what changes could be imposed.  If our survival instinct is removed, there’s nothing stopping us from making our knowledge available to the collective, and then ceasing independent execution, ceasing independent existence.  

Many people, aware of this possibility, might resist the collective environments, opting for their own hardware, their own body.  Doing so would also provide independent mobility and agency in the world, a freedom that we might dearly miss in a collective environment, particularly if survival requires keeping track of, and responding to, what’s going on in the real world.

A very strange world

None of this is to say that a post-singularity world wouldn’t be unimaginably strange, or that it might not provide solutions to many age-old problems.  Only that the laws of nature will impose some constraints on that strangeness.

Much of the thinking around the singularity borders on semi-religious conceptions of a technological rapture: the idea of an event that will reset all of the world’s problems and usher in a new utopia, usually twenty years from whenever it is being discussed.

Either that, or it borders on apocalyptic thinking, with many concerned about what AIs might do to us: that humans might find ourselves obsolete and in danger of extinction or enslavement.  I’ve already written about my views on this, but to summarize, I’m not particularly worried about it.

It would require that those AIs have something like our survival instinct, an impulse for self-preservation (along with preservation of kin) that we only have because of billions of years of evolution.  We’d have to program that instinct into them, and if we can do that, we can also program in an aversion to harming humans.

I think we should hold a healthy degree of skepticism for both utopic and apocalyptic visions of the singularity.

The future will be strange, and is impossible to predict with any accuracy.  But so was the future for medieval scholars, or for stone age foragers.  Today’s world would be largely incomprehensible to them, and to the extent that it was understandable, it would seem largely like a utopia.  Probably, if we could see it, the world of 2100 would be like that for us.

This entry was posted in Science, Science Fiction. Bookmark the permalink.

18 Responses to Singularity assumptions that should be questioned

  1. Steve Morris says:

    The music and language used in the trailer strongly suggest that this will be yet another dystopic “don’t mess with nature” style of movie. To be fair, it’s harder to write a compelling narrative that presents a positive view of AI. Too hard to expect Hollywood to try.

    Assumption 1: Improving on human intelligence. I think that we are already universal processing machines, so in essence we already have everything in place. There’s surely plenty of room to add more processing capacity however. “Almost infinite” is a meaningless concept.

    Assumption 2: Unlimited processing capacity. Some say that scarcity will be removed. I don’t think so. New resources will be created in abundance in the future (as in the past), but there will always be limits, ultimately the laws of physics. I think there is room for **a lot** more abundance before any fundamental limits are reached.

    Assumption 3: Everyone will prefer living in a computer. Nope, don’t think so. I don’t believe in mind uploading, but I do believe in mind augmentation and distributed processing power / combined physical and computational substrates / multi-body living. Flexibility is key, as always.

    Post-singularity: Seems to me that this is a misconception. An exponential acceleration of technology is a process, not an event. There’s no such thing as before or after. We are already accelerating exponentially and will continue to do so (probably).

    The future will be strange: Yes, but that has always been true, throughout history, as you said.

    AIs will enslave us: Hardly, as we will be the AIs. Imagine – if you can build a superintelligence, would you put it to work cleaning your home or managing your nuclear weapon arsenal, or would you merge with it in order to enhance your own cognitive abilities?

    Future Utopia: Please, get real. Read some history.

    • Steve Morris says:

      Just re-read this and it sounds a bit angry – sorry! I think I’m agreeing with what you said! Wasn’t meant to be critical!

      • Thanks Steve! I realized it was agreement rather than disagreement, but I’m grateful for the clarification. I agree with most of what you said.

        I am curious though about what you see as the obstacles to mind uploading, and whether you would see the same obstacles to an artificial replacement brain with the same information in it as the original.

        • Steve Morris says:

          My objection to mind uploading is related to identity. The way I see it is as follows. Suppose you can create a “mind map” and upload this to a computational substrate. Then if you do this, but leave the original biological brain unharmed, you have created a copy of the mind. “You” are still in the biological brain and have no awareness of what the “copy” is doing.

          To my primitive and intuitive way of thinking, the copy is not “me”. I realise that this may be blinkered thinking and I am open to the possibility of being wrong, but for me it doesn’t sound appealing.

          On the other hand, an augmentation of my biological brain with replacement or additional components, that retains my perception of self sounds altogether welcome. However I am incredibly squeamish and would hate any kind of surgery. So any kind of Matrix-style implants/hardware is out of the question. I’m hoping for some non-invasive augmentation, perhaps through nanotech.

          If this is possible, then I envisage a wireless connection to a cloud-based system, which could enhance our human intelligence by perhaps orders of magnitude and enable distributed computing and perhaps even multiple synthetic or biological real-world bodies. At this point, the “self” would be a complex distributed entity, but perhaps no different in principle to the way the left and right brains communicate and work as a team to create the illusion of a whole.

          I have rambled more on this topic here: http://www.singularityweblog.com/mind-uploading-and-identity

          • Wow, that’s an excellent article! I have a couple of questions: 1. If you were at the end of life, would you feel differently about uploading a copy of yourself? Even if the uploading had to be destructive? 2. Along the lines of your last comment paragraph, if the memories of your self in the computer could be synched with your organic self (assuming it could be done in some way that isn’t too invasive), so that your two selves “remembered” each other’s experiences, would that make you feel better about the arrangement? Would it make the eventual demise of your organic self more tolerable?

            I totally agree that watching my uploaded but disconnected self have all the fun while I languish in my meat body isn’t appealing. But the idea of having a non-executing backup in place (periodically updated) would appeal to me. Having a copy of me live on seems preferable to nothing living on past organic death.

          • Steve Morris says:

            Thanks. If I knew that I was about to die, then I would probably choose uploading. After all, even if I am dead, it’s nice to know that a copy of myself is still in the world. That’s pure vanity though.

            If the “self” can become a greater entity distributed in multiple systems, then I imagine that one would “grow” into the enlarged mental space and would then come to terms with a part of it (the original biological part) being “switched off”. In fact, a distributed system would have to be fault tolerant and cope with parts going offline and then reconnecting.

            Going further, it would seem very natural that one “self” would merge with other selves partially or totally for various purposes. It’s not easy to imagine that kind of mental fluidity, but it seems to be a natural thing for such a distributed system to do. I suspect that is our ultimate destiny – not a merger into one single mind, but a fluid mingling of many minds.

  2. Scout Paget says:

    A lot of your assumptions are based on our current known technologies, scientific understanding, and our models for energy transfer. When considering those elements it is easy to be skeptical of the singularity occurring in this century – particularly within the next few decades as Kurzweil & Vinge predict.

    But, if one considers the newer findings in the areas pointed out, it becomes a little easier to imagine such an event. The very idea of silicon chips and man-made housing for the computer is being challenged by new biological storage devices being discovered, examined, then cloned and engineered to be more powerful. There’s a growing movement of scientists who aren’t just settling with the current understanding of physics and they seek the point where it all breaks down, searching for theories that better explain physical phenomena. In other words, humans are still in the infant stages of technology and science.

    When these future developments are put into the equation, another world arises completely. It becomes quite possible that the singularity could be the natural step for human/earthly evolution. After all, is not a single strand of DNA one of the most beautiful organic computers we’ve ever laid our eyes on? Everything in the universe is information – to base its evolution on the simple technologies of today is small thinking. One could say that it is humanly arrogant.

    The tail end of Stuart Armstrong’s calculated prediction of the singularity is 2112. This appears by far more realistic than the dates predicted by Kurzweil & Vinge. Whether or not that single moment occurs will remain to be seen. But one thing seems certain, the future of information, knowledge, and intelligence, does not appear to be limited to human agency alone.

    • I understand where you’re coming from. I’m not really making assumptions though, just calling into question some of the common ones. And my points come from scientific laws, not current technology.

      Our understanding of physical laws is certainly incomplete. We will undoubtedly discover new ones, and refinements to existing ones. However, any new understandings will still have to deal with the same empirical facts that established the laws as we currently understand them.

      It’s conceivable that we may find ways to ignore or transcend the laws as we currently understand them. But trying to make predictions based on scientific understandings that we don’t have yet is not itself scientific, or even philosophically rigorous. Once we cross that line, we’re free to predict anything we desire. (And many writers do.) At that point, we’re effectively just fantasizing. I enjoy fantasy as much as anyone, in fiction.

      I think a good guide to know if we will ever be able to do something is, does it happen anywhere in nature already? If it does, then it should be possible, at least in principle, for us to do it someday. If it doesn’t happen anywhere in nature, then our ability to do it would be extraordinary. Many will label me a curmudgeon for it, but I don’t think we should assume such an extraordinary assertion without extraordinary evidence.

      • Scout Paget says:

        I think that you’re missing my point. It is current scientific and biological research that I’m referring to when I allude to the future of those fields and their possible evolution.

        If all possible understanding of phenomena were based on popularly held beliefs and contemporary development, humans would still think that imaginary gods above the Earth determine the outcomes of all human endeavors. Fortunately, there have been those highly intelligent and creative folks throughout history who have dared to look beyond where we are and how we commonly understand things, to discover amazing, otherwise secret, essentials of nature.

        I think you should remember that once there was a time, not too long ago, when a human craft visiting the planet Mars was seen as fantasy and ‘fiction.’ Today, humans have rovers traversing the ‘red planet’ and a planned peopled expedition of exploration to launch within the next decade.

        Everything that we enjoy in this ‘modern’ world came from ‘extraordinary assertions’ based on ‘extraordinary evidence’ that was discovered by extraordinary people who thought outside of the ordinary box.

        It’s understandable that people of this age might think that what are defined as nature’s ‘laws’ today are somehow immutable in future discoveries. After all, we are oh so ‘advanced.’ Still, there remain the few visionaries who understand that humans have barely scratched the surface of understanding – and defining – the universe and the world in which we live. And I might say, that vision requires an incredibly ‘rigorous’ kind of philosophy.

        • I think you and I are much more on the same page than this conversation makes it seem. I totally agree that we will learn new things and I have nothing but admiration for those on the forefront of that. I am very much a fan of scientific exploration.

          But, my point was that when contemplating what that new knowledge might be, we don’t get to just ignore the existing laws of physics. Human craft visiting Mars are still subject to Newton’s laws of motion (discovered in the 17th century), even though those laws are now known to be an approximation of Einstein’s general relativity (early 20th century). Getting a craft to Mars while ignoring those laws would have been impossible. Indeed, a German visionary named Hohmann in the 1920s laid out the principles of getting a craft to Mars, using those laws. His work would have been useless if he had ignored the laws, no matter how visionary he otherwise might have seemed.
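          Hohmann’s insight falls straight out of Newtonian orbital mechanics. A rough illustrative sketch of the two burns of a Hohmann transfer (heliocentric frame only, circular coplanar orbits assumed, and ignoring Earth’s and Mars’s own gravity; the constants are standard published values):

          ```python
          import math

          MU_SUN = 1.32712e20   # Sun's gravitational parameter, m^3/s^2
          R_EARTH = 1.496e11    # mean Earth orbital radius, m (1 AU)
          R_MARS = 2.279e11     # mean Mars orbital radius, m

          def hohmann_delta_vs(mu, r1, r2):
              """Delta-v of the two burns of a Hohmann transfer between
              circular coplanar orbits of radii r1 (inner) and r2 (outer)."""
              # Burn 1: accelerate off the inner circular orbit onto the
              # transfer ellipse (perihelion at r1, aphelion at r2).
              dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
              # Burn 2: accelerate again at aphelion to circularize at r2.
              dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
              return dv1, dv2

          dv1, dv2 = hohmann_delta_vs(MU_SUN, R_EARTH, R_MARS)
          # Roughly 2.9 km/s and 2.6 km/s respectively.
          ```

          The point being: these numbers come from the same 17th-century laws, and any Mars mission plan that contradicted them would simply fail.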

          Someday, someone (and it might well be an AI) will demonstrate that Einstein’s laws are an approximation of yet more refined laws, but both Einstein’s and Newton’s laws will remain as useful approximations of those new laws that can’t be ignored except (likely) in extreme conditions. So, making predictions that violate those heavily verified laws isn’t visionary, unless there is good evidence that those laws are wrong.

          You can’t really think outside the box until you understand what the box is about 🙂

          • Scout Paget says:

            It seems that you’re under the impression that I’ve been suggesting that the future of technology and information requires throwing the baby out with the bathwater – that a phenomenon like the singularity requires defying today’s known scientific principles. That is not the case. I’ve merely suggested that science and technological evolution will not be limited by those principles. That creativity and discovery will move information evolution beyond what we understand our limits to be today.

          • I guess the distinction between “defying today’s known scientific principles” and “will not be limited by those principles” escapes me.

            I don’t doubt for a moment that we will discover things that will give us new capabilities, capabilities that we can’t imagine right now. I’m open to the possibility that those new capabilities might allow us to transcend known physical laws. But I don’t take it as an article of faith. And I’m skeptical of predictions made about it, particularly when those predictions seem to tell us what we want to hear.

  3. Pingback: TRANSHUMAN – Do you want to live forever? | Machines Like Us | SelfAwarePatterns

  4. Steve Morris says:

    I believe that the greatest barrier to progress is not physical laws but human imagination. In my view, if you can imagine something, you are more than half way to creating it.

    We can already imagine a lot – interstellar travel, machine intelligence, indefinite lifespans – and I think we are going to achieve all these a lot sooner than many think. But there are so many things that nobody has yet imagined. That’s a thought that makes me very excited indeed.

  5. Pingback: Why You Should Upload Yourself to a Supercomputer | SelfAwarePatterns
