The dangers of artificial companionship

Lux Alptraum at Undark argues against “Our Irrational Fear of Sexbots”:

When most people envision a world where human partners are abandoned in favor of robots, the robots they picture tend to be reasonably good approximations of flesh-and-blood humans. The sexbots of “Westworld” are effectively just humans who can be programmed and controlled by the park’s operators.

…What most of us want isn’t an intimate relationship with a sentient Roomba, but a relationship with a being who closely approximates all the good parts of sex and love with a human — minus the messiness that comes with, well, sex and love with a human. Yet a robot capable not just of passing a Turing test but of feeling like a human partner in the most intimate of settings isn’t likely to be built any time soon. True AI is still a long ways off. Even if we assume that sexbot lovers will feel content with Alexa-level conversations, robots that not only look and feel real but also autonomously move with the grace and dexterity of a human aren’t within the realm of current, or near future, tech.

I have to admit I didn’t know that angst about sexbots was a thing, but given the success and acclaim of Westworld, not to mention other AI movies like Ex Machina and Her, it seems kind of inevitable.  I do think Alptraum is right that realistic sex robots are not anything we’re going to have to worry about in the next few years.  Anything feasible in the short term, as Alptraum mentions, remains firmly in the Uncanny Valley, the space where something that resembles humanity is just close enough to be creepy but not convincing.

That said, I do think the long term concern about sexbots is valid.  They do have the potential to disrupt normal human relationships.  But I’m going to broaden it to a long term concern about artificial companionship overall, not just involving sex, but friendship and social interactions of any kind.  It is worth noting the positive aspects of this for people needing caretakers such as the elderly or infirm, or for those who are just lonely.  But there is a danger.

Imagine a world where you are surrounded by entities that take care of you, do tasks for you, keep you company, laugh at all your jokes, pay attention to you whenever you want attention and go away when you don’t want it, and just all around make you the center of their world.  It seems like it would be extremely easy to fall into a routine where these entities, these artificial humans, become your entire sphere of interaction.

Now imagine how jarring it might be when you encounter an actual other human being, one with their own point of view, their own unfunny jokes, their own ego, their own selfish desires, and basically their own social agenda.  Is it that hard to imagine that many humans might prefer being with the first group?

Science fiction has looked at this many times.  An early example is Isaac Asimov’s The Naked Sun, about a planet where humans are outnumbered 10,000 to one by their servant androids, where people live alone on vast estates with their androids, and where actual face to face interaction between humans is so rare that it has become dirty and taboo.  Another is Charles Stross’ Saturn’s Children, where humanity’s social and reproductive urges are so catered to by robots that the humans end up going extinct, leaving behind a robot civilization that worships the memory of “the makers”.

Now, I doubt that humanity would ever go completely extinct because of this.  For one thing, we’re talking about it, as the Undark article demonstrates, which means that it’s entering our public consciousness as a concern, increasing the chances that we will eventually take steps to avoid that scenario.  And I suspect there would always be a portion of humanity that values the old ways enough to reject sexbots and other forms of artificial companionship.

But it’s still easy to see it leading to the overall human population crashing to some small portion of what it is today.  A civilization where real humans are vastly outnumbered by artificially engineered entities seems like a plausible scenario.  And that’s before considering that the line between evolved humans and engineered ones will likely be blurred as genetic manipulation and other forms of biological engineering eventually merge with machine engineering, leading to humans first being enhanced, then later copied and perpetuated.

So, there is a danger.  I don’t think the solution is to react as conservatives currently are, with talk of prohibitions.  A world with a much smaller human population isn’t necessarily a bad thing.  (Although it’s interesting to think about how this could lead to artificial intelligence being taboo as imagined by Frank Herbert in his Dune universe.)  But we should be aware of how artificial humans, when we get to the point that we can create them, might change us.


What positions do you hold that are not popular?

Rebecca Brown has an article at Aeon on how philosophy can make the previously unthinkable thinkable.  She starts with a discussion of the Overton window:

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

This reminds me of someone on Twitter recently asking what positions people held that were unpopular.  Here are mine (slightly expanded from the response tweet):

  1. The universe is ultimately meaningless.  Whatever meaning we find in this life, we have to provide, both to ourselves and to each other.
  2. There is no objective morality.  Ultimately what a society calls “moral” amounts to what the majority of a given population decides is allowable and what is not.  Innate instincts do provide some constraints on this, but the variances they allow are wider than just about anyone is comfortable with.
  3. Whether a given system is conscious is not a fact, but an interpretation, depending on what definition of “consciousness” we’re currently using.  Consciousness exists only relative to other conscious entities.
  4. We don’t have contra-causal free will, but social responsibility remains a coherent and useful concept.
  5. The mind is a physical process and system that can be understood, and someday enhanced and copied.
  6. Enhancement of ourselves, either with technological add-ons or genetic therapy, should be allowed, particularly when it will alleviate suffering.
  7. Politics is about inclusive self interest.  The political philosophies people choose are generally stances that benefit them, their family and friends, or people like them.  If we could admit this, compromising to get things done would be easier.

Those are mine.  What about you?  Do you have positions that are not currently popular, that may lie outside of the current Overton window?


The implications of embodied cognition

Sean Carroll on his podcast interviewed Lisa Aziz-Zadeh on embodied cognition:

Brains are important things; they’re where thinking happens. Or are they? The theory of “embodied cognition” posits that it’s better to think of thinking as something that takes place in the body as a whole, not just in the cells of the brain. In some sense this is trivially true; our brains interact with the rest of our bodies, taking in signals and giving back instructions. But it seems bold to situate important elements of cognition itself in the actual non-brain parts of the body. Lisa Aziz-Zadeh is a psychologist and neuroscientist who uses imaging technologies to study how different parts of the brain and body are involved in different cognitive tasks.

As Carroll notes in his description, the idea of embodied cognition could almost be considered trivially true.  The body is the brain’s chief object of interest; the brain is hardwired to monitor and control it.  Cognition in a brain is relentlessly oriented toward this relationship, to the extent that when we think about abstract things, we typically do so in metaphors using sensory or action experience, experiences of a primate body.

A recent study showed that our memories and imagination are actually organized by internal location maps that primordially evolved for tracking physical locations.  In light of the brain’s body focus and orientation, this makes complete sense.  (I often think of various web sites, including this one, as occupying locations in an overall physical space, which completely fits with these findings.)

It’s fair to say that the body is what gives the information processing that happens in a brain its meaning.  That said, I do think some of the embodied cognition advocates get a little carried away, asserting that thinking is impossible without a body.

It may be that a human consciousness can’t develop without a body.  If we could somehow grow a human brain without a body, it’s hard to imagine what kind of consciousness might be able to form.  It seems like it would be an utterly desolate one by our standards.  But once it has developed with a body, I think we have plenty of evidence that the human mind is far more resilient than many people assume.

Patients with their spinal cord severed at the neck are cut off from most of their body.  Without the interoceptive feedback, their emotions are reportedly less intense than healthy people’s, but they retain their mind and consciousness.  Likewise, someone can be blind, deaf, lose their sense of smell, or apparently even have their vagus nerve cut, and still be conscious (albeit perhaps on life support).

It seems like the only essential component that must be present for a mind is a working brain, and not even the entire brain.  Someone can have their cerebellum destroyed and remain mentally complete.  (They’ll be clumsy, but their mind will be intact.)  The necessary and sufficient components appear to be the brainstem and overall cerebrum.  (We can lose small parts of the cortex and still retain most of our awareness, although each loss in these regions comes with a cost to our mental abilities.)

Embodied cognition is also sometimes invoked to make the case that mind uploading is impossible, even in principle.  I think it does show that a copied human mind would need a body, even if a virtual one.  And it definitely further illuminates just how difficult such an endeavor would be.  But “impossible” is a very strong word, and I don’t think this line of reasoning really establishes it.

Unless of course I’m missing something?


Dark energy and repulsive gravity

Over the weekend, Sean Carroll put up a blog post to address common misconceptions about cosmology.  I understood most of his points, but was confused when I saw this one:

Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.

Carroll was referring to the accelerating expansion of the universe.  But gravity causing the acceleration, instead of dark energy?  I asked in a comment, along with at least one other commenter, how this could be so.  Carroll was kind enough to respond to us:

Gravity causes the universe to accelerate because gravity is not always attractive. Roughly speaking, the “source of gravity” is the energy density of a fluid plus three times the pressure of that fluid. Ordinary substances have positive energy and pressure, so gravity attracts. But vacuum energy has negative pressure, equal in size but opposite in sign to its energy. So the net effect is to push things apart.

I had always been under the impression that dark energy was simply the unknown force behind the accelerating expansion, a force I understood to be in opposition to gravity.  However, it appears that dark energy actually affects gravity by causing it, on cosmological scales, to be repulsive, pushing distant parts of the universe away from each other.

The force behind this appears to be negative pressure.  Pressure, it turns out, is a source of gravity.  Brian Greene in his book, The Fabric of the Cosmos, explains that pressure, in the sense of outward pushing, like what you might find with a coiled spring, is a form of energy, and energy generates gravity.  Negative pressure, such as the tension in a stretched rubber band that wants to contract, works in the opposite direction: it makes a negative contribution to gravity, and when it dominates, the net effect is repulsion.
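
For those who want to see where the sign flip comes from, the standard acceleration equation from textbook cosmology captures it (this is my rough sketch of the usual Friedmann result, not something quoted from Carroll):

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
\]

Here a is the scale factor of the universe, ρ the density, and p the pressure.  For ordinary matter both are positive, so the right hand side is negative and the expansion decelerates.  For vacuum energy, p = −ρc², so the term in parentheses becomes −2ρ, the sign flips, and the expansion accelerates.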

Albert Einstein understood this when he first formulated his Cosmological Constant to explain why gravity didn’t cause the universe to collapse.  After Edwin Hubble’s discovery that the universe was in fact expanding, Einstein would regard the Cosmological Constant as his greatest blunder.  It’s therefore ironic that several decades later it became useful again with the discovery that the expansion of the universe was actually accelerating.

So gravity can be repulsive, and dark energy, an energy apparently permeating all of space, due to its negative pressure, brings out this repulsive nature.  Many of you who are more knowledgeable about physics no doubt already understood this, but it was a major revelation to me.

I briefly wondered if this might be a way to achieve the anti-gravity capabilities that often show up in science fiction.  But after giving it some thought, no, it wouldn’t.

The problem is that most of what generates the gravity attracting, say, a flying car to the Earth is the Earth’s overall mass.  To overcome this with repulsive gravity, the car would have to produce so much negative pressure that its own repulsive gravity exceeded the Earth’s attractive gravity.  Such a force would violently repel everything around it, push the Earth out of its orbit, and probably cause a host of other catastrophes.  Not exactly a practical solution.

Still, this is a fascinating effect and I learned something new!


China will have the world’s largest economy in 2020

At least, according to a report by Standard Chartered Bank, as covered by Big Think:

  • The Standard Chartered Bank, a British multinational banking and financial services company, recently issued a report to clients outlining projections about the world economy up until 2030.

  • The report predicts Asian economies will grow significantly in the next decade, taking seven of the top 10 spots on the list of the world’s biggest economies by 2030.

  • However, the researchers formed their predictions by measuring GDP at purchasing power parity, which is an approach that not all economists would use in these kinds of projections.

The Big Think article discusses the last point, that according to exchange rates rather than purchasing power parity, the US will remain the largest economy for a few more years.  It also makes the point that the total size of an economy is different from its GDP per capita, the income for the average person in that economy, with China at $18,000 and the US at $63,000.
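
The back-of-envelope arithmetic shows why the two measures point in different directions.  Here’s a quick sketch in Python (the per capita figures are from the article; the population numbers are my own rough 2019 approximations):

    # Rough sketch: total economy size = GDP per capita x population.
    # Per capita figures (PPP for China) from the article; populations
    # are approximate 2019 values.
    china_per_capita = 18_000
    us_per_capita = 63_000
    china_population = 1_400_000_000   # ~1.4 billion (approximate)
    us_population = 330_000_000        # ~330 million (approximate)

    print(f"China: ~${china_per_capita * china_population / 1e12:.1f} trillion")
    print(f"US:    ~${us_per_capita * us_population / 1e12:.1f} trillion")
    # China: ~$25.2 trillion
    # US:    ~$20.8 trillion

A much lower average income, multiplied across more than four times the population, still produces a larger total, at least on a purchasing power basis.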

However, I think this misses the point.  Historically, total economic size equaled economic power, and economic power equaled political and military power.  The ascent of China, and Asia overall, will eventually change the political and cultural orientation of the world.  Deft maneuvering by the US on the international scene might delay this for a while (although that’s decidedly not what we’re getting with our current dumpster fire of an administration) but the long term writing appears to be on the wall.

The world seems primed to be a very different place in coming decades.


A qualified recommendation: Consciousness Demystified

A couple of years ago I did a series of posts inspired by Todd Feinberg and Jon Mallatt’s excellent The Ancient Origins of Consciousness, a book on the evolution of animal consciousness.  That book, which built somewhat on what I had read in Antonio Damasio’s Self Comes to Mind, was a pivotal point in my exploration of consciousness science.  Feinberg and Mallatt shook me out of my human centered understanding of consciousness, one that was largely focused on various forms of metacognition.

They’ve written a new book, Consciousness Demystified.  Unlike their first book, this new one is much more approachable for general readers, although it covers the same basic topics, albeit updated with some new concepts that have come along since the last book.

One of the things Feinberg and Mallatt did that I thought was useful was breaking up the overall concept of consciousness into various types: exteroceptive consciousness, interoceptive consciousness, and affective consciousness.

Exteroceptive consciousness is awareness of the outside world, image maps, models built on information from distance senses such as sight, hearing, and smell.  Interoceptive consciousness is the internal awareness of a body, how the stomach feels, the lungs, or muscles.  Touch and proprioception often sit on the boundary between these categories.  In this book, Feinberg and Mallatt group these perceptions under the phrase “image based consciousness”.

Image based consciousness is interesting because the image maps, the neural firing patterns in the early sensory regions of the brain, are topographically or isomorphically mapped to the surface of the sense organ.  So the pattern in which the photoreceptors on the retina are activated is preserved in the bundle of axons that project up the optic nerve to the thalamus and then to the visual cortex.  A similar relationship exists for touch, where each body part ends up being mapped to a particular region in the somatosensory cortex.

But image based consciousness, perception, is more than just these initial firing patterns.  It includes the patterns of neurons activated in later neural layers, layers that map associations, where a particular pattern gets mapped to a concept.  Eventually these layers become integrated across the senses into multi-modal perceptions, such as a piece of food, or a predator.

The third category is affective consciousness, essentially emotional and other valence based feelings.  Unlike image based consciousness, affective consciousness is not mapped to any sense organs.  Affects tend to be global states.  For example, you don’t feel sad in your foot, you just feel sad.  Another name for affective consciousness is sentience.

Many consider affective consciousness, sentience, the ability to feel, to be consciousness.  But in principle there’s no reason that an organism can’t have image based consciousness with only reflexive reactions to the contents of that consciousness, to essentially only have perception paired with unthinking action.

The authors talk about criteria that can be used to determine whether a particular animal has affective consciousness:

Behavioral criteria showing an animal has affective consciousness (likes and dislikes)

  1. Global operant conditioning (involving whole body and learning brand-new behaviors)
  2. Behavioral trade-offs, value-based cost-benefit decisions
  3. Frustration behavior
  4. Self-delivery of pain relievers or rewards
  5. Approach to reinforcing drugs or conditioned place preference

Feinberg, Todd E. Consciousness Demystified. The MIT Press. Kindle Edition.

(As I’ve discussed in other posts, I think affect awareness is closely associated with imaginative simulations, which, if you think about it, are necessary to meet all of these criteria, except possibly 3.)

One omission a consciousness aficionado may notice here is self reflection, introspective self awareness, metacognition.  Feinberg and Mallatt explicitly exclude this from their scope.  Their focus is on primary consciousness, also known as sensory consciousness, which could be equated with phenomenal consciousness.  (Although this last association is controversial.)

Most of their discussion is focused on vertebrates, but the authors do spend time exploring the possibility of invertebrate consciousness.  As they did in their first book, they express reservations about the tiny brains of insects, but on balance conclude that many arthropods, as well as cephalopods (octopuses, etc.), are conscious to one degree or another.  Given the early divergence of these evolutionary lines, consciousness appears to be an example of convergent evolution.

In chapters on the evolution of consciousness, Feinberg and Mallatt spend time discussing the evolution of reflex arcs, then the gradual accumulation of predictive functionality into image based and affective consciousness.  As in the earlier book, they see this happening during the Cambrian Explosion, making consciousness very ancient.  They finish up with what they see as the adaptive values of consciousness:

Adaptive advantages of consciousness

  • It efficiently organizes much sensory input into a set of diverse qualia for action choice. As it organizes them, it resolves conflicts among the diverse inputs.
  • Its unified simulation of the complex environment directs behavior in three-dimensional space.
  • Its importance ranking of sensed stimuli, by assigned affects, makes decisions easier.
  • It allows flexible behavior. It allows much and flexible learning.
  • It predicts the near future, allowing error correction.
  • It deals well with new situations.

Feinberg, Todd E. Consciousness Demystified. The MIT Press. Kindle Edition.

They finish up with a discussion of the hard problem, introducing two terms: auto-ontological irreducibility and allo-ontological irreducibility.  The first refers to the fact that the brain cannot sense its own operations: it has no sensory receptors aimed at its own tissue, and we have no introspective access to its lower level processing, which means that we can never intuitively look at brain operations and feel like they reflect our subjective states.  The second refers to the fact that an outside observer can never access the subjective state of a system, if it has one.  Together these create an uncrossable subjective / objective divide, although understanding why the divide exists can drain the mystery from it.

My recommendation for this book is qualified.  If you didn’t read their earlier technical book, then this more approachable version may well be worth your time, particularly if the technical nature of the early book was what made you avoid it.  That said, if you’re not comfortable looking at anatomical brain diagrams, this still may not be your cup of tea.

But if you did read that earlier book, I’m not sure this new one has enough to warrant the time and money.  It does contain some concepts that came up in the last few years, as well as descriptions of new experiments and research, but you have to be a serious brain geek like me to make it worth it.

Finally, I can’t resist mapping the categories Feinberg and Mallatt discuss into the hierarchy of conscious capabilities I often use to discuss this stuff.

  1. Reflex arcs
  2. Perception (exteroceptive and interoceptive image based awareness)
  3. Attention
  4. Imagination with affect awareness, enabling the abilities to meet the criteria above for affect consciousness, sentience
  5. Self reflection, metacognition

This hierarchy was, in many ways, inspired by Feinberg and Mallatt’s earlier book.


Is consciousness a thing or a process? Yes.

I came across this tweet by Amanda Gefter:

William James, the founder of American psychology, was an illusionist?  I only read the opening portions of the essay, but it appears so.  However, even in 1904, illusionism, the belief that consciousness isn’t what it seems, was a very nuanced thing:

To deny plumply that ‘consciousness’ exists seems so absurd on the face of it — for undeniably ‘thoughts’ do exist — that I fear some readers will follow me no farther. Let me then immediately explain that I mean only to deny that the word stands for an entity, but to insist most emphatically that it does stand for a function. There is, I mean, no aboriginal stuff or quality of being, contrasted with that of which material objects are made, out of which our thoughts of them are made; but there is a function in experience which thoughts perform, and for the performance of which this quality of being is invoked. That function is knowing.

So the assertion is not that consciousness doesn’t exist at all, but that it doesn’t exist as an entity, a corporeal thing.  It is best thought of as a function, a process.  There is no ghost in the machine, not even a 100% naturalistic version, just the machine itself and what it does.

This view seems to rest on a distinction between things and processes, between entities and functions.  But is this a coherent distinction?  It often is for various purposes, but when we’re talking about the ultimate ontology of something, it seems like we have to be a bit more careful.  And that care requires acknowledging that every thing ultimately reduces to a process.

In this case, the machine itself, the nervous system, the neurons, synapses, glia, are themselves processes in action.  They can be reduced to the activity of proteins and other biological mechanisms.  Constructs like proteins are actually molecular chemistry in motion.  Molecules are atoms sharing and exchanging electrons.  Atoms are subatomic particles exchanging photons, gluons, and other bosons.

Even elementary particles like quarks and electrons are basically excitations of quantum fields, in other words, processes.  I guess we could stop at space, time, and quantum fields and say those are the things, but some physicists even wonder whether time itself might be emergent.  Ultimately, all things may be emergent from underlying processes.  Reality may be structure and relations all the way down.

Now, my own view of consciousness is relentlessly functional, so my ontology is similar to James’.  However, I’m uneasy with simply saying “consciousness doesn’t exist”.  Consider that the operating system of the device you’re using to read this, whether it be MS Windows, Linux, iOS, or whatever, is inherently what your device does.  It’s a function.

Yet despite this, we still often talk about software as a thing in and of itself.  It is a construction, one requiring armies of programmers to build.  Technology companies view it as a costly asset, an investment.  At the end of the day, it remains a function, what our machines do, but we find it productive to discuss it as a thing.

There is no scientific evidence for any ghost in the machine.  Neuroscience finds only the process, the function.  It’s important to understand that.  (Granted many people haven’t come around to that point yet.)  But once that’s understood and acknowledged, there is validity in discussing it as a thing in and of itself, since all things emerge from processes.

Unless of course I’m missing something?


Higher order theories of consciousness and metacognition

Some of you know, from various conversations, that over the last year or so I’ve flirted with the idea that consciousness is metacognition, although I’ve gradually backed away from it.  In humans, we typically define mental activity that we can introspect to be conscious and anything else to be unconscious.  But I’m swayed by the argument that mental activity accessible to introspection, but that we never get around to actually introspecting, is nevertheless conscious activity.

I had thought the idea of consciousness being metacognition was essentially what HOT (higher order theories) of consciousness were all about, and so my backing away from the metacognition idea seemed to entail backing away from HOT.  However, a recent paper written to clear up common misconceptions about HOT points out that this is mistaken (page 5).

Just to review: metacognition is cognition about cognition, thinking about thinking, awareness of our own awareness, etc.  It’s essentially introspection and is necessary for introspective self awareness.

HOT, on the other hand, posits that there are two types of mental representations. There are simple representations about the external world, such as the neural pattern that forms in the visual cortex based on signals from the retina.  This would be a first order representation.  First order representations are often associated with early sensory processing regions and are not themselves sufficient for us to be conscious of them.

Then there are representations about these first order representations.  These are second order, or higher order representations, and are often associated with the prefrontal cortex, the executive center of the brain.

Under HOT, the contents of consciousness are these higher order representations.  The higher order representation is us being aware, conscious, of the first order representation.  (To be conscious of the higher order representation itself requires another higher order representation.)  Our sense of inner awareness comes from these representations of the representations.
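
Purely as a toy illustration of the structure, not a claim about how brains implement it, the relationship is just nesting: a higher order representation is one whose content is another representation.  A minimal sketch (the names here are my own made-up ones):

    from dataclasses import dataclass

    @dataclass
    class FirstOrderRep:
        """A representation of something in the world,
        e.g. a pattern in early visual cortex."""
        content: str

    @dataclass
    class HigherOrderRep:
        """A representation whose target is another representation."""
        target: FirstOrderRep   # what the system is aware of
        gloss: str              # e.g. "I am seeing a red apple"

    apple = FirstOrderRep(content="red apple to the left")
    aware_of_apple = HigherOrderRep(target=apple, gloss="I am seeing a red apple")

    # Metacognition would be yet another layer: a representation that
    # takes aware_of_apple itself as its target, i.e. thinking about
    # one's own awareness.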

Given that I’ve often pondered that qualia are the raw stuff of the communication from the perceiving parts of the brain (where the first order representations are) to the planning parts of the brain (where the second order representations are), HOT strikes me as very plausible.

Crucially, higher order representations, despite their name, are much more primal in nature than metacognition.  The paper does admit that they likely share some common functional areas, but metacognition is a more sophisticated and comprehensive activity.  It strikes me as very likely that metacognition is built with recursive higher order representations.

My only reservation with HOT, and I’m sure various specific versions handle this, is that not just any higher order representation is going to be conscious.  It will depend on where in the information flows that representation is formed.  And the higher order representation shouldn’t be thought of as a simple echo of the first order one, but as an informational structure with its own unique functionality.

Ultimately HOT will have to be judged by how well it matches observations, but a nice implication of it is that inner experiences aren’t ruled out for species that show little or no evidence for metacognition.


Changing what makes us happy

from Saturday Morning Breakfast Cereal (click through for the hovertext and red button caption)

Greg Egan in his novel Incandescence posits an alien civilization whose ancestors, in order to survive, establish a series of space habitats.  To ensure their descendants will be happy, they bioengineer those descendants to feel satisfaction and bliss working within and maintaining the habitat.  And to keep them happy in such a limited environment, they remove or minimize curiosity in the average inhabitant.

But in order to ensure that the occasional dangerous situation can be handled, they arrange for a few individuals in each generation to have curiosity.  In most generations, these individuals are miserable outcasts, but when their attributes are needed, they’re saviors.  The novel shows us a generation where the curious individuals become leaders and save the world (the habitat), and another where the one curious individual is desperately lonely and unhappy.

I often wondered if this type of survival would be worth it.  The aliens had removed just about everything that made them…them, in order to ensure that some version of their species continued to survive, with only the occasional lonely individual retaining some part of their scientific curiosity.

At some point, we may develop the ability to reprogram ourselves, to control what makes us happy.  The question is, is happiness achieved this way real happiness?  Does it make a difference what kind of scenario we choose to make us happy?

For example, I could see someone deciding that reprogramming people to enjoy hard work and virtuous living is a good thing.  Presumably no one would think reprogramming people to enjoy a horrible death would be good, but if we had a situation where people were stuck in miserable lives and we could simply reprogram them to enjoy their lot, would that be ethical?  Why, or why not?

And how would this be different from a philosophy like stoicism, where people essentially will themselves to be at peace with unavoidable circumstances?


Is the singularity right around the corner?

Schematic Timeline of Information and Replicators in the Biosphere: major evolutionary transitions in information processing.

Image credit: Myworkforwiki via Wikipedia

You’ve probably heard the narrative before.  At some point, we will invent an artificial intelligence that is more intelligent than we are.  That superhuman intelligence will then have the capability either to build an improved version of itself or to engineer upgrades that increase its own intelligence.  This sets off a loop: the system upgrades itself, uses its greater intelligence to find new ways to enhance itself, then upgrades itself again, in a rapid runaway process that produces an intelligence explosion.

Given that we only have human level intelligence, we have no ability to predict what happens next.  Which is why Vernor Vinge coined the phrase “the technological singularity” in 1993.  The “singularity” part of the label refers to singularities that exist in math and science, points at which existing theories or frameworks break down.  Vinge predicted that this would happen “within 30 years” and would mark the “end of the human era.”

Despite our purported inability to make predictions, some people nevertheless make predictions about what happens next.  Where they go with it depends on whether they’re a pessimist or an optimist.  The pessimist doesn’t imagine things turning out very well for humanity.  At best, we might hope to hang around as pets.  At worst, the machines might either accidentally or intentionally wipe us out.

Most people who get excited about the singularity fall into the optimist camp.  They see it being a major boon for humanity.  The superhuman intelligences will provide the technology to upload ourselves into virtual environments, providing immortality and heaven on Earth.  We will be taken along on the intelligence explosion ride, ultimately resulting, according to Ray Kurzweil, in the universe “waking up.”  This quasi-religious vision has been called “the rapture of the nerds.”

The modern singularity sentiment is that it will happen sometime in the 2040s, in other words, in about 20-30 years.  Note however that Vinge’s original essay was written in 1993, when he said it would happen in about 30 years, a point that we’re rapidly approaching.

(Before going any further, I can’t resist pointing out that it’s 2019, the year when the original Blade Runner happens!  Where is my flying car?  My off world colonies?  My sexy replicant administrative assistant?)

Human level artificial intelligence is almost always promised to be 20 years in the future.  It’s been 20 years in the future since the 1950s.  (In this way, it’s similar to fusion power and human exploration of Mars, both of which have also been 20 years in the future for the last several decades.)  Obviously all the optimistic predictions in previous decades were wrong.  Is there any reason to think that today’s predictions are any more accurate?

One reason frequently cited for the predictions is the ever increasing power of computer processing chips.  Known as Moore’s Law, the trend of increasing computational power was first noted by Gordon Moore in the 1960s.  What Moore actually noticed was the doubling of the number of transistors on an integrated circuit chip over a period of time (originally one year, but later revised to every two years).
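
To get a feel for what that doubling implies, here’s a quick back-of-envelope extrapolation.  (The 1971 starting point is the Intel 4004’s roughly 2,300 transistors; the numbers are purely illustrative.)

    # Back-of-envelope Moore's Law: transistor counts doubling every
    # two years, starting from the Intel 4004 (~2,300 transistors, 1971).
    start_year, start_count = 1971, 2300
    doubling_period = 2  # years

    for year in (1981, 1991, 2001, 2011, 2021):
        doublings = (year - start_year) / doubling_period
        count = start_count * 2 ** doublings
        print(f"{year}: ~{count:,.0f} transistors")

    # 1981: ~73,600 transistors
    # 1991: ~2,355,200 transistors
    # 2001: ~75,366,400 transistors
    # 2011: ~2,411,724,800 transistors
    # 2021: ~77,175,193,600 transistors

The biggest real chips today are roughly in that tens-of-billions range, which is why the trend was so seductive for so long, and why its slowdown matters.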

It’s important to understand that Moore never saw this as an open ended proposition.  From the beginning, it was understood that eventually fundamental barriers would get in the way and the “law” would end.  In fact, Moore’s Law in recent years has started sputtering.  Progress has slowed and may halt completely between 2020 and 2025 after transistor features have been scaled down to 7 nanometers, below which quantum tunneling and other issues are expected to make further miniaturization infeasible, at least with doped silicon.

Undeterred, Kurzweil and other singularity predictors express faith that some new technology will step in to keep things moving, whether it be new materials (such as graphene) or new paradigms (neuromorphic computing, quantum computing, etc).  But any prediction on the rate of progress after Moore’s Law peters out is based more on faith than science or engineering.

It’s worth noting that achieving human level intelligence in a system is more than just a capacity and performance issue.  We won’t keep adding performance and have the machine “wake up.”  Every advance in AI so far has required meticulous and extensive work by designers.  There’s not currently any reason to suppose that will change.

AI research got out of its “winter” period in the 90s when it started focusing on narrow, relatively practical solutions rather than the quest to build a mind.  The achievements we see in the press continue to be along those lines.  The reason is that engineers understand these problems and have some idea how to tackle them.  They aren’t easy by any stretch, but they are achievable.

But building a mind is unlikely to happen until we understand how the natural versions work.  I often write about neuroscience and our growing understanding of the brain.  We have a broad but very blurry idea of how it works, with detailed knowledge on a few regions.  But that knowledge is nowhere near the point where someone could use it to construct a technological version.  If you talk to a typical neuroscientist, they will tell you that level of understanding is probably at least a century away.

To be clear, all the evidence is that the mind is a physical system that operates according to the laws of physics.  I see no good reason to suppose that a technological version of it can’t be built…eventually.  But predictions that it will happen in 20-30 years seem like overly optimistic speculation, speculation that is very similar to the predictions people have been making for 70 years.  It could happen, but confident assertions that it will happen strike me as snake oil.

What about superhuman intelligence?  Again, there’s no reason to suppose that human brains are the pinnacle of possible intelligence.  On the other hand, there’s nothing in nature demonstrating intelligence orders of magnitude greater than humans.  We don’t have an extant example to prove it can happen.

It might be that achieving the computational complexity and capacity of a human brain requires inevitable trade offs that put limits on just how intelligent such a system can be.  Maybe squeezing hundreds of terabytes of information into a compact massively parallel processing framework operating on 20 watts of power and producing a flexible intelligence, due to the laws of physics, requires slower performance and water cooled operation (aka wetware).   Or there may be alternate ways to achieve the same functionality, but they come with their own trade offs.

In many ways, the belief in god-like superhuman AIs is an updated version of the notions that humanity has entertained for tens of thousands of years, likely since our beginnings, that there are powerful conscious forces running the world.  This new version has us actually creating the gods, but the resulting relationship is the same, particularly the part where they come in and solve all our problems.

My own view is that we will eventually have AGI (artificial general intelligence) and that it may very well exceed us in intelligence, but the runaway process envisioned by singularity enthusiasts will probably be limited by logistical realities and design constraints and trade offs we can’t currently see.  While AGI is progressing, we will also be enhancing our own performance and integrating with the technology.  Eventually biological engineering and artificial intelligence will converge, blurring the lines between engineered and evolved intelligence.

But it’s unlikely to come in some hard take off singularity, and it’s unlikely to happen in the next few decades.  AGI and mind uploading are technologies that likely won’t come to fruition until several decades down the road, possibly not for centuries.

I totally understand why people want it to happen in a near time frame.  No one wants to be in one of the last mortal generations.  But I fear the best we can hope for in our lifetime is that someone figures out a way to save our current brain state.  The “rapture of the nerds” is probably wishful thinking.

Unless of course I’m missing something.  Are there reasons for optimism that I’ve overlooked?
