Is the singularity right around the corner?

Schematic Timeline of Information and Replicators in the Biosphere: major evolutionary transitions in information processing.
Image credit: Myworkforwiki via Wikipedia

You’ve probably heard the narrative before.  At some point, we will invent an artificial intelligence that is more intelligent than we are.  That superhuman intelligence will then have the capability to either build an improved version of itself, or engineer upgrades that improve its own intelligence.  This will set off a process where the system upgrades itself, uses its greater intelligence to come up with new ways to enhance itself, and then upgrades itself again, looping in a rapid runaway process and producing an intelligence explosion.

Given that we only have human level intelligence, we have no ability to predict what happens next.  Which is why Vernor Vinge coined the phrase “the technological singularity” in 1993.  The “singularity” part of the label refers to singularities that exist in math and science, points at which existing theories or frameworks break down.  Vinge predicted that this would happen “within 30 years” and would mark the “end of the human era.”

Despite our purported inability to make predictions, some people nevertheless make predictions about what happens next.  Where they go with it depends on whether they’re a pessimist or an optimist.  The pessimist doesn’t imagine things turning out very well for humanity.  At best, we might hope to hang around as pets.  At worst, the machines might either accidentally or intentionally wipe us out.

Most people who get excited about the singularity fall into the optimist camp.  They see it being a major boon for humanity.  The superhuman intelligences will provide the technology to upload ourselves into virtual environments, providing immortality and heaven on Earth.  We will be taken along on the intelligence explosion ride, ultimately resulting, according to Ray Kurzweil, in the universe “waking up.”  This quasi-religious vision has been called “the rapture of the nerds.”

The modern singularity sentiment is that it will happen sometime in the 2040s, in other words, in about 20-30 years.  Note however that Vinge’s original essay was written in 1993, when he said it would happen in about 30 years, a point that we’re rapidly approaching.

(Before going any further, I can’t resist pointing out that it’s 2019, the year when the original Blade Runner happens!  Where is my flying car?  My off world colonies?  My sexy replicant administrative assistant?)

Human level artificial intelligence is almost always promised to be 20 years in the future.  It’s been 20 years in the future since the 1950s.  (In this way, it’s similar to fusion power and human exploration of Mars, both of which have also been 20 years in the future for the last several decades.)  Obviously all the optimistic predictions in previous decades were wrong.  Is there any reason to think that today’s predictions are any more accurate?

One reason frequently cited for these predictions is the ever increasing power of computer processing chips.  Known as Moore’s Law, the trend of increasing computational power was first noted by Gordon Moore in the 1960s.  What Moore actually observed was the doubling of the number of transistors on an integrated circuit chip over a period of time (originally one year, but later revised to every two years).

It’s important to understand that Moore never saw this as an open ended proposition.  From the beginning, it was understood that eventually fundamental barriers would get in the way and the “law” would end.  In fact, Moore’s Law in recent years has started sputtering.  Progress has slowed and may halt completely between 2020 and 2025 after transistor features have been scaled down to 7 nanometers, below which quantum tunneling and other issues are expected to make further miniaturization infeasible, at least with doped silicon.
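To make the doubling arithmetic concrete, here’s a minimal sketch in Python, projecting forward from the roughly 2,300 transistors of the 1971 Intel 4004.  The clean two-year doubling is an idealization; the real cadence has been lumpier than this.

```python
# Back-of-the-envelope sketch of Moore's Law as a clean two-year doubling,
# starting from the ~2,300 transistors of the Intel 4004 in 1971.

def projected_transistors(year, start_year=1971, start_count=2_300,
                          doubling_years=2):
    """Project a transistor count assuming an uninterrupted two-year doubling."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

for year in (1971, 1991, 2011, 2019, 2041):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The 2019 figure (around 39 billion) lands in roughly the right ballpark for the largest chips of that era, which is part of why the extrapolation feels so compelling.  The 2041 figure, in the tens of trillions, is the kind of continued growth that singularity timelines quietly assume, and it’s exactly what the scaling limits above call into question.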

Undeterred, Kurzweil and other singularity predictors express faith that some new technology will step in to keep things moving, whether it be new materials (such as graphene) or new paradigms (neuromorphic computing, quantum computing, etc).  But any prediction on the rate of progress after Moore’s Law peters out is based more on faith than on science or engineering.

It’s worth noting that achieving human level intelligence in a system is more than just a capacity and performance issue.  We won’t keep adding performance and have the machine “wake up.”  Every advance in AI so far has required meticulous and extensive work by designers.  There’s not currently any reason to suppose that will change.

AI research got out of its “winter” period in the 90s when it started focusing on narrow, relatively practical solutions rather than the quest to build a mind.  The achievements we see in the press continue to be along those lines.  The reason is that engineers understand these problems and have some idea how to tackle them.  They aren’t easy by any stretch, but they are achievable.

But building a mind is unlikely to happen until we understand how the natural versions work.  I often write about neuroscience and our growing understanding of the brain.  We have a broad but very blurry idea of how it works, with detailed knowledge on a few regions.  But that knowledge is nowhere near the point where someone could use it to construct a technological version.  If you talk to a typical neuroscientist, they will tell you that level of understanding is probably at least a century away.

To be clear, all the evidence is that the mind is a physical system that operates according to the laws of physics.  I see no good reason to suppose that a technological version of it can’t be built…eventually.  But predictions that it will happen in 20-30 years seem like overly optimistic speculation, very similar to the predictions people have been making for 70 years.  It could happen, but confident assertions that it will happen strike me as snake oil.

What about superhuman intelligence?  Again, there’s no reason to suppose that human brains are the pinnacle of possible intelligence.  On the other hand, there’s nothing in nature demonstrating intelligence orders of magnitude greater than humans.  We don’t have an extant example to prove it can happen.

It might be that achieving the computational complexity and capacity of a human brain requires inevitable trade-offs that put limits on just how intelligent such a system can be.  Maybe squeezing hundreds of terabytes of information into a compact, massively parallel processing framework operating on 20 watts of power and producing a flexible intelligence requires, due to the laws of physics, slower performance and water-cooled operation (aka wetware).  Or there may be alternate ways to achieve the same functionality, but they come with their own trade-offs.

In many ways, the belief in god-like superhuman AIs is an updated version of the notions that humanity has entertained for tens of thousands of years, likely since our beginnings, that there are powerful conscious forces running the world.  This new version has us actually creating the gods, but the resulting relationship is the same, particularly the part where they come in and solve all our problems.

My own view is that we will eventually have AGI (artificial general intelligence) and that it may very well exceed us in intelligence, but the runaway process envisioned by singularity enthusiasts will probably be limited by logistical realities and design constraints and trade offs we can’t currently see.  While AGI is progressing, we will also be enhancing our own performance and integrating with the technology.  Eventually biological engineering and artificial intelligence will converge, blurring the lines between engineered and evolved intelligence.

But it’s unlikely to come in some hard-takeoff singularity, and it’s unlikely to happen in the next few decades.  AGI and mind uploading are technologies that likely won’t come to fruition until several decades down the road, possibly not for centuries.

I totally understand why people want it to happen in a near time frame.  No one wants to be in one of the last mortal generations.  But I fear the best we can hope for in our lifetime is that someone figures out a way to save our current brain state.  The “rapture of the nerds” is probably wishful thinking.

Unless of course I’m missing something.  Are there reasons for optimism that I’ve overlooked?

Why alien life will probably be engineered life

Martin Rees has an interesting article at Nautilus: When We Find Aliens, We Might Find Something Like the Borg

This September, a team of astronomers noticed that the light from a distant star is flickering in a highly irregular pattern. They considered the possibility that comets, debris, and impacts could account for their observations, but each of these explanations was unlikely to varying degrees. What their paper didn’t explore, but they and others are beginning to speculate, is that the flickering might be caused by enormous structures built by an advanced civilization—whether the light might be evidence of ET.

In thinking about this possibility, or other similarly suggestive evidence of extraterrestrial life, an image of an alien creature might come to mind—something green, perhaps, or with tentacles or eye stalks. But in this we are probably mistaken. I would argue that any positive identification of ET will very likely not originate from organic or biological life (as Paul Davies has also argued), but from machines.

Few doubt that machines will gradually surpass more and more of our distinctively human capabilities—or enhance them via cyborg technology. Disagreements are basically about the timescale: the rate of travel, not the direction of travel. The cautious amongst us envisage timescales of centuries rather than decades for these transformations.

A few thoughts.

First, I haven’t commented yet here about KIC 8462852, the star Rees mentions in the first paragraph.  It would be beyond cool if this turned out to be something like a partial Dyson swarm or some other megastructure.  But with these types of speculation, it pays to be extra skeptical of propositions we want to be true.  Possibility is not probability.  I think the chances that this is an alien civilization are remote, but I can’t say I’m not hoping.

On the rest of Rees’s article, I largely agree.  (I’m sure my regular readers aren’t shocked by this.)  I do have one quibble though.  Rees uses the terms “robotic” or “machine life”.  In cases where it would make sense to have a body of metal and silicon, such as operating in space or some other airless environment, I think it’s likely that’s what would be used (or its very advanced equivalent).

But when operating inside of a biosphere, I suspect “machine life” might be more accurately labelled as “engineered life”.  In such an environment, an organic body, designed and grown by an advanced civilization for the local biosphere, might be far more useful and efficient than a machine one.  An organic body could get its energy from the biosphere using biological functions such as eating and breathing.  This might be substantially more efficient than carrying a power pack or whatever.

If we met such life, they might well resemble classic sci-fi aliens in some broad fashion.  Nor do I think we should dismiss the possibility that the form of such aliens would stay fairly close to their original evolved shapes.  Even the advanced machine versions might well resemble those original shapes, at least in some contexts.

Of course, that original shape might still be radically different than anything in our experience, such as Rees’s speculation about something that starts as an evolved integrated intelligence.  And after billions of years, engineered life may inevitably become an integrated intelligence, at least on the scope of a planet.  (The speed of light barrier would constrain the level of integration across interstellar distances.)

A darker vision of the post-singularity: The Quantum Thief trilogy

I just finished reading Hannu Rajaniemi’s Quantum Thief trilogy: ‘The Quantum Thief‘, ‘The Fractal Prince‘, and ‘The Causal Angel‘.  (The official name of the trilogy is the Jean le Flambeur series, named after one of the chief protagonists, but everyone seems to call it the Quantum Thief trilogy instead.)

Most visions of society after the singularity (or something like the singularity) tend to be utopias, or near utopias.  Rajaniemi’s vision is far darker and more mixed, with some aspects being nightmarish.  Of course, from a story perspective, that actually makes for more fertile ground.

In the Quantum Thief universe, a posthuman civilization exists throughout the solar system, spread between numerous societies.  Mind uploading developed prior to AGI (artificial general intelligence).  Raw AGI itself has proven to be extremely dangerous, and AGIs are referred to as “dragons.”  They are rarely released due to their ravenous and uncontrollable nature.  (Rajaniemi, in an interview, stated that this is not necessarily how he expects things would be, but that it was a useful plot device to restrain the story to being between human-like agents.)

The most powerful society in the inner solar system is the Sobornost, a series of collectives with billions of minds, referred to as “gogols” in the story.  Gogols are either uploaded human minds or minds created in the template of human minds.  Just about every piece of Sobornost technology involves the use of legions of gogol slaves, implying that for much of humanity, existence has been reduced to the role of mind slaves.  The Sobornost is divided between physical substrates called guberniyas, with each one ruled by a “founder”, presumably one of the people who developed the Sobornost system.

The outer solar system is ruled by a series of collectives referred to as “Zokus”, which are initially presented as gaming circles.  But as the story progresses, it becomes apparent that Zoku society refers to just about any endeavor as a “game”, with an overarching governing authority referred to as the “Great Game” Zoku.   Zoku society appears to be far more appealing than the Sobornost, and they are essentially the Sobornost’s primary power rival.

But there are several other societies.  One is the Oubliette, a city on Mars that walks on huge mechanical legs so that it can keep moving to avoid the nanobot infections infesting the Martian surface.  The Oubliette society is a posthuman one, but one where everyone lives in human form, except when doing a tour of duty running the city’s machinery.  Every citizen also has control of just how much information they share in social interactions with others.

Another society is the city of Sirr, the last outpost of humanity on Earth, situated somewhere in the Middle East region, and constantly having to protect itself from Earth’s version of the nanobot infections, referred to as wildcode.  Then there is Oort, a society of humans living in the Oort cloud, the vast region of cometary bodies extending up to two light years from the Sun.  There are also references to societies of bear-like creatures (presumably posthumans who adopted that shape for some reason) in the asteroid belt, as well as various mercenary clans.

The story begins with the main protagonist, a thief named Jean le Flambeur, serving time in the “Dilemma Prison”, a virtual prison where inmates are forced to regularly play out painful prisoner’s dilemmas, a game theory exercise that, presumably, is meant to teach them that tit-for-tat is the most successful strategy, reforming them for society.  le Flambeur is rescued from the prison by the story’s other protagonist, Mieli, a warrior from Oort working for the Sobornost.  le Flambeur’s thief skills are needed for a mysterious mission.
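As an aside for anyone unfamiliar with the game theory, the tit-for-tat point is easy to see in a toy iterated prisoner’s dilemma.  Here’s a minimal sketch in Python using the standard textbook payoffs (it has nothing to do with Rajaniemi’s actual prison, which is far more baroque):

```python
# Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Standard payoffs: mutual cooperation = 3 each, mutual defection = 1 each,
# a lone defector gets 5 and its victim gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then simply copy the opponent's last move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (99, 104): exploited only once
print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
```

Against an unconditional defector, tit-for-tat gets burned only on the opening move and then matches defection with defection; paired with itself, it cooperates all the way through.  That, presumably, is the lesson the Dilemma Prison is supposed to beat into its inmates.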

The stories go on to explore a number of concepts, such as how reliable our memories can be, the concept of self, whether or not and to what degree we have free will, and what it means to be human.  There are a lot of interesting ideas in these books, and plenty of action to keep things exciting.  And much of what is presented at the beginning is not how it appears.  If you like posthuman science fiction, I highly recommend them, with a qualification.

That qualification is that the prose is very dense.  Rajaniemi introduces new concepts without explanation and counts on the reader picking up their meaning through context, which I for one wasn’t always able to do.  I actually found the first and last books manageable in this regard, with an occasional Wikipedia break helping, but the middle book was a tough slog, with many concepts given names from Islamic spirituality and/or Arabian mythology, whose meanings often weren’t readily available from quick Google searches.

In some cases, I suspected the dense prose masked scientific or plot weaknesses.  And some concepts never seem to get an adequate explanation, with several interpretations of the narrative possible.  I’m generally not a fan of this type of writing, and probably wouldn’t have tolerated it if Rajaniemi’s story and universe hadn’t been so compelling.

But they are, and that’s why, despite its flaws (which some might see as strengths), I still recommend these books.

What do you think about machines that think?

Thinking machines (a cymek and Erasmus) from the cover of Dune: The Machine Crusade (2003) (Photo credit: Wikipedia)

The Edge question for this year was, “What do you think about machines that think?”  There are a lot of good responses, and some predictably inane ones.  Daniel Dennett gives a good write up on why the Singularity is overblown, and points out something that I’ve said myself, that the real danger isn’t artificial intelligence, but artificial stupidity.

Steven Pinker gives another excellent response, but I think the best one was given by theoretical physicist Sean Carroll.  I had some serious issues last year with Carroll’s response to the 2014 question of “What scientific idea is ready for retirement?”, where Carroll advocated ditching falsifiability.  My post taking issue with his response has been one of the most heavily visited ones that I’ve done.

Since I’m a fan of Carroll’s, I was pleased this year to see that, not only am I (almost) completely in agreement with his response, I find it the best of the ones I’ve read so far: We Are All Machines That Think.

Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

(My only real beef with Carroll’s response is the verbiage immediately preceding this quote where he asserts that science has a complete understanding of the physics involved in everyday life, an assertion I find a bit hubristic, since we often don’t know what it is that we don’t know, the “unknown unknowns” in Rumsfeldian terminology.)

Although I agree with Carroll’s main contention, that we are evolved machines, I can see two objections people might make to the “thinking machines” concept, at least aside from the semantic quibbling about insisting that a “machine” is something humans build.

The first is to assert that there is a non-physical aspect to humans that machines will never be able to duplicate.  I’ve already done a post on why I think the mind is the brain.  The TL;DR is that the well known effects of brain damage and mind altering drugs, which can affect not only our physical coordination, but our memories, inclinations, and our most profound moral and intellectual decisions, leave little room for a non-physical aspect of mind.  As I admit in that earlier post, there is still logical space for a non-physical aspect to the mind, but it is rapidly shrinking as neuroscience advances, and already excludes many things that make us, us.

The second is to admit that the mind is the brain, but assert that the brain’s mechanisms are too complicated to ever be reproduced.  Perhaps mental processing happens at the base layer of reality, say the quantum layer, or perhaps some unknown lower layer.  While conceivable, there’s no real reason to think that at this point, except to find a way to cling to human exceptionalism.  While we have reasons to suspect that the brain uses quantum effects, we have no good reason to suppose it uses them in any non-standard way, that is, in any way that departs from how quantum physics is understood to work.

My personal view is that the “secret sauce” of mental processing probably happens at the level of neurons and synapses, with perhaps nuances coming from the molecular level, which might indeed be difficult to reproduce technologically, but far from impossible.  In any case, this is only a problem if someone is attempting to reproduce the exact way a human mind works, not if they are attempting to build something else with the same capabilities and capacities.

Will we ever have thinking engineered machines?  Depending on how you define “thinking”, we already do.  But even if you use a definition that includes consciousness or some other mental capability that machines don’t currently have, I don’t see any fundamental aspect of reality that would prevent it.  (Unlike, say, faster than light travel, which our understanding of physics currently makes unlikely.)  We might eventually discover some such fundamental limitation, but until we do, saying it’s impossible strikes me as overly pessimistic (or, depending on your point of view, unrealistically optimistic).

The Edge question also mentions the “dangers” of AI that people periodically express anxiety over.  I’ve done numerous posts on this.  All I’ll say here is that we, as evolved survival machines, fear creating a superior survival machine, but most AIs will have primary purposes other than survival, such as navigation, analysis, construction, etc.  They’re as unlikely to accidentally become survival machines as my Sony PlayStation is to accidentally become a Garmin GPS.

What do you think of the Edge question?  Or the responses?

Worm ‘Brain’ uploaded into robot, which then behaves like a worm

Steve Morris clued me in to this article: Worm ‘Brain’ Uploaded Into Lego Robot | Singularity HUB.

Can a digitally simulated brain on a computer perform tasks just like the real thing?

For simple commands, the answer, it would seem, is yes it can. Researchers at the OpenWorm project recently hooked a simulated worm brain to a wheeled robot. Without being explicitly programmed to do so, the robot moved back and forth and avoided objects—driven only by the interplay of external stimuli and digital neurons.

The article comes with an accompanying video.

Now, the C. elegans worm has about the simplest central nervous system in nature, with only around 300 neurons and 7,000 synapses (compared to a human’s 86 billion neurons and 100 trillion synapses).  Still, the fact that putting that connectome (the map of a brain’s connections) into a robot produced behavior resembling what an actual C. elegans would do is intriguing.
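To be clear about what was and wasn’t programmed, the general idea is simply that a fixed wiring diagram translates sensory activation into motor output.  Here’s a toy sketch of that idea in Python; it is not the OpenWorm code, and the tiny hand-made “connectome” in it is purely hypothetical:

```python
# Toy illustration of driving a robot from a connectome: activation from
# simulated stimuli flows through a fixed wiring diagram to motor neurons,
# with no task-specific behavior programmed in between.  The tiny
# hand-made "connectome" below is purely hypothetical.

CONNECTOME = {                      # neuron -> {downstream neuron: weight}
    "nose_touch":  {"inter_avoid": 1.0},
    "food_smell":  {"inter_seek": 1.0},
    "inter_avoid": {"motor_reverse": 0.9, "motor_forward": -0.6},
    "inter_seek":  {"motor_forward": 0.8},
}
LAYERS = [["nose_touch", "food_smell"], ["inter_avoid", "inter_seek"]]

def motor_output(stimuli, threshold=0.5):
    """Propagate activation layer by layer and return motor drive levels."""
    activation = dict(stimuli)
    for layer in LAYERS:
        for neuron in layer:
            level = activation.get(neuron, 0.0)
            if level < threshold:
                continue
            for target, weight in CONNECTOME[neuron].items():
                activation[target] = activation.get(target, 0.0) + level * weight
    return {m: activation.get(m, 0.0)
            for m in ("motor_forward", "motor_reverse")}

# The robot bumps into something while smelling food: avoidance wins out.
print(motor_output({"nose_touch": 1.0, "food_smell": 1.0}))
# Food alone: it simply drives forward.
print(motor_output({"food_smell": 1.0}))
```

The point is that nothing in the code explicitly says “back away from obstacles”; that behavior falls out of the wiring.  OpenWorm’s simulation is vastly more detailed, but the principle is the same.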

The article ends by asking the obvious question:

In this example, we’re talking very simple behaviors. But could the result scale? That is, if you map a human brain with similarly high fidelity and supply it with stimulation in a virtual or physical environment—would some of the characteristics we associate with human brains independently emerge? Might that include creativity and consciousness?

There’s only one way to find out.

Of course, many will insist that we shouldn’t even try.  But I suspect that train will leave the station regardless.

Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen has an interesting article at Vanity Fair that looks at the debate among technologists about the singularity: Enthusiasts and Skeptics Debate Artificial Intelligence | Vanity Fair.

Machines performing unimaginably complicated calculations unimaginably fast—that’s what computers have always done. Computers were called “electronic brains” from the beginning. But the great open question is whether a computer really will be able to do all that your brain can do, and more. Two decades from now, will artificial intelligence—A.I.—go from soft to hard, equaling and then quickly surpassing the human kind? And if the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

The article discusses figures like Ray Kurzweil and Peter Diamandis, who strongly believe the singularity is coming and are optimistic about it, and skeptics like Jaron Lanier and Mitch Kapor, who are skeptical of singularity claims.

Personally, I put myself somewhere in the middle.  I’m skeptical that there’s going to be a hard takeoff singularity in the next 20-30 years, an event where technological progress runs away into a technological rapture of the nerds.  But I do think many of the claims that singularitarians make may come true, eventually.  But “eventually” might be centuries down the road.

My skepticism comes from two broad observations.  The first is that I’m not completely convinced that Moore’s Law, the observation by Gordon Moore, co-founder of Intel, that the number of transistors on semiconductor chips doubles every two years, is going to continue indefinitely into the future.

No one knows exactly when we’ll hit the limits of semiconductor technology, but logic-gate sizes are getting closer to the size of atoms, often understood to be a fundamental limit.  It’s an article of faith among staunch singularitarians that some new technology, like quantum or optical computing, will step in to continue the progress, but I can’t see any guarantee of that.  Of course, there’s no guarantee that one of those new technologies won’t soar into even higher exponential progress, but beating our chests and proclaiming trust in its eventuality is more emotion than rationality.

The second observation is that the people making predictions of a technological singularity generally understand computing technology (although not all of them do), but not neuroscience.  In other words, they understand one side of the equation, but not the other.  The other day I linked to a study which showed that predictions of hard AI since Turing have been consistently overoptimistic, not necessarily on the technology itself, but on where the technology would have to be to function anything like an organic brain (human or not).

Now, that being said, I do think many of the skeptics are too skeptical.  Many of them insist that we’ll never be able to build a machine that can match the human brain, that we’ll never understand it well enough to do so.  I can’t see any real basis for that level of pessimism.

In my experience, when someone claims that X will be forever unknowable, what they’re really saying, explicitly or implicitly, is that we shouldn’t ever have that knowledge.  I can’t disagree more with that kind of thinking.  Maybe there will be  areas of reality we’ll never be able to understand, but I certainly hope the people who a priori conclude that about those areas never get the ability to prevent others from trying.

There are a lot of other things singularitarians assert, such as the whole universe being converted into “computronium” or beings able to completely defy our current understanding of physics.  I think these types of predictions are simply unhinged speculation.  Sure, we can’t rule them out, but having any level of confidence in them strikes me as silly.

None of this is to say that there won’t be amazing progress with AI in the next few years.  We’ll see computers able to do things that will surprise and delight us, and make many people nervous.  In other words, the current trends will continue.  I think we’ll eventually get there, and I’d love it if it happened in my lifetime, but I suspect it will be a much longer and harder slog than most of the singularity advocates imagine.

xkcd: AI-Box Experiment

Click through for full sized version and yellow caption.

via xkcd: AI-Box Experiment.

I do keep saying that AIs won’t want what we want.

Push back against AI alarmism

We’re finally starting to see some push back against the AI (artificial intelligence) alarmism that has been so prevalent in the media lately.  People like Stephen Hawking, Elon Musk, Max Tegmark, and many others have sounded the alarm.  Given my previous post from last night, I think these alarms are premature at best, and are generally misguided.

Now, Rodney Brooks, of Roomba fame, has a post up telling people to chill about AI.

Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill.  This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

…In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch.  It is going to take a lot of deep thought and hard work from thousands of scientists and engineers.  And, most likely, centuries.

The science is in and accepted on the world being round, evolution, climate change, and on the safety of vaccinations. The science on AI has hardly yet been started, and even its time scale is completely an open question.

And this Edge discussion, titled ‘The Myth of AI‘, is getting shared out a lot.  I found it a bit long-winded and rambling, but it expresses a lot of important points.

About the only thing I disagree with on these posts is how much they emphasize how far away we currently are from having AGI (artificial general intelligence), as opposed to the specialized AI we have today.  It’s totally true that we are very far away from an AGI, but I think comforting people with only that leaves out the main reason they shouldn’t freak out.

As I’ve written about multiple times, the fear of AI is the fear that it will have its own agenda, similar to how we and other animals typically have our own agenda.  But our agenda is largely influenced by hundreds of millions of years of evolution.  AIs aren’t going to have that history.  The only agenda they will have, the only desires, impulses, etc, will be the ones they are engineered to have.  The chance of them accidentally acquiring the self actualization agenda that most animals have is infinitesimal.

This is easier to conceive of if we called AIs “engineered intelligences” whose main agenda will be an engineered one, in contrast with “evolved intelligences” whose main agenda is typically survival, procreation, and anything that promotes those goals.

Of course, we might eventually have the ability to build an AI to have an agenda similar to ours.  But if we do that, and treat them as anything less than a fellow being, I think we’d deserve whatever happened next.  Luckily, we have no real incentive to design machines that would hate what we want them to do.  We have every incentive to design machines that will love what we want them to do.  As long as we do that, the danger from AI will be minimal.

Human level AI is always 20 years in the future

Steven Pinker highlighted this study which tracks the predictions of when human level AI (artificial intelligence) will be achieved.  According to the paper, the predictions cluster around predicting that it will be achieved in 15-25 years, and they have been doing so for the last 60 or so years.  The paper also notes that expert predictions have fared no better than non-expert predictions, and actually cluster the same way.

None of this should be too surprising.  Most AI predictions are made by experts in computing technologies, but computer experts are not human mind experts.  Indeed, the level of expertise that exists for computing technology simply doesn’t exist yet for the human brain.  So any predictions made comparing the two should be suspect.  And the people who know the most about the brain, neuroscientists, speak in terms of a century before they’ll know as much about its workings as we know about computers.

It would be wrong to take this as evidence that human level AI is impossible.  The human mind exists in nature.  To assert that human level AI is impossible would be to assert that something can exist in nature that human technology cannot replicate.  Historically, no other prediction along these lines has been borne out, and we have no good reason to suspect the human mind will be an exception.

But, as this paper demonstrates, we have very good reasons to be skeptical of anyone who predicts that AI will be here in 20 years, or of any action they’d like us to take in relation to that prediction.

via Steven Pinker

Is the human species still evolving? Of course.

It looks like Bill Nye, the science guy, is coming out with a new book on evolution, with an excerpt at Popular Science: Is The Human Species Still Evolving? | Popular Science.

We cannot step away from evolution. Our genomes are always collecting mutations, and we are always making mate selections. Are humans preferentially mating with other humans who are tall? Blonde or not blonde?

Are smart people actually producing significantly smarter offspring, who end up making more money and ever so slowly outcompeting other families? Or is intelligence a losing trait, because highly educated couples tend to have smaller families, so when something goes wrong there are fewer siblings left to carry the genes forward? Or since highly educated men and women have babies later in life than those that don’t squander their best childbearing years in universities, do the babies of the highly educated enter the world with more trouble in childbirth, and are they prone to more subtle gene troubles that result from later mother and fatherhood? Cue the spooky music.

When I was younger, I reasoned that evolution had ended for humanity because we lived in organized societies that protected the weak.  Without the weak dying in the wilderness, I thought, natural selection couldn’t…select.  And without that selection, traits couldn’t disappear nor new ones dominate, and so evolution couldn’t happen.  But my understanding of natural selection was simplistic.

First off, just because we now live in societies that, at least sometimes, protect the weak, doesn’t mean that mutations don’t happen.  Without the harsh wilderness selection our ancestors lived under, there’s probably a lot more variation in the human genome than existed, say, 10,000 years ago.  Mutations that might have quickly been selected out in hunter gatherer societies have more of a chance in civilization.

But my second misunderstanding was in believing that natural selection is about survival.  It is, partly, but it’s more about reproductive success.  And animals with certain traits don’t have to be completely unsuccessful reproductively for their traits to disappear.  Given enough time and generations, they only have to be slightly less successful than animals with different traits.

And finally, traits that will be successful in a hunter gatherer culture, such as males with athletic ability and aggression, might be less successful in a farming or industrial society.  Selection is still happening.  It’s just happening at the mate selection and cultural selection level.  (Which is actually still natural selection, if you take a long enough view.)  Humans have just developed the ability to manipulate our environment, and hence the selection criteria.

Another interesting complication with all this is the development of birth control, which essentially allows us to indulge our reproductive instincts without actually reproducing.  That plus the cost of raising additional kids in a modern society means that the most successful people aren’t always going to produce the most offspring.  What effect this might have on evolution over the long term is hard to predict.

Of course, as Nye briefly alludes to, this assumes we won’t go through some type of Singularity in the near future, or, perhaps more likely, take control of our evolution with genetic engineering.  It might be that the era of unguided evolution on this planet is nearing its end, at least for humans.  Possibly.