Human level AI is always 20 years in the future

Steven Pinker highlighted this study, which tracks predictions of when human level AI (artificial intelligence) will be achieved.  According to the paper, the predictions cluster around 15-25 years out, and they have been doing so for the last 60 or so years.  The paper also notes that expert predictions have fared no better than non-expert predictions, and cluster in the same way.

None of this should be too surprising.  Most AI predictions are made by experts in computing technologies, but computer experts are not human mind experts.  Indeed, the level of expertise that exists for computing technology simply doesn’t exist yet for the human brain.  So any predictions made comparing the two should be suspect.  And the people who know the most about the brain, neuroscientists, speak in terms of a century before they’ll know as much about its workings as we know about computers.

It would be wrong to take this as evidence that human level AI is impossible.  The human mind exists in nature.  To assert that human level AI is impossible would be to assert that something can exist in nature that human technology cannot replicate.  Historically, no other prediction along these lines has been borne out, and we have no good reason to suspect the human mind will be an exception.

But, as this paper demonstrates, we have very good reasons to be skeptical of anyone who predicts that AI will be here in 20 years, or of any action they’d like us to take in relation to that prediction.

via Steven Pinker

37 thoughts on “Human level AI is always 20 years in the future”

  1. Too soon for discussion of robot rights, then?

    (Just saw a trailer for Chappie, a film on an AI from the fellow who directed District 9 and Elysium, who I think you are a fan of, yes?)

    1. It’s interesting. Given how much people are anthropomorphizing Philae the comet probe, I’m actually starting to wonder how much of an issue robot rights will be, particularly with all the cute robot movies. Although all the recent expressions of anxiety over AI from people like Elon Musk make it hard to know how it will play out.

      I’m definitely a fan of both District 9 and Elysium. You just triggered me looking up and watching the Chappie trailer. (Thank you!) My initial reaction is it seems a bit overdone on pulling the heartstrings, but Blomkamp’s track record leaves me hopeful.

  2. To assert that human level AI is impossible would be to assert that something can exist in nature that human technology cannot replicate.

    That’s exactly the point. Things exist that no technology can replicate. Humans seem to forget that they themselves are a creation and, therefore, cannot create. 🙂

    1. Thanks for stopping by.

      Your statement seems like one of religious faith, which I’m not particularly interested in contesting. But it does seem like we’ve replicated many things technologically in the last few centuries, often improving on the natural versions. Cars transport us much faster than horses. We fly through the air much faster than any bird. We’ve even tamed the lightning, piping it into our houses for lights and many other purposes. Pointing to something, like brains, and saying we will never replicate it seems unsupportable, particularly since the device you’re using to read this is already doing it in a very limited fashion.

      1. You said it all: “limited” fashion …

        Dreams about AI have existed since when, the 1950s, if I remember correctly? And what has changed during the last 60-something years in regard to AI? If anything, contemporary science can only state that the hurdles have become higher and higher.

        But maybe “replicate” simply has some other connotation in your language than in mine. 🙂

        Very best regards,
        Salva

        1. Hi Salva,
          On what’s changed in the last 60 years, computers can now beat humans at chess and at Jeopardy. They can now do many things that were once only the purview of humans. This trend will only continue.

          The people predicting AI decades ago were fairly accurate in predicting the progress in computing capacity, but they didn’t understand the human mind well enough to predict when it would be matched. We still don’t understand it well enough to make that prediction. But if you follow developments in neuroscience, it’s obvious that we understand far more today than we did 20 years ago, and will understand far more in decades to come.

          1. I completely agree with you that our understanding has reached hitherto unknown levels.

            For me, nevertheless, there exists a vast universe of difference between computing power and what is generally termed intelligence. Thus, the coining of AI as intelligence is misleading right from its very beginning.

            AI is not a scientific term, but an emotional one …, that is, IMHO. 🙂

            And many thanks, BTW, for this humble discussion thread.

    2. “AI is not a scientific term, but an emotional one …, that is, IMHO. 🙂 ”

      I agree. The real point of Turing’s test, presented in 1950, was that intelligence was a philosophical concept, and that all science could do was measure behavior. I’m comfortable that machines will eventually be able to exhibit behavior we’ll be at least tempted to regard as intelligent. Whether it will actually *be* intelligent will always hinge on our particular definition of intelligence. But given how easily people anthropomorphize things like the Philae probe, I tend to think it won’t be as difficult a hurdle as many assume it will be.

      Thanks to you as well for an interesting discussion!

      1. Well, of course it’s much, much easier to “work” with terms which appear to be known and sound familiar. It will remain a very exciting and thrilling experience to watch where this development ultimately leads. 🙂

  3. The human mind does exist in nature, just like the human eye. Yet none of our cameras record what the human eye sees. In some cases, the machine ‘sees’ more, such as infra-red light. In other cases, less, like when I could not capture the sunset the other day exactly as I saw it with my eyes.

    1. It’s interesting that much of what we appear to see with our eyes is an illusion, created by the lower-level visual cortex circuitry in our brains. If you read about what the eye actually sees, it’s astoundingly limited, with only a narrow range toward the center of our vision capable of high resolution clarity, and with a large hole in the center that we simply don’t perceive. But our brain maintains an ongoing model of what’s in the world that is updated by the signals from the constantly moving eye.

      What’s even more interesting is that much of the resolution of that model of the world is even itself an illusion. We don’t see nearly as well as we think we do, and we don’t perceive or remember a sight nearly as well as we think we do. These kinds of limitations tend to get exposed, sometimes with tragic consequences, in the contradictions of eyewitnesses in courtroom testimonies.

  4. I think that there’s a linguistic issue at play in our understanding of “artificial intelligence” – we have some idea of ourselves as specially unique coherent intentional beings, but what exactly that means is sort of ambiguous, so that every time we replicate some intellectual function in technology, it’s extremely easy to just say “Oh well, it still doesn’t do this /other/ thing we can do.” And that can go on pretty much forever.

    Even if we had machines that seemed to have intellectual growth and self-awareness, some will say that unless they start running their own biological functions, it won’t count, and if we had machines that seemed to have biological growth and sensory function, some will say that until they develop metaphysical minds, it won’t count, and then some will say that until the machines perform as well as humans then it won’t count, while others will say that if our machines perform better than humans then they’re clearly not the same, and on and on and on. And if somehow those issues work out, people will bring up pain, emotion, spontaneity, taste, probably a whole host of other things I haven’t thought of, and assert how important these elements are to “human intelligence”.

    So basically, I think that the word “intelligence” is too loaded for people, generally, to admit that we’re anything more than “close” to replicating it, just as every time we find evidence that animals can do something we previously thought only humans could do, instead of admitting we’re not special, we just clarify what humans are a little more narrowly. I think that most Western intellectual traditions – both sacred and secular – are too invested in an image of humans conquering the limits of nature (through religion/souls or through some kind of scientistic humanism) to ever admit we’re anything more than “almost there” with AI. We’ll just keep making some other facet of ourselves “essential” to true human intelligence.

    1. Well said. People will continue moving the goalposts as machines become more intelligent, so that the new degree of machine intelligence won’t be “real” intelligence.

      Personally, I’ve started disliking the term “artificial intelligence”. I think a better one is “engineered intelligence” as opposed to “evolved intelligence”. That distinction probably gives a better idea of the difference between the two.

      1. Not necessarily. The first artificial general intelligence will likely have been “evolved” in some sense. There are ways of developing software which involve setting up an environment and letting it train itself in ways not dissimilar to natural selection. This can be very effective, but with the downside that we often have very little idea how the end result actually works.

        1. I’m actually skeptical that the first AGI will be evolved. That notion seems to assume that intelligence is an inevitable consequence of evolution, but I see little evidence for it. I think the first AGI will have to be painstakingly designed. There will be evolution, but it will be the guided evolution of those designs by us as we fine tune them.

          That doesn’t mean any one person will understand the whole system. But that’s already true of any complex system. No one person really understands all aspects of any modern operating system. I doubt any one human completely understands the design of modern CPUs, which are now designed with the aid of software.

          When AGIs start designing AGIs, we do run the risk that no human will even comprehend the principles of the new designs. That could be a danger in the unintended consequences department, if we don’t put in fail-safes.

        2. Could be, although assuming that feels like giving up. But if so, I’d agree that we should be leery of whatever results. If we set up an environment with selection pressure for intelligence, we’ll get an evolved intelligence, a survival machine, with its own agenda.

          Of course, it’s not clear to me that we even know how to set up an environment to evolve intelligence. It seems like developing such an understanding would give us insights into how to engineer one.

          1. Artificial Neural Networks are in some respects similar to evolution, but what we allow to mutate is the strength and organisation of neural connections, and what we select for is problem solving. We know how to set up this environment. We can achieve great results. We have no idea (or very little understanding) of how the results are arrived at. The same goes for genetic algorithms. It is not so far-fetched to me that a genetic algorithm or artificial neural network or similar could become the first AGI and that we might have no idea how it works.
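
             As a rough illustration of the loop being described — mutate connection strengths, keep whatever solves the problem best — here is a minimal Python sketch. The network size, mutation scale, population size, and the XOR task are all invented purely for illustration; real neuroevolution systems are far more elaborate.

             ```python
             # Minimal neuroevolution sketch: mutate the connection weights of a tiny
             # fixed-topology network and select for performance on a toy problem (XOR).
             import math
             import random

             XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
             N_WEIGHTS = 9  # 4 input-to-hidden weights + 2 hidden biases + 2 output weights + 1 output bias

             def forward(weights, x):
                 """2-input, 2-hidden-unit, 1-output network."""
                 w = weights
                 h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
                 h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
                 out = w[6] * h1 + w[7] * h2 + w[8]
                 return 1 / (1 + math.exp(-out))

             def fitness(weights):
                 """Negative squared error over the XOR cases (higher is better)."""
                 return -sum((forward(weights, x) - y) ** 2 for x, y in XOR_CASES)

             def mutate(weights, scale=0.3):
                 """Perturb each connection strength with a little Gaussian noise."""
                 return [w + random.gauss(0, scale) for w in weights]

             def evolve(pop_size=50, generations=300):
                 population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                               for _ in range(pop_size)]
                 for _ in range(generations):
                     population.sort(key=fitness, reverse=True)
                     survivors = population[: pop_size // 5]            # selection
                     population = survivors + [mutate(random.choice(survivors))
                                               for _ in range(pop_size - len(survivors))]
                 return max(population, key=fitness)

             best = evolve()
             for x, y in XOR_CASES:
                 print(x, y, round(forward(best, x), 2))
             ```

             The end result is just a list of nine numbers that happens to solve the problem; nothing in the process tells you why those particular numbers work, which is the opacity point above.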

          2. Having thought some more about my evolution comment, I find I’m returning to earlier skepticism. We may be able to evolve certain functions in tightly constrained environments, but evolving general intelligence, at least of the type that would have any independent agenda, strikes me as infinitesimally improbable. At least within any time frame that we’d be willing to wait.

            Although I suppose if the grandest predictions of quantum computing ever come to pass, that time frame could be shrunk considerably. But the more I read about actual quantum computing, the less weight I’m inclined to give to those grand predictions.

    2. I don’t think the new super intelligent agents are a challenge to humans, or a challenge to nature by humans. They’re a step in evolution in which humans had a role: creating new super intelligent beings.

  5. Fusion power has also been “20 years away” for many decades!

    So there are two kinds of AI: “hard AI” and “soft AI”. The latter are non-conscious expert systems, and those already exist. The former refers to machines that would pass the Turing test and any other test of full consciousness we’d expect a human to pass.

    You’re obviously talking about hard AI, and my gut sense is that it will never happen. Now, I lean towards dualism (and spirituality), so I think minds possibly have souls, and that is something a machine can never replicate. I don’t know if we’re endowed by our creator with “inalienable rights,” but perhaps we are endowed with the spark of True Life.

    On the flip side, building a “brain machine” is merely an engineering problem (just like cracking the sound “barrier” was). As you point out, we know brains exist, so we can build something that functions just like one of those.

    The question is, what happens when we turn it on?

    Is building a mind like building an FTL spaceship? Not possible, even in principle? Or is it more like the sound barrier — just an engineering problem? I believe it to be the former, but I may yet be proven wrong!

    1. Ah, I learned something new about you with this comment. You lean toward dualism (I assume substance dualism). I can definitely see a substance dualist seeing AGI (artificial general intelligence) as being impossible. For such a person, an AGI would be a philosophical zombie.

      I’m not a substance dualist. I cover why in this post: https://selfawarepatterns.com/2013/12/16/the-mind-is-the-brain-and-why-thats-good/

      Of course, even if substance dualism is true, that doesn’t mean that a mind might not be “born” when the right substrate is created.

      1. My sense of dualism extends to ontological and scientific dualism, which tends to ground my sense of mind-body dualism. Cartesian dualism is just a flavor — as you know, the basic idea goes back at least to Aristotle and Plato.

        Your (uncommentable 🙂 ) post raises the primary counter-argument. I guess I don’t see any problem with the effects of drugs and damage (or sleep). Dualism to me implies both are necessary, so what affects one affects the other. This does work both ways — your mood and beliefs can directly affect your physical body.

        Aristotelian dualism is almost a form of emergent materialism, so there’s actually a lot of ground here. I see (ontological) dualism at all levels of life (making Yin-Yang a key philosophical touchstone for me). It’s huge in quantum physics. I wonder if GR and QM could be a dualism. It seems very Occam-y to me that there is some form of dualism regarding mind and body.

        As you say, substance dualism alone doesn’t prohibit creating a brain capable of supporting a mind. That’s kind of where my spiritual leaning kicks in, so I still think it’s a no-go (but we will eventually find out the truth of the matter).

        The coolest description I’ve read recently (admittedly in an SF novel) describes mind as an “incredibly complex standing wave” in the skull. The firing of the brain synapses generates the wave, and since all brains are wired slightly differently, all waves — all minds — are slightly different. But since brains have much structure in common, minds are somewhat alike.

        In this account, mind is an emergent material property, but part of the trick is replicating the physical environment that supports the standing wave. Biology may, or may not, be required, but something the size and shape of a skull might be (basically something like a bowling ball).

        1. On commenting, per your advice, I did extend the comment period to 60 days. My spam tab volume immediately doubled 🙂 but not to anything unmanageable. Unfortunately, the post I linked to was from last year.

          It was pointed out to me by Massimo Pigliucci that my belief in the plausibility of mind uploading was a type of dualism. I agreed that it is, but an engineered form, not one inherent in biological brains. It’s a form of dualism that can exist for any object or system in nature whose state can be recorded. I don’t know if this form of dualism matches property dualism or any of the other philosophical types, but I definitely agree there are different flavors.

          If any form of spiritual dualism is true, then we should definitely hit a brick wall eventually with AI, although I’m personally not expecting it. It will annoy me if we don’t find out one way or another in my lifetime.

          1. Well, there would be a difference between just recording a system and storing its essence such that it could be restored and functional on some other “platform.” But it’s true that, if that essence can be transferred, that suggests a form of dualism.

            I think the Kurzweil idea remains monistic in that “mind” really is just the calculation. The platform is irrelevant, so there’s not a strong sense of dualism. Per Kurzweil, you could, in theory, simulate a mind with pen and paper.

            A really, really, really slow mind.

            Which is why I think Kurzweil is probably wrong.

    2. I actually think Kurzweil is right on the mind being computational. (Very broadly speaking. The brain is not a digital computer, but it does take input, store information, and produce outputs.)

      I do think Kurzweil’s confident predictions of singularity are simplistic wishful thinking.

      1. That’s a point we’ll have to disagree on. I don’t believe mind reduces to computation. Roger Penrose makes a good argument for this in his The Emperor’s New Mind. Basically, the Turing Halting Problem shows that some things cannot be computed. There are also numbers you can specify precisely but still not calculate. It all suggests to me there is more to consciousness than algorithm.

        But that’s the key pivot point, isn’t it? Whether human consciousness can occur in anything other than a human.

        1. It does sound like we’ll have to agree to disagree, which is totally fine. I’m afraid I think Penrose is a good example of why physicists should leave consciousness to neuroscientists.

          The Turing Halting Problem doesn’t mean what many people assume it means. It only means that there is no general procedure that can determine ahead of time, for every program and input, whether it will ever halt. Of course, one way or another, all programs are eventually halted, but then so are all minds. Other people often talk about Gödel’s incompleteness theorem, which I have to admit to not understanding, but I’ve been assured by multiple mathematicians that even that doesn’t mean what most people think it means.

          It is possible that human consciousness is tied to a biological substrate, that it is of such a fragile nature that it can’t be run in any other environment. I doubt that, but I don’t see any way to rule it out, at least until it is falsified. Even if it is true, I can’t see that it precludes us from eventually building replacement biological substrates.

          1. I hope I know what the THP means because I’ve got a post about it in my Drafts folder! 🙂

            Here’s an elevator version: The THP is a mechanized version of the Liar Paradox. It’s a computer version of the assertion: “This statement is false.” It’s part of the class of self-referential, or recursive, things, and it shows how analysis can’t provide an answer in all situations (particularly those).

            In a bit more detail: Consider a naive division algorithm that divides two input numbers and returns a decimal result. Give it 2 and 8, and it halts and returns 0.25. But give it 2 and 6, and it generates 0.333… forever. It never halts and returns an answer. (A less naive program would stop after X digits, but this one is dumb.)
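
            For what it’s worth, that dumb divider might look something like this in Python (a made-up illustration of the idea, not anything from the original comment):

            ```python
            def naive_divide(numerator, denominator, base=10):
                """Digit-by-digit long division with no cutoff: it halts only when the
                remainder reaches zero, so 2 / 8 terminates but 2 / 6 loops forever."""
                whole, remainder = divmod(numerator, denominator)
                digits = []
                while remainder != 0:
                    remainder *= base
                    digits.append(remainder // denominator)
                    remainder %= denominator
                return whole, digits   # naive_divide(2, 8) -> (0, [2, 5]), i.e. 0.25
            ```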

            So now assume an “Oracle” algorithm capable of determining whether any given input (another program and its input) will halt or not. The Oracle, for all possible inputs, can tell us whether the division program (or any program) will stop.

            But if the Oracle is possible, I can use it to build a Deceiver program that uses the Oracle’s output to reverse the logic (creating the Liar Paradox). That demonstrates the Oracle is not possible.
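
            In code form, the trick reads roughly like this (oracle is the hypothetical halting-decider whose impossibility is being demonstrated; it is not a real function):

            ```python
            # Suppose, hypothetically, that oracle(program, data) always returned True
            # when program(data) eventually halts and False when it runs forever.

            def deceiver(program):
                """Do the opposite of whatever the Oracle predicts for program(program)."""
                if oracle(program, program):
                    while True:       # the Oracle says it halts, so loop forever
                        pass
                else:
                    return            # the Oracle says it never halts, so halt at once

            # Now ask about deceiver(deceiver): whichever answer the Oracle gives,
            # deceiver does the opposite, so no such general-purpose Oracle can exist.
            ```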

            Gödel’s incompleteness is more abstract, but related. (In particular it’s another version of the Liar Paradox.)

            The two theorems are about axiomatic systems capable of expressing arithmetic or set theory. The theory of the Natural numbers, for example. Gödel said such systems cannot fully describe themselves. There will always be things in the system that seem “true” but which can never be proved.

            He further said that any such system that claims to be able to prove its own consistency is lying.

            (The technical terms are “consistent” and “complete.” Consistency means it’s impossible to prove 1=0. Completeness means your axioms and generated theorems cover all possible true statements of the system. Without consistency, your system is useless, but no consistent system can ever be complete.)

            ((The Liar Paradox part comes into play as a formally expressed theorem along the lines of, “This theorem cannot be proven.” If it can be proven, the system is inconsistent. But if it can’t be proven, then it’s true after all.))
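
            Stated compactly, in the standard textbook form (for any consistent, effectively axiomatized theory T that includes basic arithmetic):

            ```latex
            % First incompleteness theorem: some sentence G_T can be neither proved nor refuted in T.
            \exists\, G_T : \quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
            % Second incompleteness theorem: T cannot prove its own consistency.
            T \nvdash \mathrm{Con}(T)
            ```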

            Turing showed there are limits to what algorithms can do. Gödel showed there are limits to mathematics. (Heisenberg showed there are limits to physical reality!) In terms of our other discussion, these are parts of material reality we may never access (like never going faster than light). But this is all monism.

          2. Hi Wyrd Smythe,

            Those are good descriptions of the THP and Gödel, but I don’t agree with you that they show that the mind is not a computation or even give much reason to suspect that. To achieve that, you would need to be able to show that there are things that human minds can do that no computation could ever do, and this has not been established.

            Penrose’s argument is based on faith that any mathematician can in principle prove any theorem or, to take the THP version, that any computer scientist can in principle determine whether any computation will ever halt. I don’t think this is the case. I think for every mathematician there is a set of theorems which that mathematician can prove and a set of theorems they cannot. The same is true for the overall cohort of all mathematicians that will ever live.

            More thoughts here.

            http://disagreeableme.blogspot.co.uk/2013/02/strong-ai-godel-problem.html

    3. Thanks for the explanation. I knew something about the Halting Problem, but didn’t realize that Gödel’s theorem was an instance of the Liar Paradox.

      It’s interesting that Turing, despite showing these limitations, was still optimistic about “thinking” machines, what we would call AI today. He didn’t see things like the Halting Problem or Gödel’s theorem as roadblocks.

      I have to say that I don’t see them either. The question is whether a human mind could do the things that we’re deeming impossible for a computer to do.

  6. This is in response to Michelle above, but I also agree with Selfawarepattern’s post and the newer post on AI.

    I’ve long thought similar things about the Turing Test. It seems like we are searching for a computer to be able to reproduce human language (the vagaries, emotional cues, parameters of human knowledge) and not necessarily intelligent communication. I would agree that we have not created the comprehensiveness and spontaneity in machines that humans are capable of, at least not to my knowledge, but what exactly the Turing Test is testing for is a little vague and idiosyncratic at times. Maybe that is a warning that machines will likely communicate intelligently before they communicate as unmistakably human; at least, that seems intuitive to me.

    There is a great distance between creating a human-like intelligence and creating intelligence in general (as long as we do not define intelligence as “human intelligence”). And such differences are why SAP is saying machines will be more “engineered intelligence” and not necessarily like humans, and thus will have slightly different motivations and risk factors. Anyways, I concur with such points.
