Artificial intelligence is what we can do that computers can’t…yet

[Image: “There is currently no consensus on how closely...” (Photo credit: Wikipedia)]

I think I’ve mentioned before that I listen to a number of different podcasts.  One of them is Writing Excuses, a podcast about writing science fiction.  One of the recent episodes featured Nancy Fulda, who discussed how to write about AI realistically.  In the discussion, she made an observation that I thought was insightful: what we call “artificial intelligence” is basically whatever computers can’t do yet.  Once computers can do something, it ceases to be something we label artificial intelligence.

Consider that in 1930, a machine that could make decisions based on inputs would have been considered a thinking machine.  By the 1950s, when we had such machines, they were no longer considered thinking entities, but simply machines that followed detailed instructions.

In the 1960s, the idea of a computer that could beat an expert chess player would have been considered artificial intelligence.  Then in 1997, the computer Deep Blue beat Garry Kasparov, and the idea of a computer beating a human being at chess quickly got reclassified as just brute-force processing.

Likewise, the idea of a computer winning at something like Jeopardy would have been considered AI a few years ago, but no more.  With each accomplishment, each development that allowed a computer to do something only we could do, we simply stopped thinking of that accomplishment as any kind of hallmark of true artificial intelligence.

So what are some of the things we currently consider to fall under artificial intelligence that might eventually make the transition?  Pattern recognition comes to mind, although computers are constantly improving in all aspects of it.  The increasing difficulty of CAPTCHA tests is testament to that.

One of the things people often assert today is that computers can’t really understand anything, and until they do, there won’t be any true intelligence there.  But what do we mean when we say someone “understands” something?  The word “understand”, taken literally, means to stand under something.  As it’s customarily used, it means to have a thorough knowledge about something, perhaps to have knowledge of how it works in various contexts, or perhaps of its constituent parts.

In other words, to understand something is to have extensive knowledge, that is, extensive accurate data, about it.  It’s not clear to me why a sufficiently powerful computer can’t do this.  Indeed, I suspect you could already say that my laptop “understands” how to interact with the WordPress site.

Another thing I often hear is that computers aren’t conscious, that they don’t have an inner experience.  I generally have to agree that currently they aren’t and they don’t.  But I also strongly suspect that this will eventually be a matter of programming.  There are a number of theories about consciousness, the strongest that I currently know of being Michael Graziano’s Attention Schema Theory.  If something like Graziano’s theory is correct, it will only be a matter of time before someone is able to program it into a computer.

[Image: “A Venn diagram illustrating one of the weaknes...” (Photo credit: Wikipedia)]

In an attempt at finding an objective line between mindless computing and intelligence, Alan Turing, decades ago, proposed what is now commonly called the Turing Test, in which a human tries to tell the difference between a human correspondent and a machine one.  When they can’t, according to Turing, the machine should be judged intelligent.  Many people have found the idea of this test unsatisfactory, and there have been many objections.  The strongest of these, I think, is that the test really measures human-like intelligence, rather than raw intelligence.

But I think this gets at the fact that our evaluation of intelligence is intimately tangled up with how human, how much like us, we perceive an entity to be.  It may well be that we won’t consider a computer intelligent until we can sense in it a common experience, a common set of motivations, desires, and impulses.  Until it has programming similar to ours.

Getting back to Nancy’s observation, I think she’s right.  With each new development, we will re-calibrate our conception of artificial intelligence, until we run out of room to separate them and us.  Actually, in some respects, I suspect we won’t let it get to that point.  Already programmers, when designing user interfaces, are urged not to make the application they’re writing act too independently, as that tends to make users anxious.

Aside from some research projects, I think that same principle, along with perhaps some aspects of the uncanny valley effect, will work to keep artificial minds from ever being too much like us.  There’s certainly unlikely to be much of a market for a navigation system that worries about whether it will be replaced by a newer model, or for self-driving cars that find it fun to drag race.

30 thoughts on “Artificial intelligence is what we can do that computers can’t…yet”

  1. I think many AI objections are just emotional reactions by people who can’t admit that we are replaceable — or worse — that our thought processes are not as deep as we thought.

    Understanding is a key point. Why would it require consciousness? If performance is unaffected, all we did was add an unverifiable, useless criterion to what was a testable prediction.

    Awfully convenient for those who object to AI…

    1. Society tends to react with emotion when it comes to strong AI. Yet the ones that matter to the debate are those who oppose strong AI with clear and concise points. There is Sir Roger Penrose, who laid out the initial idea of quantum microtubules within the brain (later confirmed). There is Hubert Dreyfus, who is an outspoken opponent of strong AI (or, more likely, an opponent of how science is approaching strong AI). Also, there is one of the first blows to AI in general, the Halting Problem, which Alan Turing proved himself.

      While there are some who object based on emotions, they are not important, or even relevant, to the debate. Given those I named, there is still room to doubt the possibility of strong AI.

      1. Hi synthverity,

        Roger Penrose did not predict microtubules. He speculated that there was some structure in the brain that could take advantage of quantum effects to perform uncomputable operations. It was Stuart Hameroff who proposed microtubules as candidates.

        Microtubules are present in all cells, not just the brain, and form part of the structure of the cell (the cytoskeleton). There is no evidence that they have any role in information processing, and in fact the standard understanding of quantum mechanics would seem to rule it out, as decoherence would make the entanglement Penrose needs too unstable.

        Penrose’s speculation is under-motivated anyway, as on detailed analysis Godel’s theorems provide no good reason to think that human intelligence is not computational. The whole thing is based on a pretty obvious mistake in reasoning. The halting problem is not even close to a barrier to Strong AI. That just shows that there is no algorithm that can tell if an arbitrary algorithm given as input will halt. But there is no human who can perform that feat either, so what’s your point?
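
        To make the halting-problem point concrete, here is a minimal sketch of the standard diagonal argument in Python (the halts() decider is purely hypothetical, which is exactly the point):

          # Suppose, hypothetically, that a universal halting decider existed.
          def halts(program, data):
              """Hypothetical: returns True iff program(data) eventually halts.
              Stubbed only so the sketch parses; no general decider of this
              kind can actually be written."""
              raise NotImplementedError("no such decider can exist")

          def paradox(program):
              """Halts exactly when `program` run on itself does not halt."""
              if halts(program, program):
                  while True:   # loop forever if the decider says it halts
                      pass
              return            # otherwise halt immediately

          # Asking whether paradox(paradox) halts yields a contradiction:
          # whichever answer halts() gives, paradox(paradox) does the opposite.
          # So no general halts() can exist, and no human can perform that
          # feat for arbitrary programs either.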

      2. I need to clarify what I wrote. I was writing about Weak AI in general, not Strong AI. I don’t know if we can get Strong AI, nor am I sure we can ever know if we do. In fact, I wrote a series of blogs about machine consciousness, outlining these issues.

        As for Penrose, while he made some compelling arguments against some of the beliefs underlying strong AI (in “Shadows of the Mind”, IIRC), didn’t they find that his microtubules couldn’t have the quantum effects he claimed?

        1. Penrose’s arguments would also undermine weak AI. He thinks (based on Godel) that you need quantum magic to do mathematics at a human level. He suspects that the same quantum magic is also responsible for consciousness.

          As for microtubules, synthverity posted a link above that seems to lend support to Penrose’s claims. I find these results extremely dubious because I find Penrose’s arguments fail a priori; however, I have yet to see a convincing and knowledgeable analysis of these results from a microtubule skeptic.

          1. Yeah, as much as I enjoyed “Shadows of the Mind”, I found the conclusions unconvincing, and the link did nothing to change my mind.

            I’d like more details on how you find Penrose’s arguments fail a priori, if you don’t mind providing them.

            Thanks.

          2. Thanks! You hit the nail on the head with your article; why were people ever considered immune to Godel’s Incompleteness Theorem? It goes back to the (semi?) supernatural, fuzzy thinking that as conscious beings we can do “non-computational things”.

  2. Hi SAP,

    Some good points. In particular I agree with you about what understanding means, and about whether conscious computers should be possible in principle. However I do think there are some comments that could be refined somewhat.

    1) I think many people would agree that Deep Blue was AI. The disagreement is not over whether AI is possible (there have been AI papers and conferences and great research for many decades, as you know) but whether Sentient Intelligence or Strong AI or superhuman AI is possible.
    2) I’m not sure that even Sentient Intelligence is that much of a moving target. I think at one point people might have naively imagined that a machine would need to be conscious to play chess, but Deep Blue showed that all you need is lots of memory and a brute-force search algorithm (a rough sketch of that kind of search follows this list). People were wrong about what was possible without consciousness, but I don’t think the goalposts are actually moving. The real debate is whether machines can have minds similar to ours (i.e. conscious), not what is possible by machines that are unlike us.
    3) I don’t really see the relevance of the uncanny valley. We may not want our sat-navs to be intelligent, but as you say, researchers will certainly not be afraid to cross the uncanny valley, and it is they who will hopefully achieve sentient AI. Besides, end-users may not want a human-like computer for some purposes, but they may for others (e.g. robotic toys or companions).
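
    As referenced in 2) above, here is a rough, generic sketch of what a brute-force game-tree search looks like (plain minimax in Python; Deep Blue’s actual search and hardware were far more elaborate, so this is only an illustration of the idea, with the game-specific callbacks left to the caller):

      def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
          """Exhaustively search `depth` plies ahead and return the best score
          guaranteed for the side to move.  legal_moves, apply_move and
          evaluate are game-specific callbacks supplied by the caller."""
          moves = legal_moves(state)
          if depth == 0 or not moves:
              return evaluate(state)   # static score of the position
          scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                            legal_moves, apply_move, evaluate)
                    for m in moves)
          return max(scores) if maximizing else min(scores)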

    1. I agree, although on 2) I have seen a tendency for people to downplay the accomplishments once they’re achieved. Programmers and engineers will, I suspect, be the last people to accept a machine as sentient because we’re too familiar with the workings. (Although we might eventually find ourselves colliding with neuroscientists.)

      On 3), even in the case of companions, I suspect we’ll want to maintain some differences. For instance, I doubt we’d want a sex robot that might get bored with us, or decide we’re not its type.

  3. The problem with the Turing Test is that it’s designed to test human intelligence, not human stupidity. If you asked me, “In what year did Alan Turing die?” I wouldn’t be able to tell you. An expert system might know the answer to all kinds of factual questions, so it would be very easy to tell it apart from a human, even if it was actually quite poor at understanding.

    What’s my point? I’ve kind of forgotten it. Oh yes – human stupidity. AIs are very likely to outperform humans in nearly every kind of task well before they become truly intelligent in the sense that we understand. Therefore the Turing Test isn’t ever going to be a useful test.

    1. Good point. I actually think part of the goal of Turing proposing the test was to point out that this is essentially a subjective distinction. AIs will achieve sentience, consciousness, intelligence whenever we humans collectively decide that they’ve achieved it, whenever we see enough in them that we can relate to.

      Some of that will involve processing power, but I’m becoming more and more convinced it will also involve us programming them with similar instincts to ours. (And we’ll likely only do that very selectively.)

    2. Hi Steve,

      If we can make a system that can do all that a human can do, we can surely make a system that either doesn’t know when Turing died or pretends not to know. The Turing test is a test for a level of ability which is at least human. It’s not supposed to be the end goal of AI research. It’s the point at which we can say, unambiguously, that computers can be at least as capable as we are. And it’s a very useful test for marking out that goal.

      1. Hi DM, my comment wasn’t entirely facetious / off-topic. How useful would a machine be that didn’t know simple facts or pretended not to know them in order to deceive? I don’t think we will ever build such a machine, unless it is an academic project intended to pass the Turing Test.

        Human and machine intelligence are highly asymmetrical. By the time we can build a machine that can match humans in all areas, they will be outperforming us in other areas by orders of magnitude.

        Before that happens I am convinced that we will augment ourselves in unpredictable ways. Will we then be human ourselves? Would we be capable of passing the Turing Test ourselves? I bet that if Alan Turing interviewed a typical teenager of 2014, he might have trouble classifying them as human from their answers to his questions.

        1. Hi Steve,

          The point of building a machine to pass the Turing test is not to build a machine which is useful; it is to demonstrate conclusively that machines can do all that humans can do. Yes, it would be an academic project, but it would also settle an important question once and for all.

          But perhaps you have a point. By the time we make a machine capable of passing the Turing test, the test itself might be a moot point because the machines may far surpass us in every way that matters.

          I’m not so sure though. I think that passing the Turing test will be a gradual process. We’ll have machines that are harder and harder to identify as machines. I mean, even Eliza might fool some people. So you don’t necessarily need to have a wonderfully sophisticated program to pass the Turing test for crappy testers. I think the first signs of machines passing the Turing test will not necessarily mean that we have machines that are more creative, imaginative or intelligent than humans, perhaps much less so. If we want to test whether they are capable of having a reasonable level of comprehension, there is no better idea than the Turing test. Part of making these proto-intelligent machines convincing as humans will be to mask their ability to perform machinelike computation.

          TLDR: The Turing Test is not an end, it’s a benchmark.

  4. SelfAwarePatterns wrote: Programmers and engineers will, I suspect, be the last people to accept a machine as sentient because we’re too familiar with the workings.

    That’s a perceptive observation. Having worked in software development until four years ago, I have trouble seeing computers as anything more than mechanisms that carry out instructions loaded into them by people (us “self-aware patterns”). A computer is as “intelligent” as the software it runs, and the software is only “intelligent” in a metaphorical sense (meaning well-written, efficient, etc.). In a similar but not completely analogous fashion, we say that a book is intelligent, when we really mean the author is.

    There’s no doubt that software will be written that causes computers to do just about anything people can do and do it quicker and more efficiently. I haven’t been paying attention lately, but I assume that there is now software that writes other software. One day, presumably, all software will be written by other software. If I’m still around then, it will be easier for someone like me (old, prejudiced, ignorant, whatever) to think of computers (hardware/software implementations) as intelligent, since they will exhibit creativity and self-control, instead of simply following instructions (in effect, following “instincts” that their creators installed in them). They will be the source of the source (code)!

    I realize my view is premised on the idea that we humans aren’t completely programmed by our genes and our environment, which is obviously a controversial idea. What I’m trying to do is to explain why I currently have trouble accepting machines as human-like with respect to intelligence (I’m even more skeptical about machines becoming conscious, because that seems to be a step beyond being intelligent). There is clearly a normative aspect to calling someone or something “intelligent”, which is one reason we keep shifting the requirements for a machine to qualify as intelligent, whether that’s fair to the machines or not.

    1. Maybe I should have added that software will write the other software and decide what software to write! To the point that the originating software has gone beyond its own programming.

      1. I have been a software developer for eight years and I have no problem imagining both that minds are software and that a computer program could be both conscious and intelligent. In fact I believe that this is true, as the rejection of this view seems to me to be incompatible with naturalism.

        1. You’re probably right, which is why I don’t feel comfortable denying that computers can be intelligent or even conscious. Still, the only things we know about that seem to be intelligent and conscious are animals (including us), which makes me wonder whether computers will need to become more animal-like to acquire those characteristics.

    2. “I realize my view is premised on the idea that we humans aren’t completely programmed by our genes and our environment, which is obviously a controversial idea.”
      The problem is that the combination of genes and environment is so complicated that it may always be impossible to demonstrate, to someone determined to see more there, that those are the only causes. I think a key detail is that it’s not just the current environment, but all environments an entity has ever experienced.

      That said, I think a case could be made that, purely on an emergent basis, your statement is true, which I perceive is what you’re saying is needed for machine intelligence.

        1. I think if you really get down to the fundamental facts, we probably are fully programmed, but as you say, the programming is so complex and our consciousness is so limited that it’s difficult to detect. Which is why, as I said in the reply to Disagreeable Me just now, I’m not comfortable denying that machines can be intelligent and conscious. I do think, however, that the machines need to be more autonomous (closer to HAL than an adding machine) before we call them intelligent.

  5. I find Penrose’s fixation on quantum effects baffling. Whether the brain is a classical computer or a quantum computer; whether it runs on silicon or proteins or carbon nanotubes or steel ball bearings is irrelevant to how it behaves. You can abstract all that into a mathematical model and run that on any kind of digital or analogue computer you like. It is the underlying model that is core to the process, not the physical hardware on which it is instantiated.

    1. While I think Penrose is wrong for much the same reasons you do, he does have an argument that’s worth addressing. There’s no need for bafflement.

      In short, Penrose has a hunch that quantum effects are in principle impossible to simulate on a computer. He thinks there is more going on than we realise, and that the fundamental laws of physics (of which our laws of physics are only approximations) are actually uncomputable.

      This suspicion of his is quite plausible in some respects. It could be that there is a pattern we cannot easily perceive in all the apparent randomness of quantum mechanics. Indeed, if that pattern is uncomputable, it ought to be next to impossible to perceive it. Yet, thinks Penrose, nature may have unwittingly tapped into it. (One problem I have with this is that if the pattern is uncomputable it’s probably just as hard for evolution to do anything useful with it).

      But of course the most important point Penrose makes is his argument that human intelligence cannot be computational, because of what Godel’s theorems say about the impossibility of certain types of proof of which human mathematicians are manifestly capable. His reasoning is that if human intelligence cannot be computational, and we are physical beings subject to the laws of physics, then physics itself cannot be computational. Quantum mechanics is simply the best candidate for where this uncomputable physics lies hidden.

      Penrose’s mistake lies in his analysis of what Godel’s theorems imply. In fact, there is no good Godelian argument to say that human reason is not computational, and I give a detailed explanation of why this is on my blog.

      http://disagreeableme.blogspot.co.uk/2013/02/strong-ai-godel-problem.html

      1. DM, your blog about formal systems is very interesting and stimulating, but I do not feel qualified to comment on it intelligently.

        It really seems to me that Penrose is clutching at philosophical straws in an attempt to prove that human minds are somehow beyond mathematical understanding – a kind of Turing oracle. As you say in your blog, there is no need to resort to such straw-grasping. The case for uncomputability is not proven.

        As for physics being uncomputable, that is surely a real possibility. We observe that physical systems appear to obey mathematical laws, but we can’t explain why. Yet all complex systems that are composed of many constituents obey strict statistical laws, due to the law of large numbers (e.g. Boyle’s law for a gas, etc). It is certainly feasible that what we observe as strict mathematical behaviour is due to some underlying chaotic/random behaviour of collections of unimaginably large numbers of [something] at unimaginably small scales. But why we would need quantum brains in order to harness this uncomputability (if it exists at all) still escapes me.
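
        A toy simulation illustrates the law-of-large-numbers point (just random averaging, not a model of any real gas): each individual contribution is random, yet the aggregate settles toward a fixed value as the number of contributors grows, which is the kind of law-like macroscopic regularity described above.

          import random

          random.seed(0)

          # Average many independent random "micro-contributions"; the mean
          # converges toward 0.5 even though every single contribution is
          # unpredictable on its own.
          for n in (10, 1_000, 100_000, 1_000_000):
              mean = sum(random.random() for _ in range(n)) / n
              print(f"n = {n:>9,}: mean = {mean:.5f}")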
