Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen has an interesting article at Vanity Fair that looks at the debate among technologists about the singularity: Enthusiasts and Skeptics Debate Artificial Intelligence | Vanity Fair.

Machines performing unimaginably complicated calculations unimaginably fast—that’s what computers have always done. Computers were called “electronic brains” from the beginning. But the great open question is whether a computer really will be able to do all that your brain can do, and more. Two decades from now, will artificial intelligence—A.I.—go from soft to hard, equaling and then quickly surpassing the human kind? And if the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

The article discusses figures like Ray Kurzweil and Peter Diamandis, who strongly believe the singularity is coming and are optimistic about it, and skeptics like Jaron Lanier and Mitch Kapor, who doubt those claims.

Personally, I put myself somewhere in the middle. I’m skeptical that there’s going to be a hard-takeoff singularity in the next 20-30 years, an event where technological progress runs away into a technological rapture of the nerds. But I do think many of the claims singularitarians make may come true eventually; it’s just that “eventually” might be centuries down the road.

My skepticism comes from two broad observations. The first is that I’m not completely convinced that Moore’s Law, the observation by Intel co-founder Gordon Moore that the number of transistors on semiconductor chips doubles every two years, will continue indefinitely into the future.
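As a toy sketch of what that doubling implies when extrapolated (the 1971 Intel 4004 starting point is historical fact; the indefinite extrapolation is exactly the assumption in question):

```python
# Toy extrapolation of Moore's Law: transistor counts doubling every two years.
def moores_law(target_year, start_year=1971, start_count=2300):
    """Project transistor counts, assuming one doubling per two years.

    The Intel 4004's ~2,300 transistors (1971) anchor the curve.
    """
    doublings = (target_year - start_year) / 2
    return start_count * 2 ** doublings

for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{moores_law(year):,.0f} transistors per chip")
# The 2011 figure (~2.4 billion) roughly matches real chips of that era;
# whether 2031's (~2.5 trillion) ever arrives is the open question.
```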

No one knows exactly when we’ll hit the limits of semiconductor technology, but logic-gate sizes are getting closer to the size of atoms, often understood to be a fundamental limit.  It’s an article of faith among staunch singularitarians that some new technology, such as quantum or optical computing, will step in to continue the progress, but I can’t see any guarantee of that.  Of course, there’s no guarantee that one of those new technologies won’t soar into even higher exponential progress, but beating our chests and proclaiming trust in its eventuality is more emotion than rationality.
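To put rough numbers on how close we are to atomic scales (a back-of-envelope sketch: the 0.543 nm silicon lattice constant is a physical fact, but the 5 nm feature size is only an illustrative assumption, since modern process "node" names no longer track physical gate length):

```python
# Back-of-envelope: how many silicon lattice cells span a gate-sized feature?
SILICON_LATTICE_NM = 0.543  # lattice constant of crystalline silicon (fact)
feature_nm = 5.0            # hypothetical gate-scale feature (assumption)

cells_across = feature_nm / SILICON_LATTICE_NM
print(f"A {feature_nm} nm feature spans only ~{cells_across:.0f} lattice cells")
# ~9 lattice cells: only a few more halvings before we're counting atoms.
```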

The second observation is that most of the people predicting a technological singularity understand computing technology, but not neuroscience.  In other words, they understand one side of the equation, but not the other.  The other day I linked to a study showing that predictions of hard AI since Turing have been consistently overoptimistic, not necessarily about the technology itself, but about where the technology would have to be to function anything like an organic brain (human or not).

Now, that being said, I do think many of the skeptics are too skeptical.  Many of them insist that we’ll never be able to build a machine that can match the human brain, that we’ll never understand it well enough to do so.  I can’t see any real basis for that level of pessimism.

In my experience, when someone claims that X will be forever unknowable, what they’re really saying, explicitly or implicitly, is that we shouldn’t ever have that knowledge.  I couldn’t disagree more with that kind of thinking.  Maybe there will be areas of reality we’ll never be able to understand, but I certainly hope the people who a priori conclude that about those areas never get the ability to prevent others from trying.

There are a lot of other things singularitarians assert, such as the whole universe being converted into “computronium” or beings able to completely defy our current understanding of physics.  I think these types of predictions are simply unhinged speculation.  Sure, we can’t rule them out, but having any level of confidence in them strikes me as silly.

None of this is to say that there won’t be amazing progress with AI in the next few years.  We’ll see computers able to do things that will surprise and delight us, and make many people nervous.  In other words, the current trends will continue.  I think we’ll eventually get there, and I’d love it if it happened in my lifetime, but I suspect it will be a much longer and harder slog than most of the singularity advocates imagine.

10 thoughts on “Enthusiasts and Skeptics Debate Artificial Intelligence”

  1. “Many [skeptics] insist that we’ll never be able to build a machine that can match the human brain…”

    They’re wrong. That much is just an engineering problem. Building a mind may be another story.

    It’s a bit like breaking the sound barrier versus breaking the light barrier. We always knew that things could go faster than sound (tip of a whip, for example). So building machines to do it is just an engineering problem. But physics says breaking the light barrier is impossible in principle, so it’s more than an engineering problem — new physics would be required. Our understanding of reality would have to change. (Which is certainly possible.)

    We know brain “machines” exist, so we will eventually make (or grow?) one ourselves. I’ve wondered if such a brain might need to be “born” tabula rasa and then educated and trained just as we are.

    “Maybe there will be areas of reality we’ll never be able to understand,…”

    Don’t we already know that to be the case due to Turing, Gödel and Heisenberg? One might even toss in NP-complete computing problems. I’m quite comfortable with the idea that some areas of reality may be, even in principle, unknowable. (Turing’s limit can even be sketched in a few lines of code; see below.)
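    A minimal sketch of that Turing limit, assuming a hypothetical halts() oracle (the whole point of the proof is that no such function can exist):

    ```python
    # Sketch of Turing's halting-problem argument. `halts` is hypothetical:
    # the theorem says no correct, always-terminating version can be written.
    def halts(program, argument) -> bool:
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("No such oracle can exist")

    def paradox(program):
        if halts(program, program):   # oracle says it halts...
            while True:               # ...so loop forever,
                pass
        return "halted"               # otherwise halt immediately.

    # paradox(paradox) halts if and only if it doesn't halt: a contradiction,
    # so `halts` cannot exist as a total, correct function.
    ```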

    We can agree that, whatever the case does turn out to be, we’re a long ways away from it now. I’m reminded of how proponents of fusion power have been claiming it’s “20 years away” for a lot longer than 20 years.

    As a vaguely related aside, yesterday I re-watched (for the umpteenth time) Desk Set, which stars the wonderful Spencer Tracy and Katharine Hepburn. It came out in 1957, and watching it this time I was struck by how it may be the first popular movie in which a computer is part of the “cast” and which features the first “computer nerd” (Tracy as a computer nerd… that man had range… his Mr. Hyde was done with almost no special makeup).

    1. Agreed on the distinction between engineering problems and understanding-of-reality problems.

      On Gödel and the rest: for limitations in mathematics, yes. But based on the remarks of mathematicians, I’m not necessarily convinced those translate into real-world epistemological limitations.

      I once saw the ending of Desk Set and was intrigued; I was interested in how an old movie like that portrayed computing technology, and I wished I had seen the whole thing. I just looked it up, but no one streams it for free 😦

      1. Heisenberg would seem to translate to real world limits. A key point here is that these are not issues of epistemology, but issues of ontology. Heisenberg, especially, isn’t about what’s possible to know, but a real limit on reality. For me all three are significant in demonstrating the limits of what we can measure, compute or analyze mathematically. (I actually find those limits rather comforting! 🙂 )

        What might get really interesting is if any of these limits turn out to apply to the phenomena of human consciousness. Like a secure quantum connection, it could turn out full consciousness can’t be tapped and, hence, never downloaded.

        For a movie done in 1957, Desk Set is surprisingly accurate about computers (for a popular comedy movie). But they had some assistance from a small company: International Business Machines.

        1. Whenever someone asserts to me that an issue is one of ontology rather than epistemology, I’m tempted to ask, how can we know that? Do we have any other way to access ontology except through epistemic means?

          Again though, I am open to areas that might be unknowable, although I hope people never stop trying to know them.

          Thanks for the info on Desk Set. I might have to rent it at some point.

          1. It’s well worth the rental.

            If one believes what Heisenberg asserts, the matter is ontological: it references reality. Position and momentum “really are” exclusive pairs. The limit is as real as the light-speed limit.

            Of course, new physics could re-write the book, but as we understand the situation, the limits are real. The way we know (or presume) that is via the math and physics involved.

            Likewise, Turing is talking about real limits to computation, and Gödel is talking about the limits of math. For that matter, Cantor (whose diagonalization trick underlies both the Turing and Gödel proofs) also asserts a real limit regarding uncountable infinities and the real numbers; a finite version of the diagonal trick is sketched below.
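            As a toy illustration (a finite sketch of an infinite argument): flip the diagonal of any listing of binary sequences and you get a sequence the listing cannot contain.

            ```python
            # Cantor's diagonal trick, finitely: the flipped diagonal differs
            # from the n-th row at position n, so it appears nowhere in the list.
            table = ["0000", "1111", "0101", "1010"]
            diagonal_flip = "".join("1" if row[i] == "0" else "0"
                                    for i, row in enumerate(table))
            print(diagonal_flip)  # "1011": differs from every row at its own index
            ```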

            But, totally agree, we shouldn’t stop trying! You never know when someone will find new physics or even just a new idea. I have this abiding belief that our model of QM will someday be viewed the same way we view epicycles — as a nice try. 🙂

            (I read a couple of articles recently about abolishing the concept of spacetime and replacing it with some form of event-and-relationship scheme. We almost seem to be in a sort of dead end, with no real discoveries in a long, long time.)

  2. We’re already feeling the end of Moore’s Law, I think, with the shift towards greater parallelism, both within PCs and in the rise of cloud computing (mega-parallelism in server farms connected to devices by high-speed broadband and Wi-Fi).

  3. The odd thing to consider about artificial intelligence is whether we could differentiate between actions based on an insanely vast store of data-driven options and actions based on an artificial, ego-driven intellect. As an outsider watching the actions taken by an artificial construct, how are you going to know how the decision was made?

      1. Ah yes, that’s what I was getting at, but I wasn’t even aware the question went that deep. Is there a real difference? And if there is a difference beyond what’s stated above, which one would be superior in a scenario of competitive survival?

        1. In capabilities, it’s not hard to imagine that an AI will be superior. Computers are already superior at a number of tasks.

          But the real question (one that might provide an answer to your original question) is: why would we think an AI would care about its own survival? My current laptop doesn’t care if I replace it with a newer model. Why should it start caring if it had orders of magnitude more storage and processing power? At least, why would it until and unless we programmed it to?
