Is the singularity right around the corner?

Schematic Timeline of Information and Replicators in the Biosphere: major evolutionary transitions in information processing.
Image credit: Myworkforwiki via Wikipedia

You’ve probably heard the narrative before.  At some point, we will invent an artificial intelligence that is more intelligent than we are.  That superhuman intelligence will then have the capability either to build an improved version of itself or to engineer upgrades to its own intelligence.  This sets off a process in which the system upgrades itself, uses its greater intelligence to come up with new ways to enhance itself, and then upgrades itself again, looping in a rapid runaway process: an intelligence explosion.

Given that we only have human-level intelligence, we have no ability to predict what happens next, which is why Vernor Vinge coined the phrase “the technological singularity” in 1993.  The “singularity” part of the label refers to singularities in math and science, points at which existing theories or frameworks break down.  Vinge predicted that this would happen “within 30 years” and would mark the “end of the human era.”

Despite our purported inability to make predictions, some people nevertheless make predictions about what happens next.  Where they go with it depends on whether they’re a pessimist or an optimist.  The pessimist doesn’t imagine things turning out very well for humanity.  At best, we might hope to hang around as pets.  At worst, the machines might either accidentally or intentionally wipe us out.

Most people who get excited about the singularity fall into the optimist camp.  They see it being a major boon for humanity.  The superhuman intelligences will provide the technology to upload ourselves into virtual environments, providing immortality and heaven on Earth.  We will be taken along on the intelligence explosion ride, ultimately resulting, according to Ray Kurzweil, in the universe “waking up.”  This quasi-religious vision has been called “the rapture of the nerds.”

The modern singularity sentiment is that it will happen sometime in the 2040s, in other words, in about 20-30 years.  Note however that Vinge’s original essay was written in 1993, when he said it would happen in about 30 years, a point that we’re rapidly approaching.

(Before going any further, I can’t resist pointing out that it’s 2019, the year when the original Blade Runner happens!  Where is my flying car?  My off world colonies?  My sexy replicant administrative assistant?)

Human level artificial intelligence is almost always promised to be 20 years in the future.  It’s been 20 years in the future since the 1950s.  (In this way, it’s similar to fusion power and human exploration of Mars, both of which have also been 20 years in the future for the last several decades.)  Obviously all the optimistic predictions in previous decades were wrong.  Is there any reason to think that today’s predictions are any more accurate?

One reason frequently cited for the predictions is the ever-increasing power of computer processing chips.  Known as Moore’s Law, the trend of increasing computational power was first noted by Gordon Moore in the 1960s.  What Moore actually observed was the doubling of the number of transistors on an integrated circuit chip over a fixed period (originally one year, later revised to every two years).

It’s important to understand that Moore never saw this as an open ended proposition.  From the beginning, it was understood that eventually fundamental barriers would get in the way and the “law” would end.  In fact, Moore’s Law in recent years has started sputtering.  Progress has slowed and may halt completely between 2020 and 2025 after transistor features have been scaled down to 7 nanometers, below which quantum tunneling and other issues are expected to make further miniaturization infeasible, at least with doped silicon.
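
(As a rough back-of-envelope sketch of what that doubling implies, here’s a bit of Python; the 1971 baseline is the oft-cited Intel 4004 figure of roughly 2,300 transistors, and all numbers are illustrative rather than authoritative.)

# Moore's Law as a compounding process (illustrative figures only).
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project transistor count assuming a doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2019):
    print(year, f"{transistors(year):,.0f}")

# By 2019 the naive projection is in the tens of billions of transistors,
# roughly where the largest real chips are, which is why the trend looked
# like a law of nature for so long. The same arithmetic also shows why it
# can't continue indefinitely: a few more decades of doubling would demand
# features smaller than individual silicon atoms.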

Undeterred, Kurzweil and other singularity predictors express faith that some new technology will step in to keep things moving, whether it be new materials (such as graphene) or new paradigms (neuromorphic computing, quantum computing, etc.).  But any prediction on the rate of progress after Moore’s Law peters out is based more on faith than science or engineering.

It’s worth noting that achieving human level intelligence in a system is more than just a capacity and performance issue.  We won’t keep adding performance and have the machine “wake up.”  Every advance in AI so far has required meticulous and extensive work by designers.  There’s not currently any reason to suppose that will change.

AI research got out of its “winter” period in the 90s when it started focusing on narrow, relatively practical solutions rather than the quest to build a mind.  The achievements we see in the press continue to be along those lines.  The reason is that engineers understand these problems and have some idea how to tackle them.  They aren’t easy by any stretch, but they are achievable.

But building a mind is unlikely to happen until we understand how the natural versions work.  I often write about neuroscience and our growing understanding of the brain.  We have a broad but very blurry idea of how it works, with detailed knowledge on a few regions.  But that knowledge is nowhere near the point where someone could use it to construct a technological version.  If you talk to a typical neuroscientist, they will tell you that level of understanding is probably at least a century away.

To be clear, all the evidence is that the mind is a physical system that operates according to the laws of physics.  I see no good reason to suppose that a technological version of it can’t be built…eventually.  But predictions that it will happen in 20-30 years seem like overly optimistic speculation, very similar to the predictions people have been making for 70 years.  It could happen, but confident assertions that it will happen strike me as snake oil.

What about superhuman intelligence?  Again, there’s no reason to suppose that human brains are the pinnacle of possible intelligence.  On the other hand, there’s nothing in nature demonstrating intelligence orders of magnitude greater than humans.  We don’t have an extant example to prove it can happen.

It might be that achieving the computational complexity and capacity of a human brain requires inevitable trade offs that put limits on just how intelligent such a system can be.  Maybe, due to the laws of physics, squeezing hundreds of terabytes of information into a compact, massively parallel processing framework that runs on 20 watts of power and produces a flexible intelligence requires slower performance and water cooled operation (aka wetware).  Or there may be alternate ways to achieve the same functionality, but they come with their own trade offs.

In many ways, the belief in god-like superhuman AIs is an updated version of a notion humanity has entertained for tens of thousands of years, likely since our beginnings: that there are powerful conscious forces running the world.  This new version has us actually creating the gods, but the resulting relationship is the same, particularly the part where they come in and solve all our problems.

My own view is that we will eventually have AGI (artificial general intelligence) and that it may very well exceed us in intelligence, but the runaway process envisioned by singularity enthusiasts will probably be limited by logistical realities and design constraints and trade offs we can’t currently see.  While AGI is progressing, we will also be enhancing our own performance and integrating with the technology.  Eventually biological engineering and artificial intelligence will converge, blurring the lines between engineered and evolved intelligence.

But it’s unlikely to come in some hard take off singularity, and it’s unlikely to happen in the next few decades.  AGI and mind uploading are technologies that likely won’t come to fruition until several decades down the road, possibly not for centuries.

I totally understand why people want it to happen in a near time frame.  No one wants to be in one of the last mortal generations.  But I fear the best we can hope for in our lifetime is that someone figures out a way to save our current brain state.  The “rapture of the nerds” is probably wishful thinking.

Unless of course I’m missing something.  Are there reasons for optimism that I’ve overlooked?

58 thoughts on “Is the singularity right around the corner?”

  1. Civilization seems inherently unstable. What we have was (and still is) built upon coerced labor so that the elites can have time to do art and science and government, etc. With the population burgeoning, the environment near collapse in any number of ways (climate change, oceanic biosphere collapse, loss of biodiversity in foodstuffs), and cultures still based upon personal gain and wealth accumulation, I suspect that there is a race between being able to build a savior, an AI we can consult for solutions to problems, and the collapse of our ability to build such things in the form of social collapse, culture collapse, etc.

    Ironically, I have always been optimistic. But we have painted ourselves into a corner and we are not learning from our mistakes as fast as we need to, so the writing is, indeed, on the wall.


    1. I’m also actually a cautious optimist about many things. There’s no guarantee we won’t destroy ourselves, or mess up bad enough that global civilization substantially regresses. Anthropologists have documented plenty of societies where it did happen, civilizations that messed up their ecology bad enough or simply suffered a severe enough change in climate that their standard of living, population, and overall structure collapsed or regressed. Progress isn’t guaranteed.

      Still, I find Steven Pinker’s statistics convincing. We are making progress on many fronts. The question is how bad things will have to get before we address the major environmental issues. I’m pretty confident that we will eventually, but the effects are probably going to have to be a lot more visceral to the average person before it happens.


  2. An excellent blog post, balanced and well-reasoned. I recently read Professor Yuval Harari’s “Homo Deus”, which probably leans more towards the bio-technological singularity fulfillment end of the spectrum, but also well-reasoned and very persuasive. I think that if you factor into the equation Chaos Theory and cusps, there could be hidden points along our linear paths of progress which catapult us off our familiar plane of existence at exponential speeds. Anyway, I enjoyed your post and look forward to reading more of them.


    1. I also read “Homo Deus” recently, and was kind of surprised by his optimism. But Harari has no technology background, he is only taking Silicon Valley PR at face value. He is a historian though, and in many ways, we live in unpredictable times, which is one way to interpret the “Singularity”.

      I also read Ray Kurzweil’s “The Singularity is near” a month ago. He does have a technical background, and he is making quite specific predictions about what will be happening next. For example, we are supposed to have perfected virtual reality by the end of this decade, if I remember correctly.


  3. I agree it will probably take around a century (order of magnitude) to understand how the brain works well enough to make a decent AI model of it. But there are other potential routes to general-competence AI, including evolutionary algorithms. Evolutionary algorithms are how the brain itself was built, after all. That took billions of years, but the selection pressure wasn’t aimed at intelligence per se. And the rates of mutation and hybridization were set with no regard for long term evolutionary progress. By engineering those parameters to get a particular result, human-supervised evolutionary computation might be able to trim many orders of magnitude off the required number of generations to get a result. And of course each generation could take far less time than a generation in biological evolution.

    It’s partly a brute force technique, so the prospects for this avenue partly depend on how steeply Moore’s Law tails off. I do think it’s tailing off, but there are some additional aspects to explore, like the transition from 2D to 3D electronics packages, and quantum computing.

    That’s just one way to build AI without copying biological brains extensively.
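
    (To make the kind of loop I have in mind concrete, here’s a bare-bones sketch in Python; the bit-string “genome” and the fitness target are toy placeholders, and a real attempt would need a vastly richer genome, environment, and selection scheme.)

    import random

    # Minimal evolutionary loop: mutate, select, repeat. The "genome" is a
    # toy bit string and the fitness target is arbitrary.
    GENOME_LEN = 32
    TARGET = [1] * GENOME_LEN            # stand-in for "the behavior we want"

    def fitness(genome):
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(100)]

    for generation in range(200):
        # Keep the fittest half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(50)]

    best = max(population, key=fitness)
    print("best fitness:", fitness(best), "out of", GENOME_LEN)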

    I certainly hope the achievement of better-than-human AI is a slow process, because that will give us some barely possible chance of getting a handle on the AI before it’s a done deal. A chance which we will probably squander, if history is any guide. I mean, look at governments and corporations. Each of those two innovations took a very long time to flower, but human beings are still in thrall to these more-powerful entities, to a disturbing extent. And when powerful AI is developed, it will probably be owned by governments and corporations.


    1. I’m actually somewhat skeptical that the evolutionary approach would work. If we knew for sure what adaptive pressures led to our intelligence, it might be feasible. But given how seldom natural selection has focused on intelligence as a solution, I suspect we’d just get something that was very good at surviving in whatever environment we created, not necessarily anything intelligent.

      And I think it’s a dangerous approach. Whatever crawled out of that environment, it would likely be aggressive and have its own agenda. Imagine a super-virulent virus escaping that environment. It wouldn’t need to be intelligent to wipe us out or give us a lot of grief.

      Your last point I think is something people should think about. It’s not AI so much we need to worry about, it’s what people will attempt to do with it. For instance, advanced versions of the vertically specialized AI we have today could be used as an extremely detail-oriented, tireless, and relentless surveillance mechanism, more effective at intruding into people’s business than anything else in history.


      1. SelfAware,

        You write, “Whatever crawled out of that environment, it would likely be aggressive and have its own agenda.”

        Don’t you think that applies to human beings too? Maybe the unhappy irony is that intelligence evolves along with aggressive agendas in order that they be successful.

        (As Pogo said a long time ago (1972), “We have met the enemy, and he is us.”)


        1. Mark,
          Definitely. We evolved along with everything else and come from a long line of creatures that had to fight to survive. It pays to remember that humanity is the alpha predator of all alpha predators.

          Not individually of course. The average human couldn’t win a fight with the average lion, elephant, or chimpanzee. Our superpower is communication and cooperation. But it’s made us the most dangerous species on the planet.

          We’re pretty much the only species that can threaten the entire environment and cause a mass extinction event. (Although I’ve often wondered if a particularly virulent strain of something might not have caused one of the prior events. That never seems to get mentioned as a possibility.)


  4. My view is that the popular description of the singularity that you describe is a narrow and limited perspective. Here’s my take. The singularity is not an event, but a process. It’s happening now. Just as the Big Bang theory describes the current state of our expanding universe rather than a single event in the past, the singularity describes our current exponentially accelerating technology, rather than some future point.

    Technology has been accelerating exponentially for as long as you care to measure it. The superhuman intelligence that is constantly upgrading itself is humanity. By creating ever more powerful new tools, the rate at which we create new tools and capabilities grows ever faster. Machine intelligence is just one piece of the toolkit that we have built and are building.

    Who knows where it will lead? Nobody, because the singularity is already here! Will it be good or bad? Like everything that humans do, a bit of both.

    By the way, many technologies have been predicted to happen 20-30 years in the future, and the majority of them did! Just because a proportion didn’t, doesn’t mean that they will always be out of reach.


    1. Thanks Steve. Your description of the singularity strikes me as much more plausible than the one people usually talk about. Of course, predicting the effects of technology is extremely difficult. Who 20 years ago would have predicted social media?

      Recently an article written by Isaac Asimov has resurfaced with his predictions about what things would be like today. It seems like Asimov got most of the technology right, but missed a lot on the ways we choose to use that technology. He missed the mobile revolution and completely over-predicted where we’d be in space. Asimov was as competent a future predictor as anyone.

      On your last point, I don’t know that I’d say a majority of predicted technologies came to pass. Although I’m sure if you scoured the records you can find someone somewhere who predicted everything that got developed. But it’s very easy to remember the hits while forgetting the misses: flying cars, domed cities, moonbases, space colonies, etc.

      I do agree that many things will eventually come around. I tried to make that point in the post. Predicting what is eventually possible is very different from predicting what will happen in the next few decades. As we’ve discussed before, near future predictions are the hardest ones to make.


  5. Singularity! Woot!

    [And wow, that was fast Mike]

    As you, Mike, know, and others may know, I could accurately be called a singularitarian. I’ve come to this position (I think), not by really hoping it’s true, but by evaluating the arguments as best I can. To my knowledge, these arguments are best laid out in the following books:
    Engines of Creation, by K. Eric Drexler (nanotechnology is coming)
    The Singularity is Near, by Ray Kurzweil (tracking capabilities of information technology)
    The Beginning of Infinity, by David Deutsch (how increasing knowledge works and what it means).

    Before I address specifics in the OP I should make some general observations. First, as has already been suggested, there is a huge difference between what will happen with technology and what people will do with technology. There is some overlap because one of the things people do with technology is make more technology. Nevertheless, I try to restrict my judgments to what will happen with technology (we’ll almost certainly develop AGI), and avoid judgments about what we’ll do with it (I really see no point in “uploading” something that works like my mind, just like I see no point in commissioning a lifesize portrait of myself).

    Second, Steve Morris above is correct in suggesting we are experiencing the singularity now. As he said, and Kurzweil’s best point, is that (information?) technology capability has been increasing on an exponential curve from the beginning of (Big Bang?, Life?, Mankind?). So what’s new? The ratio of increase relative to a human lifespan. Up to a few hundred years or so ago, a person could expect technology at the time of their death to be pretty much the same as when they were born. When I was growing up it seemed like a reasonable bet that technology 10 years in the future would not be radically different from what it was. I dare you to take a bet now regarding any technology 5 years from now. That horizon is getting closer all the time. [But see my summary at the end].

    Okay, so from the above viewpoint I have issues with some things in the OP:
    1. “Human level artificial intelligence is almost always promised to be 20 years in the future.”
    There are two different predictions as to the technology.
    a: When will a computer have the computing power of a human brain
    b: When will a computer have the intelligence of a human brain
    Kurzweil charts the progress of super computers (so, the best we have), putting (in 2005) the ability to simulate a human brain-sized neural network at 2025. (p.71) His current prediction for “b” above is 2029. The former seems fairly solid. The latter, although more speculative, seems reasonable to me.
    Mike says “Is there any reason to think that today’s predictions are any more accurate?”
    Kurzweil says: data.

    “But any prediction on the rate of progress after Moore’s Law peters out is based more on faith than science or engineering.” See above: Kurzweil uses data.

    Actually, as Floridi points out in his Aeon article (https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible), the exponential increase is almost certainly not a hyperbolic curve but an S curve. But again, looking at the data, we are nowhere near the inflection point.
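
    (One way to see why the data can’t yet settle the question: the early portion of an S curve is numerically almost indistinguishable from a pure exponential. A toy comparison in Python, with made-up parameters:)

    import math

    # A pure exponential versus a logistic (S) curve with the same early
    # growth rate; the parameters are arbitrary, chosen only for illustration.
    def exponential(t, r=0.5):
        return math.exp(r * t)

    def logistic(t, r=0.5, capacity=1e6):
        return capacity / (1 + (capacity - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(t, round(exponential(t), 1), round(logistic(t), 1))

    # For small t the two columns are nearly identical; they only diverge
    # once the logistic curve starts feeling its ceiling, which is exactly
    # the inflection point you can't locate from the early data alone.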

    “Every advance in AI so far has required meticulous and extensive work by designers. There’s not currently any reason to suppose that will change.”
    But, but, but … Alpha Zero ?! All they had to do was give it the rules and it learned how to play better than humans in, what, hours. So what’s next? That would be giving it the rules of the world, i.e., causality. A tough task, yes, but that’s what they’re at least talking about, possibly doing, right now. And others (e.g., Judea Pearl, The Book of Why) are providing the techniques, right now.

    4.”On the other hand, there’s nothing in nature demonstrating intelligence orders of magnitude greater than humans.”
    Um, not in nature, but, but, but …. Alpha Zero ?! I’m not anticipating AGI with intelligence “orders of magnitude” greater, but I think AlphaZero allows us to expect “a lot” greater in every domain of game playing, I mean, problem solving.

    Okay, here’s my summary of what I expect:
    Within the next 10 years designers/programmers will figure out how to get supercomputers (like AlphaZero) to manage causality. (“Let’s play the clean the house game. Something tells me that stuff on the floor is trash. A good place for trash is the trash bin. How can I get that trash in the trash bin? Oh, right, call the Roomba! Okay, something tells me that stuff on the floor is a child. That’s a fine place for a child. Nothing to do here”)

    Within 10 to 15 years after that, computers with that capability will be fairly cheap ($1-5K?), and there will be lots of them (one in every house?). In the mean time, some of those computers will be learning the “drive people around” game, and some will be learning the “make neural networks” game, and some will be learning the “make nanobots” game, etc.

    Around this time things can get really weird, really fast, making it hard to predict anything at all, but also our ability to predict will have improved. Thus we’ll never cross the Singularity horizon, but it will always be a nearby presence.

    *
    [too much?]


    1. I figured you were a singularitarian when you asked for this post. I can’t say I slaved away on it for any great length of time. This is ground I’ve read about and covered before.

      The rate of technological progress has accelerated since the start of the industrial revolution. That’s true. But sometimes our impression of what’s happening can be misleading. In the 1920s, when modern science fiction was coming into being, dramatic improvements had taken place in transportation over the preceding 50 years. It seemed to those seminal sci-fi writers that it would continue forever. By the start of the 21st century, we’d be all over the solar system.

      Obviously it hasn’t worked out that way. It turned out that there were easy gains to be made in transportation, and that continued until roughly the late 60s. But how much have cars, planes, and trains changed since then? They’ve gotten more efficient, but the easy gains in transportation had leveled off at the top of the S-curve.

      Starting in the 1960s, computing power has increased exponentially. It seemed like something that would never end. Kurzweil interpreted it to be like a force of nature. But like the improvements in transportation, we were just in a steep part of the S-curve, and in the last few years, it’s started leveling out.

      Does that mean that all gains in information technology are over? Of course not. We have the example of organic brains in front of us demonstrating that information processing can be a lot more dense than it currently is. But like future gains in transportation, we’re going to have to get a lot more clever to keep moving forward. The old architectures and paradigms will have to be reassessed. (Freeman Dyson recently published an opinion that we need to be looking at analog computation. Not sure if I buy that, but it’s the kind of thinking we’ll likely need to go through.)

      Did Kurzweil put out a new edition to his book? Your dates don’t match up with the ones on a Wiki page tracking his predictions. https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#The_Singularity_is_Near_(2005)

      I’m not going to go through all his predictions one by one. Part of the problem is that predicting when a computer or supercomputer will have the same information capacity as a human brain, or be able to emulate it, is speculation in two directions: one on when the technology milestone will be reached, and a second on how much information actually exists in a brain or how much computational capacity is needed to emulate it.

      On the second, there’s very little consensus on that right now. Kurzweil apparently thinks it’s 10 TB, but that only fits if you consider a synapse to encode one bit of information. From what I’ve read about synapses, they certainly encode more than that, but it might be as little as 4 bits or as much as several bytes, which might drive the information in the brain into the multiple petabyte range. We just don’t know yet. And actually emulating a brain would take far more computing power than that, if we’re talking about it emulating the full connectome.
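
      (Back-of-envelope, just to show how wildly the answer swings with the assumptions; the synapse counts and bits-per-synapse figures below are commonly cited ranges, not settled numbers.)

      # How much information is in a brain? It depends entirely on the assumptions.
      SYNAPSES_LOW, SYNAPSES_HIGH = 1e14, 1e15    # commonly cited range

      def brain_bytes(synapses, bits_per_synapse):
          return synapses * bits_per_synapse / 8

      # 1 bit per synapse, ~4 bits, and "several bytes" (taken here as 4 bytes)
      for bits in (1, 4, 32):
          low_tb = brain_bytes(SYNAPSES_LOW, bits) / 1e12    # terabytes
          high_pb = brain_bytes(SYNAPSES_HIGH, bits) / 1e15  # petabytes
          print(f"{bits:>2} bits/synapse: ~{low_tb:,.0f} TB up to ~{high_pb:,.1f} PB")

      # At 1 bit per synapse and the low synapse count you land near the ~10 TB
      # figure; at several bytes per synapse and the high count you're into
      # multiple petabytes. Same brain, wildly different answers.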

      You say Kurzweil is using data. What data? From what I can see, his data on technology is optimistic and is already missing milestones. Still, his technological predictions were at least marginally plausible. But his supposition about the neuroscience seems like guesswork. (Incidentally, this is a common failing in AI predictions. One of the earliest AI predictors in the 1950s thought of neurons as transistors and imagined a computer with 100 MB having the same computational capacity as a brain.)

      On AlphaZero, yes it’s impressive, but it’s still dealing with an extremely simplified domain by real world standards. I think you’re dramatically underestimating the difficulties in learning about the real world.

      It’s worth noting that we still don’t have a system with the spatial and navigational intelligence of a bumblebee (despite likely having systems with more computational power). The self-driving cars may seem more intelligent, but that’s only because they’re able to put up a facade that bees can’t. But put the cars in a mixed terrain (such as a construction zone) and they fail spectacularly. They’re getting better, but in the short term, that’s only because even the roads are a simplified terrain compared to, say, a jungle, and I suspect road construction will start taking their needs into account.

      I don’t want to spend too much time on the technological pessimism hill. I’m a technological optimist by nature. I think we’ll eventually get there on most fronts. And I certainly wouldn’t mind seeing your timetable happen. I just think it will be harder and slower than singularitarians envision.


      1. Hey Mike,

        Re: planetary exploration — our lack of planetary colonization is not so much a matter of capability as of choice. Turns out that there has not been much good reason to do it. I don’t think that argument will apply to improving info-tech.

        Re: transportation — you’re correct that not that much has changed with cars, planes, and trains. Not that much has changed with horses and buggies either. What has changed is getting things done without leaving the house, thus, tele-commuting, tele-conferencing, tele-education, online shopping, video streaming, etc. So which is better, a flying car, or not needing a car?

        Necessity is the mother …

        Re: increasing computer power — I can’t seem to find recent data, but what counts as computer power? Operations per second? Per chip? Per integrated circuit? Per computer? Per dollar? Would love to see your data on what is leveling off. If I have a robot with a computer plus a raspberry pi that sends commands to a roomba via Wifi (as per my scenario above) does that all count as one computer? [BTW, that could happen. I’ve ordered the robot.]

        Anyway, my version of Kurzweil’s book is copyright 2005, pretty sure it’s first edition. Anyway, he has charts with points on them. I only hope he wasn’t pulling those points out of his, um, head

        Re: AlphaZero — it’s the difference between the train and the car. The train is algorithms designed by a human. We can get better, but only incrementally. The car is algorithms which make their own algorithms. It’s a new thing. We’re still at the point where we’re sending a boy on foot with a lamp out in front saying “lookout for the automobile!”.

        Re: bumblecars: nobody (I think) believes today’s self driving cars are smarter than bumble bees. But bumble bees aren’t going to get smarter. Bumble bees aren’t going to teach themselves the “driving in the city game”, and also the “driving in the construction site game”, and also the “driving through a crowd of people without hurting anyone” game, or the “where am I now” game along with the “what game should I play now” game.

        *


        1. Hi James,
          On planetary exploration, a lot of the old science fiction imagined us continually accelerating and decelerating around the solar system, getting to the inner planets in days and the outer planets in weeks. Our lack of that kind of tech isn’t a choice. We currently don’t know how to build those kind of craft. If we could get around the solar system in those time frames, I’m pretty sure we would be all over it already.

          “So which is better, a flying car, or not needing a car?”
          A self-piloted flying car with internet access would be ideal!

          On computer power in terms of Moore’s Law, it’s about transistor density, how small the fundamental units of the chip can be made. For a discussion on the current state (or well, the state as of a few years ago), check out this article: https://www.technologyreview.com/s/601441/moores-law-is-dead-now-what/

          And then there’s this article from a couple of days ago: https://mindmatters.ai/2019/01/will-artificial-intelligence-design-artificial-super-intelligence/

          Again, I personally think progress will continue, but the gains won’t be automatic anymore, and it will require the industry to explore new architectural principles in a way it hasn’t had to do in a long time. If we can’t find a way to shrink transistor features anymore, we’ll need to figure out ways to encode more information with what’s available, which if you think about it, is what biology does.


    2. Hello James (Of Seattle). I think you might be over-estimating progress in this area. The truth is, deep learning has been over-sold to the point some are talking about a second AI Winter. It works really well in its domain, it’s changed computing, but it’s not anywhere close to general AI.

      @Mike: “Every advance in AI so far has required meticulous and extensive work by designers. There’s not currently any reason to suppose that will change.”

      @James: “But, but, but … Alpha Zero ?! All they had to do was give it the rules and it learned how to play better than humans in, what, hours.”

      It’s the Alpha Zero system itself, and the rules, that “required meticulous and extensive work by designers.” As Mike and I discuss below, all such a system does is use millions of iterations to build a massive configuration space.

      Think of it as a landscape with “hills” and “valleys.” (In reality, the landscape has many thousands of dimensions, so it can’t be visualized.) Each point in that landscape represents a game position. “Uphill” are points that lead to winning, “downhill” are points that lead to losing.

      For any position in the game, all that happens is the system tries to go “uphill” from where it is, and it wants the highest hill around.

      Part of it is making sure the system is smart enough not to be fooled by close hills masking higher hills further away. That is, a more successful short-term strategy might lead to a dead end, whereas a strategy with less success short-term might lead to better long-term success.

      But that’s all that’s going on. The learning is just building that landscape.
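
      (A cartoon of that “go uphill, but don’t be fooled by the nearest hill” idea, on a made-up one-dimensional landscape; a system like AlphaZero does something vastly more sophisticated, with learned value estimates and tree search, but the greedy-versus-lookahead contrast is the same.)

      # A made-up landscape: a small hill peaking at x=2 and a big one at x=8.
      def height(x):
          return max(0, 3 - abs(x - 2) * 2) + max(0, 10 - abs(x - 8) * 2)

      def climb(x, lookahead):
          """Greedy hill climbing: step in a direction only if the best height
          reachable within `lookahead` further steps that way beats where we are."""
          def best_within(pos, steps):
              if steps == 0:
                  return height(pos)
              return max(best_within(pos + d, steps - 1) for d in (-1, 0, 1))
          while True:
              scores = {d: best_within(x + d, lookahead) for d in (-1, 1)}
              move = max(scores, key=scores.get)
              if scores[move] <= height(x):
                  return x            # nothing better in sight: stop here
              x += move

      print(climb(0, lookahead=1))    # 2: fooled by the small nearby hill
      print(climb(0, lookahead=6))    # 8: sees far enough to reach the big hill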

      So Alpha Zero is not at all “intelligent” (in the general sense). It’s just a holistic search engine for Go.

      As Mike says, game rules, game worlds, are many orders of magnitude simpler than real world rules. Trying to find a way to even encode those has occupied researchers (unsuccessfully) for decades. Getting a computer to make use of them is part of a Holy Grail they’ve been seeking for years.


      1. WS, I feel the need to point out that I never thought or said AlphaZero was particularly intelligent. What AlphaZero demonstrates for the first time is an ability to teach itself algorithms, and better algorithms than any human, and way faster, starting with nothing but the rules. This is a brand new thing, and currently only implemented on a supercomputer (I think). What happens when this becomes cheap?

        I understand that the rules of chess and GO are simpler than the rules of the world (although I’m not sure about “orders of magnitude” simpler). The next step will be something that can teach itself the rules, maybe one domain (game) at a time. That requires a concept of causality, and I’m speculating that this step will first take place in the next 5 years, because I’ve heard or read leaders of AI talk about it, and I’ve seen the beginnings of algorithms that deal with causality (Book of Why).

        *


        1. “WS, I feel the need to point out that I never thought or said AlphaZero was particularly intelligent.”

          Oh, sorry! I misinterpreted your enthusiasm for it.

          “What AlphaZero demonstrates for the first time is an ability to teach itself algorithms,”

          But that’s not what it does. It just generates a very complex holistic database of the Go (or some other game) landscape.


          1. Yes. They are mutually exclusive concepts. We do both.

            Learning a real-time skill, especially a physical one, playing music, a new language, tennis, is essentially building a landscape. Using that skill is “searching” that landscape.

            Learning a process, changing your oil, long division, building a dog house, is learning an algorithm. It necessarily has a much higher intellectual content, whereas physical skills often require learning to put your intellect aside (which is why I couldn’t learn skydiving; my mind wouldn’t stop screaming).


      2. “Think of it as a landscape with “hills” and “valleys.” (In reality, the landscape has many thousands of dimensions, so it can’t be visualized.) Each point in that landscape represents a [life] position. “Uphill” are points that lead to winning, “downhill” are points that lead to losing.

        For any position in [life], all that happens is the system tries to go “uphill” from where it is, and it wants the highest hill around.

        Part of it is making sure the system is smart enough not to be fooled by close hills masking higher hills further away. That is, a more successful short-term strategy might lead to a dead end, whereas a strategy with less success short-term might lead to better long-term success.”

        I think you’ve just defined intelligence: roughly, as the flexible pursuit of goals. Minor [edits] by me.


        1. “I think you’ve just defined intelligence: roughly, as the flexible pursuit of goals.”

          Heh, maybe. Seems like the problem trying to define general intelligence is one ends up with such a general definition.

          To me, one hallmark of intelligence is creativity and art. If I were observing an alien species, it’s one thing I’d look for. Music, too, which seems almost its own category.

          I’d also see if they were mathematicians. Plus that’s about as close to a universal language as possible.

          One thing about the NN landscape is that in considering just the operation, the high-ground seeking behavior, it’s easy to miss all the intelligence that went into the system’s design and into the creation of the landscape.

          The other thing is it has no creativity. It can’t look at two “hills” in the landscape and think, “Gee, I could build a bridge if I could find some trees up in here!”


  6. “Where is my flying car?”

    Ha! Talk about predictive failures. They promised those in the 1960s!

    I’m old enough to remember wondering how 1984 would turn out, let alone 2001. Gotta say, they both let me down. I’m okay with that for the former, but kinda bummed about the latter.

    Where’s HAL?

    “The superhuman intelligences will provide the technology to upload ourselves into virtual environments,”

    Unless that’s impossible in principle. Which you may recall I think is the case, but we don’t have to go down that road again.

    (FTR: While I don’t think uploading is possible, I do think a new artificial mind is possible, so the problem of super AI is still an issue.)

    “In fact, Moore’s Law in recent years has started sputtering.”

    Calling it a “Law” makes it seem more real. Maybe they should have called it Moore’s Observation. Or “Moore’s Idea At The Time.” 😀

    “The achievements we see in the press continue to be along those lines.”

    These networks are essentially search engines. They just create a phase space where “cats” exist with high potential in one hugely-dimensional area and “goats” live in another. The training data needs to be very well curated and labeled for them to work.

    Did you hear about a fascinating fail recently?

    Researchers were attempting to create a network that would be a translator between satellite photos and map images (which lack all the detail). They wanted a two-way process, so to test it, they fed it satellite images from which it generated map images and then fed it the map images to have it generate “satellite” photos.

    Which, of course, would lack some detail. But they noticed details showing up that weren’t on the map images. Ventilation systems on roofs, for instance.

    Turned out the system was hiding fine detail in the map images, as with steganography, so it could re-create the “satellite” image! The detail data wasn’t visible, but it allowed full recreation of the original satellite image (more or less).

    They demonstrated this by hiding the steganographic data in a different, unrelated map image, which they fed to the process. And got the “satellite” photo they’d hidden, not the one matching the map image.

    I found it an interesting insight into the care required with NN. In some regards, it is a bit like raising a child, training a NN. You definitely get back what you put into it, plus some surprises.

    “I see no good reason to suppose that a technological version of it can’t be built…eventually.”

    Agreed. I think it’ll have to mimic the physical network of a brain. I think (in principle) that’s the only way consciousness arises.

    (If we don’t care about consciousness, per se, then other methods, such as a NN, may be all that’s needed. I’ve just come to believe consciousness can only arise from a brain-like physical network.)

    “It might be that achieving the computational complexity and capacity of a human brain requires inevitable trade offs that put limits on just how intelligent such a system can be.”

    I think that’s a very good observation. I’d bet you’re right.

    Certainly the progress so far doesn’t lead one to think super AI is in the near future.


    1. Definitely before Blade Runner I was disappointed by not having a moonbase by 1999 or HAL and planetary exploration by 2001.

      I haven’t done a post in a long while on mind uploading. I might have to rectify that sometime soon.

      “These networks are essentially search engines.”
      That’s an interesting way of putting it. I’ve also seen them described as pattern recognition systems. Maybe another term would be association engines.

      I hadn’t heard about that fail. Interesting. Your description reminds me of when researchers attempted to open the AI blackbox in a deep learning network to see how it was associating images. They did this by reverse activating the associations activated by certain images. In other words, how would the system “imagine” the associations. The results looked like an LSD trip.
      https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731

      “I’ve just come to believe consciousness can only arise from a brain-like physical network.”
      My own views on consciousness are relentlessly functional. I think if a system with an alternate architecture could process the same information that our brains process and produce the same capabilities, it would be conscious, by definition. Which is to say, I don’t believe in zombies.

      Of course, it may be that you can’t produce the same capabilities without the brain architecture. I suspect you can, but each alternative architecture will have its own unique trade offs. It’s those inevitable trade offs that I think complicate the idea of godlike AIs. It could still eventually turn out to be possible, but it doesn’t seem inevitable.


      1. I did see those reverse NN images! The LSD comparison might be literally apt in that LSD essentially cranks up the gain of your synapses. The smallest of associations springs to life vividly. Patterns appear in things a bit like those pictures.

        Yeah, “search engine” or “pattern recognition” or “association engine” in this case all apply. The NN builds (effectively) a configuration space with a large number of dimensions. Each input falls somewhere in the space. The training inputs build the space — their labels converge (is the idea) on distinct areas of interest, say “cats.”

        Ideally, all the training images of cats fall closely enough together in that space that they form an area that can be recognized. New, unlabeled inputs that fall into that area are thus identified as cats.
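
        (A toy way to picture that geometry, using plain nearest-centroid classification on made-up 2-D “features”; a real network learns its space and has thousands of dimensions, but the “which region does this point fall in” step is the same idea.)

        # Made-up 2-D "feature" points standing in for a learned configuration space.
        training = {
            "cat":  [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
            "goat": [(4.0, 3.8), (4.2, 4.1), (3.9, 4.3)],
        }

        def centroid(points):
            xs, ys = zip(*points)
            return (sum(xs) / len(xs), sum(ys) / len(ys))

        centroids = {label: centroid(pts) for label, pts in training.items()}

        def classify(point):
            # A new, unlabeled input is identified by the labeled region
            # (here, the centroid) it falls closest to.
            def dist2(a, b):
                return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
            return min(centroids, key=lambda label: dist2(point, centroids[label]))

        print(classify((1.0, 1.1)))    # falls in the "cat" region
        print(classify((4.1, 4.0)))    # falls in the "goat" region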

        “My own views on consciousness are relentlessly functional.”

        Yes, I recall. 😀

        Now that I do think of it, the last time we went round on this, I walked away thinking I hadn’t managed to communicate my point clearly, that it had gotten lost in all my words. (I never met a sentence I couldn’t turn into a paragraph.)

        FTR (as briefly as possible): I think the experience of consciousness is akin to laser light, that it can only arise due to specific physical processes. Laser light can be simulated numerically, but no photons are generated. Likewise, brain activity might be calculated, but I don’t believe those calculations experience any consciousness.

        (I so badly want to go on to explain at length… 😉 )

        “It’s those inevitable trade offs that I think complicate the idea of godlike AIs.”

        To me that’s the most interesting idea in your post: What limits might intelligence (in general) have? Is our intelligence a one-off evolutionary coincidence and there are many other viable possibilities?

        Or is there an attractor? Some SF authors have played with the idea that the humanoid form is an attractor, that any intelligent, handy, mobile land species will converge on something like it. Maybe all the requirements of general intelligence create an attractor?

        If so, it could suggest super AI isn’t possible. (I’m reminded of stories that tell of what a hell it is to have literally perfect memory. Maybe you have to be dumb enough to not go insane.)


        1. Re: NN’s as search engines.

          Don’t take this the wrong way, but what reason do you have to think your own brain does anything different? Douglas Hofstadter seems to think we do our thinking by analogy, i.e., searching past experience for similar situations/patterns.

          Re: simulating laser photons

          This is John Searle’s “simulations of rain are not wet” argument. The problem with that argument is that (some of us think) consciousness is a type of information processing, and simulations of Information processing are actual information processing. A simulation of an abacus does calculations just as well as the physical device.

          *


          1. “[W]hat reason do you have to think your own brain does anything different?”

            Oh, yes, Neural Networks are (very simple) examples of a subset of what our minds do. But my mind has agency. NNs don’t. My mind is creative. NN’s aren’t. My mind is a general purpose tool. NNs aren’t.

            “The problem with that argument is that (some of us think) consciousness is a type of information processing, and…”

            I’m familiar with the view. 😀

            Mike and I have gone round on this extensively. I’ve posted about this extensively. My post Information Processing summarizes a 17-post series and has an index to the series. Rather than my taking up space on Mike’s blog, you should read at least the summary post to see where I stand.

            The key point is:

            “…and simulations of Information processing are actual information processing. A simulation of an abacus does calculations just as well as the physical device.”

            Yes, exactly. A software simulation of a software simulation is the one and only time a simulation produces exactly what it models. It is only when an abstract mathematical object has a morphism with another abstract mathematical model that this is true.

            In any numerical model of a physical object this is not true.

            Belief in a simulated mind is a belief in an algorithmic mind. Given that the brain is a physical object, and presumably the mind is an emergent “physical” object under physicalism, how can it be an abstract mathematical object?

            There’s a weird form of Platonism here in that it assumes the mind is this extant mathematical object that exists independently of any physical substrate. It almost requires a Tegmarkian point of view, under which everything is a mathematical object.

            I do like some of Hofstadter’s ideas, especially his Strange Loop. He’s also talked about how, to the extent you know and can predict a person’s behavior, that person’s existence is smeared out and shared with you. They live on in you.

            This becomes especially poignant when you know Hofstadter’s wife died young.


        2. Wyrd,
          Actually I think you did a good job explaining your view last time and this comment nicely sums up what I recall from it. But just to test that I really understand:

          You see consciousness as a thing, such as a field or force, that is generated by the activity of the brain, an emergent corporeality that exists in addition to the flow of electrochemical reactions. It might arise from the collective interference of electromagnetic fields or the results of some other complex interactions.

          Does that sound about right?

          If so, I guess the question that comes to my mind is, why do we need to posit this extra thing which we don’t currently observe? If the motor output of the brain can be explained solely by the electrochemical chain reactions, what necessitates that we add an additional entity or entities? (Sorry if I’m covering ground we already went over back in the day.)


          1. “Does that sound about right?”

            Pretty much. The salient point being, whatever it is, it arises from the physical mechanism. (As with laser light.)

            “If so, I guess the question that comes to my mind is, why do we need to posit this extra thing which we don’t currently observe?”

            It’s our consciousness, so I’m not quite sure in what sense it’s “extra.” Would you call pressure extra in terms of gas molecule behavior? I mean, it kinda is extra. But aren’t all emergent things in some sense?

            I would say we do observe our consciousness.

            Maybe I’ve confused the situation by agreeing to the idea of fields or whatnot. Consciousness does arise, that’s a fact. I’m talking more about the context in which it can arise.

            With regard to motor function, my last understanding (perhaps outdated) is that it’s not really clear exactly how our thoughts, our will to move, ends up making our body move. It does, and I’m pretty agnostic on how. Again, it’s more the context in which all this can happen that I’m talking about.


          2. Right, but I think the context you see as necessary hinges on your conception of what consciousness actually is. If you see it as a complex physical side effect of brain states, then it makes sense that it could only arise from a brain.

            But if you see it as a suite of interacting information processing capabilities, then the only thing necessary for it is a place for that information processing to happen.

            My question was really just trying to understand what you find compelling about the first conception. What I find compelling about the second is it doesn’t require anything we don’t observe, unless it’s failing to account for something?


          3. “I think the context you see as necessary hinges on your conception of what consciousness actually is.”

            Absolutely!

            “…only arise from a brain.”

            Or something brain-like. Specifically: massively parallel, highly interconnected, essentially analog, very complex node structure.

            “My question was really just trying to understand what you find compelling about the first conception.”

            Sure. I find the laser light analogy extremely compelling.

            On top of that, I think the digital-analog difference is significant. I think calculation is probably insufficient to model consciousness because of chaos and the general need to round off numbers for use in a binary computer.

            “What I find compelling about the second is it doesn’t require anything we don’t observe,”

            We see this differently, because I don’t think we do observe anything in “a suite of interacting information processing capabilities.”

            That description is so general it could describe anything, so I have a hard time seeing it as a road map. What I think we do observe about information processing is that no intelligence is involved: nothing emerges other than the expected data. (We go to great lengths to ensure that happens!)

            I wonder sometimes if that we can calculate, that we can process information, makes it easy to see us as just that.

            I keep coming back to creativity, art, music. I have a hard time reconciling those with “information processing.” To me there seems something more to it.

            But, of course, this is just speculation on my part.


        3. “Oh, yes, Neural Networks are (very simple) examples of a subset of what our minds do. But my mind has agency. NNs don’t. My mind is creative. NN’s aren’t. My mind is a general purpose tool. NNs aren’t.”

          Um, NN’s don’t have agency, yet. BTW, what is “agency”? NN’s aren’t creative? How creative must “creative” be, and how do you define “creative”? NN’s aren’t general purpose, yet. See discussion of causality above. Can you provide any reasons why artificial NN’s can’t be all these things in principle?

          And I’m glad that we are agreed that if what a mind does, including intelligence and consciousness, can be reasonably described as information processing, albeit not necessarily digital information processing, then there is no barrier to computer/robot intelligence/consciousness.

          “There’s a weird form of Platonism here in that it assumes the mind is this extant mathematical object that exists independently of any physical substrate.”

          There is a form of Platonism here which requires very careful use of terminology. For myself, my ontology has these axioms:
          1. Physical stuff “exists”
          2. Patterns are real.
          (3. Things change.)
          Corollaries include
          A. Some patterns are discernible in physical stuff

          So a “mind” is determined by a pattern, but that mind “exists” only if that pattern is discernible in physical stuff. So to continue in this vein, a consciousness-related event (I’m suggesting) is the discerning of a pattern in physical stuff. There are more specific details which get into goals, purposes, semiotics, etc., which we can get into if you like.

          *


          1. “Um, NN’s don’t have agency, yet. BTW, what is ‘agency’?”

            😀 Essentially, what we casually call free will.

            I’d define creative in this context as reflecting on your mind’s model of reality and synthesizing a new model that satisfies an aesthetic or utilitarian desire. You could do both in designing a beautiful house. (I love the idea that ‘architecture is art you live in.’)

            “Can you provide any reasons why artificial NN’s can’t be all these things in principle?”

            They could certainly be part of such a system. (After all, natural NNs are part of ours.) On their own, they’re just search engines.

            “I’m glad that we are agreed that if what a mind does, […] can be reasonably described as information processing…”

            I do not agree to that. Mike would. I find the phrase too vague to mean anything useful.

            Right now I have a large mug filled with Diet Mtn Dew and ice sitting next to me. I can view that as an analog computer programmed to simulate ice melting in soda pop. As such, it is unquestionably processing information. I’m pretty sure it’s nowhere near being conscious.

            What I’ve said is that I can see a physical brain-like device (massively parallel, highly connected, analog, complex node behavior) acting like our brains and giving rise to a (new) mind.

            “2. Patterns are real.”

            That’s Platonism. 🙂

            My point about it is that, per Church-Turing, implementing consciousness in software requires consciousness be an algorithmic process. Mike is in the camp that believes it is. I’m in the camp that believes it isn’t. (In truth, mine is probably the smaller one among scientists. It does better with philosophers. 🙂 )

            As we discussed earlier, an algorithmic process is the only thing for which another algorithm produces identical results. An abacus and a calculator (and your fingers) calculate the same sums.

            This assumes our minds are the one abstract mathematical object in all of creation. I just find that a stretch. There are a lot of hidden assumptions here.


    2. “to mimic the physical network of a brain. I think (in principle) [is] the only way consciousness arises.”

      I have a slightly more liberal view: that’s the only way a consciousness like ours arises. A radically different, but self-cognizant entity would still deserve the label “conscious” in my view, but instead of pains and sweet tastes it would experience fleems and quaznoshes (or whatever).


      1. “…instead of pains and sweet tastes it would experience fleems and quaznoshes (or whatever).”

        Absolutely. I don’t see that in any conflict with the bit you quoted.


  7. Mike, I believe the narrative you identify, “At some point, we will invent an artificial intelligence that is more intelligent than we are,” is rooted in a fundamental misunderstanding of how computers work. But first, to remedy the missing definition for intelligence, I propose “pattern matching, especially across multiple domains,” since that appears to be what all IQ tests are measuring. If you have a better definition for intelligence, please provide one.

    Artificial Intelligence is, then, non-human pattern matching, specifically using computers. By the way, most of the statements in the computer programs I’ve written are pattern matching statements, as in:

    if ( a [relation-statement] b ) then
    A
    elseif ( b [relation-statement] c ) then
    B
    elseif

    else
    etc …

    where “relation-statement” commonly tests for equality/inequality.
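
    For instance, a minimal runnable sketch of that kind of branching pattern match in Python (the specific relations and outcomes are illustrative placeholders, not from any particular program):

    def classify(a, b, c):
        # Each branch tests a relation (equality/inequality) and selects an outcome.
        if a < b:
            return "A"
        elif b < c:
            return "B"
        elif a == c:
            return "C"
        else:
            return "no match"

    print(classify(1, 2, 3))  # prints "A" because the first relation, a < b, matched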

    Unfortunately there’s a common confusion about how computer systems, including all AI systems, actually work. As Searle pointed out long ago, there’s no knowledge or symbolic information whatsoever in a computer, whose processor is essentially a highly complex router of electricity with a lower and a higher voltage assigned a symbolic value, 0 and 1 … by people. Computer programs and data inputs meaningful only to people are converted into 0’s and 1’s—symbols we recognize as meaningful—and the processing of the corresponding higher-lower voltages produces 0’s and 1’s as outputs which, again, are symbolically meaningful only to people. The computer processor doesn’t know anything at all and never will. Aside from the gated flow of electricity there’s no information either. The results of software pattern matching over vast data sets are remarkable, but a computer is no different after a run of an AI software package than it was before … the processor continues to be a complex router of electric flows and is as dumb as a rock.

    Computer-aided pattern matching … AI … already outstrips humans’ capacities for pattern matching. We have already created systems that far exceed human intelligence. But, lacking any knowledge or information or motivation or agency or even any way to power themselves, computers in and of themselves pose no threat. That’s not to say that the people who write and use computer software cannot pose a threat by connecting computer system outputs to dangerous technologies without human intervention.

    The Singularity is science fiction, conceived by a science fiction author. It’s usually entertaining science fiction which typically boils down to stories involving an Implacable Other. The real challenge to humanity, however, will be the creation of AC—Artificial Consciousness, which no one seems much interested in contemplating. Although AI and AC are regularly and erroneously equated, AI is not AC and is as unrelated to AC as animal intelligence is to consciousness.


    1. My original code samples used angle bracketed “relation-statement” which was disappeared by the posting software, perhaps as invalid HTML. So try this:

      if ( a [relation-statement] b ) then
      A
      elseif ( b [relation-statement] c ) then
      B
      elseif

      else
      etc …


    2. Hey Stephen,
      Interesting take on a definition of intelligence. My own currently preferred definition is the ability of a system to make accurate predictions in service of maximizing its goals. The further ahead it can predict, and the more accurate those predictions, the more intelligent it is.

      Pattern matching across multiple domains strikes me as a definition of perception, although another way of expressing that is predicting associations based on sensory input. But wouldn’t we want intelligence to also be about what actions a system can take?

      “As Searle pointed out long ago, there’s no knowledge or symbolic information whatsoever in a computer, whose processor is essentially a highly complex router of electricity with a lower and a higher voltage assigned a symbolic value, 0 and 1 … by people. ”

      A couple of points on this. First, you’re comparing the raw hardware by itself to the overall nervous system. But to make this comparison fair, you have to compare the hardware + the software, the overall system, to the nervous system. This allows for information in the software to contribute to the knowledge of the system.

      Second, in terms of being “a highly complex router of electricity”, what makes you think nervous systems are, at a straight physical level of abstraction, anything other than a highly complex router of electrochemical energy?

      On the assignment of symbolic values to voltages by people, that’s true, because people designed the system. But what provides the meaning of a signal from a particular synapse to a neuron? Or of the neuron reaching its threshold and firing? Ultimately the meaning of the synapse’s signal comes down to what initiated the chain reaction, the signals coming in from the peripheral nervous system and sensory organs. And the meaning of the neuron’s output will ultimately come down to what it triggers, the motor signals sent to muscles.

      All of which is to say, that the meaning of processing in a brain is determined by the body, which itself is determined by natural selection. In other words, the meaning was determined by nature.

      If we make a computer system the controller of a robot, doesn’t that I/O and robot hardware provide the computer and its programming with objective meaning in the world? Yes, we the designers provided that meaning, whereas nature provided the meaning in nervous systems. But isn’t evolution about the fact that you can have design without a designer?

      I do agree that the real danger of AI is what humans might do with it. I also agree that the singularity, as popularly conceived, is science fiction, but more for the reasons I laid out in the post.

      “Although AI and AC are regularly and erroneously equated, AI is not AC and is as unrelated to AC as animal intelligence is to consciousness.”

      What would you say the differences are?


      1. Mike, are you proposing that an IQ test evaluates predictive skill? And AI systems are actually AP—Artificial Prediction systems? Quite unusual I’d say. Note that many articles about what it is that AI systems do refer to pattern recognition as the core technology. Broad definitions of intelligence include something like “using knowledge to manipulate one’s environment” as with your “maximizing goals,” but I’m specifically referring to intelligence as mental activity and not “what actions a system can take,” as you propose. If we’re to use the word “intelligent” for both organisms and computer systems, I believe actions must be excluded. In my experience, intelligent people with impressive IQ scores act in stupid ways and, in and of themselves, AI systems are passive routers of electricity and don’t initiate actions in the world.

        Perception in the psychological sense is typically defined as “the process of recognizing and interpreting sensory stimuli” or similar, although I would add that conscious perceptions themselves are the feelings that result from that process. The machinery of perception is wholly unconscious and, yes, is hugely dependent on pattern recognition. In vision, for instance, the contents of your visual consciousness are largely what your brain expects to see, which is a consequence of the brain’s pattern matching. But the perception of a pain in your foot is the feeling of a body-mapped pain perception and few, in my opinion, would call the pain a pattern matched prediction.

        I don’t understand where in my writing you find a comparison between computer systems and nervous systems. I was simply noting that there’s no information, no knowledge and no meaning at all in any computer system. All of the information, knowledge and meaning is in the symbolic interpretation of the system’s inputs and outputs in the minds of human beings. Consequently, for example, the idea of a computer system applying its own knowledge and intelligence to design a more capable successor computer system is impossible nonsense.

        Intelligence is completely unrelated to consciousness and, in fact, the brain’s pattern matching is a completely non-conscious process, and each occurrence of pattern matching is conditioned in unknowable ways by comparisons with stored images and emotional considerations inaccessible to consciousness. In contrast, core consciousness, which is much simpler to consider than extended consciousness, is a feeling—the feeling of what happens. It’s the feeling of being an embodied organism centered in a world.

        You ask why I believe that consciousness and intelligence are unrelated. Essentially, I’ve read of no convincing case that organic pattern matching gives rise to consciousness itself or that conscious feelings beget pattern matching.

        I believe that one of the problems rampant in thinking about consciousness is that it’s frequently conflated with the contents of consciousness, a confusion exacerbated by considering the complexities of human extended consciousness. Allow me to suggest the core consciousness model I use in my own thinking: consider an evolutionarily primitive organism with a central nervous system that gives rise to the feeling of embodiment and enables a body mapping of a physical feeling—pain. I hope you would agree that an organism that feels its embodied self and feels pain in a body part is conscious. Is there “intelligence” in this system?


        1. Stephen,
          Consider what a perception actually is. If a fish perceives a predator, it’s making a prediction about what kind of object it is, and the flight response that is usually triggered is based on that prediction. Likewise, if it perceives food, that’s another prediction about what kind of object it is. When the fish imagines various courses of action (such as fleeing the predator or eating the food), it’s running predictive simulations, simulations that are what-if scenarios based on various actions it might take. (Which is why action is unavoidably tangled up with this, even when in a particular situation no action is being taken.)

          Along those lines, seeing what you expect to see is seeing what you predict. Pain is a complex topic. Nociceptive signals originally resulted in reflexive reactions. The thing to ask is, what does the feeling of pain add from an adaptive perspective? I think it serves as input for use in the action/sensory scenario simulations (imagination), in other words, prediction.

          You didn’t compare computer systems to nervous systems. I added that comparison to make the point that your criticism, to the extent it’s accurate, also applies to nervous systems. A neuron by itself knows nothing. Neurons depend on the synapses between them, which serve as their storage. Computer hardware by itself knows nothing. But coupled with its software and data, as an overall system, it is capable of knowing things. And both depend on their I/O systems for meaning.

          You’re basically defining consciousness to be sentience, although I would argue that without content, there is no sentience. Reflex arcs are not sentient. So what does a conscious entity have aside from reflex arcs? It has content and the ability to make predictions from it. No prediction, no need for feeling. No prediction, no sentience. They are two sides of the same coin.

          Which is to say, sentience is a type of intelligence. We don’t tend to think of it as intelligence because it’s such a primal aspect of our experience, but if you removed all content processing and predictive capabilities from the system, all you’d be left with are unfeeling reflex arcs.


          1. Of course I’m defining consciousness as sentience, Mike, because sentience means feeling. Unfortunately, many aren’t aware of the actual meaning of “sentience,” whose usage seems to have been corrupted, possibly by science fiction’s copious misuse as meaning “intelligence” … not that I’m a Word Nazi, but that usage is simply incorrect. Sentience is feeling, not intelligence.

            Conscious feelings always have content, which is what a feeling feels like. A burst appendix results in an undeniably conscious feeling whose content is intense internal pain located in the abdomen. Vision is a feeling whose content is a simulation of the portion of the external world that’s being seen. Conscious thinking, both verbal and visual (as reported by autist Dr. Temple Grandin), is a feeling whose contents are the thoughts, which are vocalization-inhibited speech and vision without sight. Consciousness, whatever the sensory track, is sentience.

            While the brain’s pattern matching and predictive operations are wholly unconscious operations that can affect conscious content, particularly noticeable in the case of visual consciousness, I disagree with your contention that the pattern matching/prediction are what you perceive, which I take to be the content of perceptive consciousness. You wrote, “The thing to ask is, what does the feeling of pain add from an adaptive perspective? I think it [the feeling] serves as input for use in the action/sensory scenario simulations (imagination), in other words, prediction.” You are saying consciousness of pain is an input to unconscious predictive operations. If, as you propose, a conscious perception is predictive pattern matching, as opposed to the content of a conscious perception possibly being conditioned by predictive pattern matching, then what’s the predictive component of your undeniably conscious feeling of ruptured appendix abdominal pain—is your feeling of that pain a prediction that you’ll call 911? Does your brain’s prediction hurt?

            And I disagree with the utility of that “adaptive perspective” question in the first place. I’d suggest that “the thing to ask” is instead, “Independent of the various contents of consciousness, what does feeling/sentience/consciousness itself add from an adaptive perspective?” Given that normally conscious organisms do not thrive when deprived of consciousness, I propose something like “staying alive as a mobile organism” might be an appropriate answer. It seems reasonable to believe that once an original capacity for sentience is achieved, feelings from diverse sensory tracks and differing in content would evolve. Those newly evolved feelings wouldn’t need to provide any advantage at all, but they would very likely persist unless they posed a disadvantage hazardous to pre-reproductive survival.

            Any sensory stimuli may activate a reflex arc, which is the activation of motor nerves closely connected in the lower brain to sensory nerves. As such, a reflex operates unconsciously—the initiation of a reflex is not felt, as you say. However, as I’ve pointed out before, the sensory stimulus and the reflexive actions driven by the motor nerves are certainly felt by a conscious organism. If you stick your hand into a flame, a reflex arc unconsciously and rapidly withdraws your hand from the flame but you are conscious of—you feel—both the heat and the rapid motion of your arm and hand.

            Your comparison of computer systems to nervous systems is not valid. Your contention that “Computer hardware … coupled with its software and data, as an overall system, is capable of knowing things” is not true. An “overall” computer system doesn’t know anything. Also, computer systems do not and cannot derive meaning from their I/O subsystems. The computer system’s inputs and outputs are meaningful only to the people who design/specify/use them. A single counter-instance would substantiate your claim, so simply name one thing my Lenovo laptop knows and tell us what that knowledge means to my laptop, and I’ll consider revising my opinion.


          2. Hi Stephen,
            I was using sentience in the traditional sense, not the sci-fi one. The sci-fi use is interesting. I suspect it crept into sci-fi stories because the original authors in the 30s and 40s believed that only intelligent species actually have the traditional form of sentience. In other words, they were probably judging that animals can’t feel. It was a notion that used to be much more common than it is today.

            Just to be clear, I wasn’t saying that pain itself is a prediction (although there are psychologists and neuroscientists who do say that), just that it’s input into imaginative predictive processing. Yes, a lot of that is unconscious. If all of it was, there wouldn’t be any reason for us to consciously feel pain, but at least some of it isn’t. Pain is information. It’s primal pre-language information, but information nonetheless. It tells us about damage or potential damage to the body, information used to spur deliberations on our next actions.

            It’s worth remembering that a reflex arc, in and of itself, has no feeling. A comatose patient still has a knee jerk reflex. For that matter, so can a fresh corpse. A healthy awake person does feel the hammer strike against the patellar tendon and subsequent extension of the leg, but that feeling is after the fact. Presumably we still feel it because it’s adaptive. It can be used for subsequent planning. But the reflex itself has no need of it.

            Why can’t we say that your laptop knows how to connect to your local network? Or that your browser knows the hostname for this site? Yes, these are just patterns of transistor states, but how is that different from the varying strengths of our synapses? What makes one “knowledge” and the other not? Certainly your laptop doesn’t map the site name to sensory imagery the way we do, but I can’t see why that matters. What makes that sensory imagery anything more than patterns of connections in our visual and association cortices?

            What I’ve never understood about Searle’s argument is how it isn’t simply an ad-hoc privileging of the information processing of one type of system over another, a refusal to look at what knowledge in a nervous system actually is. But maybe I’m missing something?


        2. I think your statement, “In my experience, intelligent people with impressive IQ scores act in stupid ways” indicates a paucity of experience with intelligent people with impressive IQ scores or a bias against such people. The expressed behaviors of people may be dependent on the extent to which a society values or devalues intellect. Many women and minorities feel compelled to hide their intelligence in order to be accepted in the company of people of lesser intelligence. Some highly intelligent or highly talented people are idiot-savants (genius in one area but barely functional in all other areas). Some intelligent people have different attention priorities than people of average intelligence.


  8. Most people I know seem to take the pessimistic point of view about the singularity. They think Siri or something like it is going to turn into Skynet and declare war on humanity. Because what else would an AI do? I don’t buy into the rapture of the nerds thing either, but I find the level of fear I see directed against all new technology a bit disheartening.


    1. I know what you mean. Personally, aside from being skeptical of the intelligence explosion, I think both the pessimists and optimists are wrong. AI will bring in a lot of solutions, but also a lot of new problems, which has always been the case with new technologies.


  9. This one seems too wishful indeed. Intelligence has nothing to do with minds; it’s just more and more problem-solving ability, and machines will definitely be better at that. But that’s just one aspect of what a mind does.
    I don’t see anything dramatic happening anytime soon if we go along our current route unless there are some paradigm shifts down the road.


    1. I do think intelligence and minds are related, but what we technologically have today are, at best, narrow slices of a mind. I totally agree that we don’t have the right paradigm yet to actually build a full mind.


      1. Mike, I notice your post above of January 12, 2019 at 12:37 pm beginning, “… I was using sentience in the traditional sense…” isn’t followed by a “Reply” link … is there a blog facility to foreclose further discussion of a thread? At any rate, I’ll reply here to that post because I think that, as you suggest, you may be missing something. I trust you’re still interested in understanding alternate viewpoints.

        You wrote, “… pain itself … is input into imaginative predictive processing … [and] a lot of that is unconscious. If all of it was, there wouldn’t be any reason for us to consciously feel pain.”

        I have to disagree that feelings, pain among them, are felt for a reason. Evolution is an unreasoned process that propagates random genetic modifications that don’t interfere with reproductive success, not just the modifications that might be viewed as useful. The same applies in your reflex discussion—you presume that a healthy awake person (as I said, “a conscious organism”) is conscious of the bodily movements associated with a reflex “… because it’s adaptive.” Instead, we feel bodily feelings because we evolved to feel them and the evolution of that brain functionality didn’t kill the organism before it was able to reproduce, so that functionality was inherited … no reasons, adaptive or otherwise, are involved.

        We can’t say that my laptop knows anything because we would be misusing the words “know” and “knowledge.” Per The Google, to know requires “being aware” and Wikipedia’s definition of knowledge begins “Knowledge is a familiarity, awareness, or understanding …” No awareness and no understanding means no knowing and no knowledge. Perhaps you’re thinking of the metaphorical common usage, as in the expression, “My GPS knows the way home.” It doesn’t literally know because it isn’t aware of anything nor does it understand anything. Or perhaps you’re thinking that any information is knowledge, but, unless there’s a “knower” capable of awareness, information is simply information. So Searle’s still correct—there’s no knowledge or even symbolic information whatsoever in a computer system.

        You say that Searle refuses “… to look at what knowledge in a nervous system actually is”, although, nervous systems per se have no knowledge. Minds do. Computer systems don’t.

        Cycling back to The Singularity, nothing I’ve read here refutes my original contribution:

        “… lacking any knowledge or information or motivation or agency or even any way to power themselves, computers in and of themselves pose no threat.”


        1. Stephen,
          WordPress has an annoying limitation in its comment functionality. If you use threaded comments on your blog, you must designate a limit to how deeply they can be indented, and once you reach that limit, WP won’t let you initiate a new reply in the web UI. The default limit setting is 3, the max limit is 10, but I currently have it set to 5 because mobile users complained that the threads become unreadable if they indent too deeply.

          If you subscribe to the thread via email, you can usually use the Reply link in the notification email to add additional replies anyway (although they stop indenting). And if you login to WordPress itself, with either a WP account or a Google one, you have a toolbar which allows you to reply infinitely.

          Why WordPress doesn’t allow this in the public UI escapes me. I complained to WP support years ago, and I’m sure I’m far from the only one, but nothing has ever been done. I occasionally consider just turning off nested comments to simplify things, but it would make reading old threads confusing. I definitely wouldn’t use them again.

          It is what it is, but that’s all you ran into. I didn’t do anything to cut off discussion. In general, I never cut off alternate viewpoints. (I do delete troll comments, but that’s rare.)

          On evolution and language, I fully understand that mutations happen for no reason. However, they are subsequently naturally selected, and natural selection favors the ones that provide some survival benefit, or that are at least not detrimental. BTW, detrimental would include traits that have no benefit but are costly in terms of energy, such as valenced feelings.

          When I use the words “adaptive” or “reason” in discussion about evolved traits, that’s what I mean by it, in a teleonomic sense rather than a teleological one. I could restrict myself to very strict correct language to avoid the occasional criticism about it, but it would make every discussion about evolved traits awkward and unreadable. It’s common for evolutionary biologists to make this compromise so they can efficiently discuss these things.


            Thanks for the explanation Mike. Crappy WordPress programming, and I can say that because I programmed computers, including user interfaces, for thirty years. These WordPress guys seem to think our monitors are a fixed 8 1/2” x 11” or something … mine is 31” wide. I agree that indenting comments is essential for readability but, in their implementation, the replies get skinnier and skinnier. Notice that the reply rectangles are fixed width … all that’s required is that the rectangle and its contents widen as the browser window widens. You’d think their computers would know better … 😉

            I understand what you are saying about evolution, “… natural selection favors the ones that provide some survival benefit, or that are at least not detrimental”. But I see successful reproductive transmission as the gating factor, rather than survival. Us old farts all have unsightly hairs enthusiastically sprouting from our ears because evolution isn’t influenced by traits that first appear in the post-reproductive years. If it were, we’d live a hell of a lot longer. Reproduction being the cutoff means that traits that are energy expensive and other possibly “detrimental” traits, as well as those that are completely and indifferently benign, can be passed on if the organism thrives through reproduction.


  10. >>Undeterred, Kurzeil and other singularity predictors express faith that some new technology will step in to keep things moving, whether it be new materials (such as graphene) or new paradigms (neuromorphic computing, quantum computing, etc). But any prediction on the rate of progress after Moore’s Law peters out is based more on faith than science or engineering.

    There are several examples of previous paradigm shifts prior to ICs (e.g. calculating aids like abaci, then mechanical calculators, then vacuum-tube-based computing, then transistors, then ICs). Kurzweil begins his charts in the 1890s, long before Gordon Moore.

    It’s not as if there’s a shortage of promising technologies making progress towards extending improvements in information processing. Just using very conventional improvements of what we can already see, it looks like a 1000-fold increase is pretty much in the bag (e.g. better lithography, 3D stacking, optical integration…).
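
    For a rough sense of scale, assuming a steady two-year doubling cadence (which is of course the contested assumption), a 1000-fold increase works out to about ten doublings, i.e. roughly twenty years:

    import math

    target_factor = 1000     # hoped-for overall improvement
    doubling_years = 2.0     # assumed cadence, roughly the classic Moore's Law pace

    doublings = math.log2(target_factor)   # about 9.97 doublings
    years = doublings * doubling_years     # about 20 years

    print(f"~{doublings:.1f} doublings, ~{years:.0f} years at a {doubling_years}-year cadence")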

    If you pay attention to the pace of discovery in areas like nanomaterial development, protein folding, and just plain experimental prediction through simulation, you can see it is accelerating very rapidly, and I think it is more likely than not to yield the anticipated paradigm shifts to maintain the exponentials. As we are now operating at exascales, even a projected 1000-fold increase should get us to a point where we are dealing with intelligence beyond what the world has ever experienced (which may qualify as the spark of the singularity).

    And don’t underestimate the amount of money (resources) that is the tailwind on technological “progress”, as dollars spent on information processing development have continued to increase as well. The sheer amount of human capital now focused on advancement (finding these new paradigms) is rather staggering in terms of human history.


    1. Thanks for commenting! Certainly there are lots of possible ways to move forward, but that’s true in many fields that aren’t moving forward that fast.

      And I think saying that a 1000-fold increase is in the bag, with the implication that it won’t require any trade offs between capacity, performance, cooling, or power usage, is an assertion that we can’t justify at the moment. The only examples we have of systems with that much capacity (organic brains) have very pokey performance and fluid cooled operation, albeit making up for it with a massively parallel architecture.


      1. “Now, Forced Physics, a company based in Scottsdale, Ariz., has developed a low-power system that it says could slash a data center’s energy requirements for cooling by 90 percent. The company’s JouleForce conductor is a passive system that uses ambient, filtered, nonrefrigerated air to whisk heat away from computer chips.”

        I read so many articles like this every day. This one advance (if it pans out) is one order of magnitude of cooling improvement. There is also a PRIVATE company, an Australian oil exploration company, building a 250-petaflop machine in Houston that will be liquid cooled.

        I do agree that a lot of work is ahead to get the 1000 fold increase. But 3D is already being used in memory chips and we have barely scratched the surface in using VOLUME over AREA. Our brains are very much 3D and that is a big gain in efficiency.

        If I were to guess, the biggest breakthroughs will come when we get a better handle on self-organizing nanostructures. We might very well be able to build something with greater information density than the brain and very low energy requirements.

        Five years ago, Japan’s K computer did a 1% brain simulation (1.7B neurons, 10 trillion synapses; 1 second of brain activity took 40 minutes of compute). That seems a long way off, but that was a 10-petaflop computer without chips specialized for such simulation. Now we are already running at exascale for some applications, and a full exaflop supercomputer is projected for 2021. Very realistically, we are only a few years from simulating the entire brain. What happens when you can simulate 10x the brain, or 100x?
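
        Naively extrapolating from those quoted figures (a linear-scaling assumption that real simulations may not obey), the back-of-the-envelope arithmetic looks like this:

        k_petaflops = 10          # quoted K computer capacity
        brain_fraction = 0.01     # 1% of the brain simulated
        sim_seconds = 1           # biological time covered
        wall_seconds = 40 * 60    # 40 minutes of compute

        slowdown = wall_seconds / sim_seconds                    # ~2400x slower than real time
        full_brain_same_pace = k_petaflops / brain_fraction      # ~1,000 petaflops, i.e. about 1 exaflop
        full_brain_real_time = full_brain_same_pace * slowdown   # ~2.4 million petaflops

        print(f"Whole brain at the same slowdown: ~{full_brain_same_pace:,.0f} petaflops")
        print(f"Whole brain in real time: ~{full_brain_real_time:,.0f} petaflops")

        So, by this crude estimate, an exaflop machine is roughly the scale needed to run the whole brain at the K computer’s (very slow) pace, while real time would need another factor of a couple of thousand on top of that.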

