A machine in the likeness of a human mind

In the fictional far future of the classic science fiction novel ‘Dune’, computers are taboo across all human cultures, the result of an ancient jihad that culminated in the religious commandment: “Thou shalt not make a machine in the likeness of a human mind.”  The result of this commandment is that computers, robots, and artificial intelligence of any kind are completely absent from the setting.  (If you’ve never read Dune, I highly recommend it.  Although I’m about to disagree with something in it, there are many reasons for its status as a landmark classic in science fiction.)

I’ve been thinking about this fictional taboo recently, because I think it highlights a common misunderstanding of the relationship between brains and computers.  I happen to think that the computational theory of mind is sound (something I realize some of you disagree with), but that doesn’t mean I think the brain is a general-purpose computing device.

Of course, the brain is definitely not a digital computer.  Its architecture is decidedly analog.  Transistors in computer chips are designed to be in one of two voltage states, which are interpreted as discrete 1s or 0s.  Synapses, by contrast, vary smoothly in strength.  There are many other differences, but for the purposes of this post, the one I want to highlight is that brains aren’t designed to load any desired software.  They evolved to handle certain tasks, and can’t be repurposed the way a general computer can.

Brains are famously malleable and adaptable, of course, but that adaptability is slow and has inherent limits.  No one has taken, say, a mouse brain, and made it run accounting or navigation applications.  Brains evolved to be the central command center of an animal, to increase that animal’s ability to find food and mates, and avoid predators.  Brains are better thought of as what are known in the information technology industry as appliances, that is, information processing systems narrowly designed for certain purposes, for running certain types of applications.  (A good example of this is the router most of us have at the center of our home’s wireless network.)

This fits with the data from evolutionary psychology and animal behavior research which shows that we are not born blank slates.  We come into the world with an enormous amount of cognitive “pre-wiring”, instinct, evolved programming, or whatever we want to call it.  Certainly brains can learn, depending on the available capacities of the specific species.  And some of that programming can be modified or resisted by learning.  But much of it can’t.  Much of it is central to what a mind does.

In other words, if the computational theory of mind is sound, then a mind is not just a computing system; it’s a specific application (or perhaps more accurately, a set of integrated applications) of a computing system.  This is the main reason why the idea of a computer “waking up” into a conscious state at some level of computing capacity is infeasible.  It’s a bit like saying that a computer might “wake up” to be a game system, or a tax filing application.  None of these applications will come into being unless someone engineers them.

Consider that the laptop I’m typing this on has more processing power than the brains of many insects.  Yet my laptop has shown no emergent insect-like behavior.  Why?  Because insect brains evolved for very specific purposes.  My laptop didn’t.  And it won’t behave like an insect unless someone painstakingly programs it to.

Why hasn’t anyone done this programming yet?  Because no one knows how yet.  We can’t program a computer to act like an ant, or a bee.  (At least not accurately.)  To do it, we’d need at least a moderately comprehensive understanding of how ant or bee minds work, and we don’t have that yet.  We certainly don’t have it for more complex animals such as mice, dogs, or humans.  And, based on statements from neuroscientists in the trenches of scientific research, we’re probably decades, if not centuries, away from that understanding.

But, many will say, no one engineered humans or other animals.  They simply evolved.  If it happened with them, why couldn’t it happen with artificial intelligence, if we set up the right environment?

In answer, I think we have to be aware of two broad facts.  One is that it took billions of years of evolution to produce animal minds, and half a billion years of additional evolution to produce human minds, and it’s far from clear that they were inevitable.  People have attempted to evolve digital animals, but from what I’ve read, nothing approaching intelligence has resulted, at least not yet.  And that leads to the second broad fact: we don’t really know what led to the evolution of intelligence, either broadly in the form of animal brains, or more specifically human level intelligence, which means we don’t know how to set up the right environment.

(Note that if the evolution approach did somehow succeed in generating intelligence, then the dangers many people fear would probably be valid.  Which, in my mind, is a good reason not to do it this way.  It seems unethical and dangerous, and not likely to generate usable technology even if it worked.)

None of this is to say that computers won’t continue to increase in capacity and capabilities.  I know I’m definitely looking forward to my self-driving car and more intelligent home appliances.  But I have no illusions that they will have minds, because we won’t know how to build those for a while yet.  And we’re about as likely to accidentally make one as we are to accidentally make a game console.

(And if the doubters of the computational theory of mind are right, then that only seems to increase how far away we are from developing a technological mind.)

All of which is to say that the Dune universe didn’t really need to be devoid of computers to meet its taboo against machines-in-the-likeness-of-a-human-mind.  They could have gotten along quite well with just mandating that no one ever develop a software mind.  Not that the distinction between computers and minds was as clear in the 1960s, when Frank Herbert was writing his famous novel.  But today, when many people are decrying the dangers of artificial intelligence, it’s a distinction worth being aware of.

54 thoughts on “A machine in the likeness of a human mind”

  1. I always felt the anti-computer mentality in Dune seemed like a huge overreaction. Then again, human history is full of huge overreactions. It did serve a story point of emphasizing that whatever happened in the Butlerian Jihad must have been really bad.

    1. Good point. And I don’t begrudge Herbert a plot device to remove technology from his setting that would have prevented him from telling the story he wanted to tell. I also find the idea of a new universal taboo interesting. Taboos against incest and in-group murder are prehistoric, but ones against cannibalism and human sacrifice seem like historical developments that happened in certain societies and then had to spread. It’s interesting to ponder what new future taboos might look like.

      I haven’t read the Anderson prequels, but from what I’ve read about them, they take a stab at portraying a traumatic species-level event that might lead to the taboo. But I’d always envisioned the Butlerian Jihad as more of a revivalist movement against a degradation in human vitality caused by overdependence on the machines. No doubt, that’s why Frank Herbert originally left the details out, so we could each imagine it in our own way.

    2. I think it’s supposed to be a huge over-reaction in-universe, even if the characters are biased because of their socio-cultural backgrounds. I can’t find the quote, but I swear that Frank Herbert compared it to the rise of religious fundamentalism in the US. It certainly coincided with a ton of overall religious conflicts in the setting, the result of which was the Orange Catholic Bible.

      I always imagined it as an outbreak of religious movements against technology that was seen as rendering humans as mere “pets” of thinking machines to be comfortably managed. The heavily flawed Butlerian Jihad novels by Herbert’s sons hinted at a setting like that from centuries before, where most of humanity had essentially shrunken down into a rather hedonistic, easy-going life-style of comfort and games supported by servitor robots (to be honest, that seems like an immense improvement over the present).

      1. It also strikes me as a much better lot than the one common humanity had by the time of the Dune story. I think that’s one of the reasons I always had limited enthusiasm for the Dune universe. It wasn’t one I could ever see myself wanting to live in. (The other was that none of the characters, including the protagonists, were ones I would have wanted to hang around with.)

  2. “No one has taken, say, a mouse brain, and made it run accounting or navigation applications.”
    That is true to some extent, but not entirely. Here is a snippet that briefly details an experiment where a computational task was “run” on a group of rat brains.
    http://www.nicolelislab.net/?p=683

    The second Brainet study, involving groups of three to four rats whose brains have been interconnected via pairwise brain-to-brain interfaces (BtBIs) further demonstrates how groups of animals’ brains can be combined to perform a variety of simple computational tasks. This study extends the original concept of BtBI to multiple subjects and shows how groups of animal brains can synchronize very quickly in order to solve a given computational task. Under some conditions, the authors observed that the rat Brainet could perform at the same level or even better than individual rats on a given task. These results support the original claim of the same group that Brainets may serve as test beds for the development of organic computers created by the interfacing of multiple animals brains with computers. This arrangement would employ a hybrid digital-analog computational engine as the basis of its operation, in a clear departure from the classical digital-only mode of operation of modern computers.

    1. Wow, that’s interesting. Thanks. It’s also a bit disturbing. From the details in the press release PDF, it sounds like they needed the cooperation and/or training of the animals to make it work, so it wasn’t really a case of clearing out the existing programming and replacing it with something else. Still, mind blowing stuff.

      I think I’ve also seen studies that took an isolated cluster of neurons and did information processing with them. In that case, completely removed from their overall architecture (and its associated programming), I could see that possibly being used as a general computation platform.

  3. I believe they discussed the overreaction to computers in one of the prequels by Brian Herbert. I don’t remember which.

    1. It was Dune: House Atreides. The people of Ix were basically creating computerized fighting robots, under the guise of “they just respond to feedback, they don’t actually think”.

      It sounded like they were skirting around the edges of the ban, using computers for stuff like automated landings of spacecraft and so forth.

      1. I can definitely see the plausibility of the overreaction, but I’m not sure about it lasting thousands of years. It seems like at least some fragments of humanity would have pushed back until some equilibrium was found. Maybe FH saw Ix as standing in for that fragment.

        1. I’m torn on that. If they literally have no computers – as in, no computing at all, no semiconductors, etc – then you’d basically need a planet (or at least a sizeable segment of a planet) to recreate computers and AI. It might be hard to do without provoking discovery and a severe counterreaction from the various groups invested in the no-computers-rule.

          1. Good point. I was more thinking that the force of the prohibition would weaken as the millennia passed and things might seesaw back and forth until some equilibrium was reached. I suspect Herbert was asserting that that equilibrium simply was no computers. He might draw that line differently if he were writing today. Or not, if he simply didn’t want automation in his story.

  4. I must confess that I completely fail to envisage how a machine might replicate the mental feeling/tone/mood of a human mind Mike, and which itself obtains within a feedback loop which passes through our perpetually dynamic nervous systems. How can one be replicated without the other, or is the machine a facsimile of both?

    1. Indeed Hariod, that is the question. As I pointed out in the post, we don’t know how to do it, and it seems like we have scant chance of doing it anytime soon.

      That said, if we view minds as systems that exist within, and are subject to, the laws of the universe, then there shouldn’t be any fundamental roadblock to us eventually doing it. It might be centuries away, but unless civilization destroys itself, I think we’ll get there.

      1. I don’t know if you’re addressing me Wyrd; it seems that you are though. Thank you, but no, I haven’t read that, and based on your recommendation – perhaps I wrongly assume it is that – am off to investigate it now. Actually though, I don’t think we have to be terribly clever, nor appeal to any greater authority, to ascertain that ‘the human mind’, which presumably includes all conscious states, is integrated within the nervous system, with its flux of feelings and attendant emotional impressions that become represented in ‘the mind’, or those aforementioned conscious states.

        If by using the phrase ‘the human mind’ all that is implied is functionality and data processing then that’s a different matter, as it rather dismisses self-evident conscious states of mental feeling/tone/mood. It seems that it’s proprioception that fools us into thinking we have conscious agency over all motor action, and even the apparent conscious choices we make, but Libet began to debunk that idea back in the seventies I think. Yet it’s this proprioceptive sensation i.e. ‘I feel that I am doing this’ that is very much central to what ‘the human mind’ is – if we can use such a phrase meaningfully.

        1. Yep. I was replying to your comment about feedback, Hariod. I think we can all agree that whatever the human mind is, at the very least, it’s some kind of emergent phenomenon.

          I’ve been pondering lately how generating laser light and microwaves supervenes on very specific physical and temporal criteria, and I’ve begun to wonder to what extent mind may also depend on such. As with lasers and microwaves, perhaps building a mind requires specific physical conditions.

          1. What if it is only partially an emergent phenomenon Wyrd? Conscious knowing ’emerges’ by means of the senses and brain due to their own unconscious processes, of course. To speculate Chalmers style: could it be that the knowing of the knowing – the ‘illumination’ of it if you will, or ‘awareness’ – partakes in itself as a universal phenomenon that inheres in all other phenomena as well? Analogy: a light that illumines any and all individuated consciousness. There’s no evidence for such theories, yet there’s also no unimpeachable evidence as to what illuminative awareness is, just a broad tendency to reduce consciousness (bald knowing) to brain states and functionality – the stuff we can look at. To dismiss what may lie beyond evidence seems risky somehow to this little ape brain of mine. Of course, if there is any such universal phenomenon, then it will inhere in the machine mind too – but then, would it really be a ‘machine’ mind?

            You’ve completely lost me on lasers and microwaves my friend. Are you also talking about mind being in part dependent for its existence upon external phenomena? ‘Temporal criteria’?

          2. “What if it is only partially an emergent phenomenon Wyrd?”

            That would be like being partially pregnant. 🙂 If a system has emergent properties, it’s an emergent system. I’m aware of Chalmers’ idea (panpsychism, yes?); he himself calls it “fanciful.” It’s a very interesting speculation, and it aligns with another: that consciousness could be a fundamental universal force like gravity, electromagnetism, or the nuclear forces.

            As you know from previous conversations, Hariod, I’m agnostic on Theories of Consciousness, although I do have pronounced dualist, even spiritual, leanings (or perhaps “suspicions” is a better term). As such I agree without reservation that:

            “To dismiss what may lie beyond evidence seems risky somehow”

            Absolutely. The only thing we’re certain of right now is we don’t understand Chalmers’ “hard problem.” At all. No clue.

            “You’ve completely lost me on lasers and microwaves my friend.”

            If you look at Mike’s post previous to this one, I raised the topic there, and you’ll find more detail (see comment from Aug 2, 2015 at 9:51 pm).

            The short version is that you can’t compute laser light or microwaves. There is no algorithm that generates them. They require specific physical characteristics (resonating cavities, for one) and have inherent periodic behavior. Neurons also have periodic behavior, which leads me to wonder if mind might depend on specific physical properties.

            A key question is whether mind is strictly algorithmic and can run on any device running the right code. Processing speed would be irrelevant. If so, the Chinese Room and the Chinese Nation are both potentially conscious. Any physical device that runs the mind algorithm would be conscious.

            But if certain physical or temporal properties are key to consciousness, then a “brain” must have those properties for consciousness to arise.

          3. Yes, sloppy wording on my part Wyrd, for which, apologies. I think I explained what I meant though, even if it was speculative in the extreme. I think all this talk of ‘consciousness’ often fails to make a distinction between what consciousness supposedly is – i.e. being ‘with knowledge’ – and its other quality of an objectless, knowledge-less awareness, or the pure lucidity of that thing we call ‘mind’. It surprises me that some writers even think the whole is language dependent, but some seem to.

  5. Likewise, the Dune universe has a ban on nuclear weapons. It’s been a long time since I’ve read the novels, but my memory was that computers are used to do mundane tasks, but nothing like a human-level AI is permitted. Is that a false memory?

    1. If I recall correctly, every noble family had nukes (“family atomics”) but their use in war was prohibited by the rules of war. Paul Atreides used them to remove a geological obstacle to his army but was careful not to use them against the enemy. (As, if I’m remembering right, he explained when the Emperor accused him of a war crime.)

      I can’t recall even simple computers ever being used in the stories, but it’s been decades since I read the books and I might well have forgotten it.

      1. No “thinking machines” which included computers. They used people trained as Mentats to do their calculating. What I’ve been trying to recall is whether even (unthinking) human-form machines were banned. That might be some other author I’m thinking of.

  6. “In […] ‘Dune’, computers are taboo […], the result of […] the religious commandment: ‘Thou shalt not make a machine in the likeness of a human mind.'”

    I think being based on a religious view is instrumental — Dune is all about religion. And as you say, the novel predates AI research by a good bit. (It’s interesting to consider what experts at that time thought would be possible “real soon now.” 🙂 )

    As an aside, Asimov’s early robot novels (The Caves of Steel [1954] and The Naked Sun [1957]) featured a humanity divided between Earthers and Spacers. The former had strict prohibitions against robots while the latter had (“positronic” brain) robots indistinguishable from humans.

    “But today, when many people are decrying the dangers of artificial intelligence, it’s a distinction worth being aware of.”

    I think those who are knowledgeable about the possible dangers of AI are clear on this distinction. No one is concerned about Siri or self-driving cars (not in the AI-danger sense). The perceived problem isn’t so much a human mind, but in a highly adaptable goal-seeking system with perfect memory, extremely fast processing, and network access. Such an AI may be nothing at all like a human mind and still pose a significant threat.

    There are some YouTube videos that I think are highly worth watching. I think they state the case for concern very well and are quite educational. (Robert Miles is a PhD student in the Intelligent Modelling & Analysis Group at the School of Computer Science, University of Nottingham.)

    I’ll include them separately in case you feel I’m abusing your hospitality…

    1. One of the Asimov robot books had a planet where every human was surrounded by robots, with those humans rarely having any contact with another human, to the point that they had developed a phobia of it. I’m reminded of Charles Stross’s ‘Saturn’s Children’ where humans went extinct, not because the androids rebelled or anything, but because, with their every immediate wish being fulfilled, having babies wasn’t a high priority for the humans. It’s what I suspect Frank Herbert’s original Butlerian jihad was about, before Brian Herbert and Kevin J Anderson, from what I’ve heard, turned it into more of an existential slave rebellion.

      I think some of the people concerned about AI understand the distinction between artificial minds and increasingly sophisticated systems, but based on comments from people like Elon Musk, many don’t. And my experience is that a lot of people who think they understand the distinction often reveal in the details of their concerns that they really don’t.

      Wow Wyrd. That’s a lot of videos. I hope I won’t disappoint you too much if I don’t have time to watch them, at least not tonight. If there are points from these videos you’d like to discuss, I’d be happy to do so if you want to summarize them.

      I have seen Nick Bostrom’s TED talk. It probably won’t shock you that I wasn’t impressed with it, finding it loaded with unjustified assumptions, including losing that distinction mentioned above. For example, if I recall correctly, he compares the relationship between humans and chimpanzees to the one between AIs and humans. Except that humans and chimpanzees both have motivations from evolutionary programming that an AI won’t, programming we won’t know how to add, if we even wanted to, until we understand animal minds. There were many others, but I’d have to watch it again to list them all.

      1. “One of the Asimov robot books had a planet where every human was surrounded by robots,…”

        Yep, that’s the Spacer society from both the Asimov books I mentioned. Both are murder mysteries, which was Asimov’s third genre after science and science fiction. (He wrote a number of non-SF mysteries.)

        “…based on comments from people like Elon Musk…”

        When I say “knowledgeable” I mean computer scientists or those who’ve at least studied computer science. People who actually work in the field.

        “If there are points from these videos you’d like to discuss,…”

        The Robert Miles vids explore the potential danger of a highly adaptable goal-seeking system with perfect memory, extremely fast processing, and network access. Significantly, they’re not about attempting to re-create a “human” mind, just an intelligent system.

        As I commented below the third one, it’s actually the inhumanity of such a system that might make it potentially more dangerous than a “human(ane)” one.

        1. One problem with listening to computer scientists about human level AI (or more broadly the implications of technology) is that they have been consistently wrong about it for decades. Oh, they’ve often been right about the capacities of the technology itself, but woefully wrong about what would be involved in matching human intelligence. The people with some computer knowledge, but also knowledge in neuroscience and evolutionary psychology, such as Steven Pinker or Daniel Dennett, have far more balanced views in this area.

          I submit that it’s not just the inhuman nature of the intelligence that people are worried about, but the inhuman predatory nature. The question is where such a system would get its predatory nature without someone putting it there (which again, we really don’t know how to do yet). Without the predatory programming, you have a system that could run out of control, but could be hemmed in with many of the safeguards we already use, but which also will eventually include other intelligent systems to keep it in line for us.

          1. “[Computer scientists] have been consistently wrong about [human level AI] for decades.”

            Are you saying that, because experts in the field can’t predict future payoffs, what they say about AI should be discounted? The thing about science and research is that there’s no way to predict the payoff. When we’re exploring uncharted territory, the whole thing is we don’t know what’s there.

            Fusion, for example, has been “twenty years away” for a very long time now. String theory is another example. That we don’t know the results of science doesn’t devalue the science. Any scientist will tell you future predictions are just guesses and hopes.

            “Oh, they’ve often been right about the capacities of the technology itself,…”

            Which I thought was what is at issue here.

            “The people with some computer knowledge, but also knowledge in neuroscience and evolutionary psychology, such as Steven Pinker or Daniel Dennett,…”

            Absolutely. Those are valuable voices as well (perhaps less valuable when talking about computer science than actual computer scientists, though). You seem to be discounting that computer scientists might also have “some knowledge” in neuroscience, psychology, evolution, or philosophy. They do, you know.

            I’d also point out that computer science is a “harder” science than psychology or philosophy and that evolutionary psychology is viewed with some suspicion by many anthropologists (because it often makes assumptions about the ancient past based only on modern evidence, which is all it has available). The point is that these sciences have a certain amount of guesswork involved.

            “…have far more balanced views in this area.”

            Dude, I think that’s your prejudices talking.

            “I submit that it’s not just the inhuman nature of the intelligence that people are worried about, but the inhuman predatory nature.”

            I think you’ve missed the point and you’re anthropomorphizing. We’re talking, essentially, about ‘search and optimization’ applications. (This is covered very well in the Robert Miles videos.) We’re talking about systems that have some model of reality and a goal. We build (crude) systems like that now. I’ve built (very crude) systems like that.

            Google self-driving cars are such a system, albeit restricted to a very specific domain (so their internal model of reality is manageably small and within our current grasp).

            “…you have a system that could run out of control, but could be hemmed in with many of the safeguards we already use,…”

            How often have Adobe, Microsoft, Apple, and many others pushed out urgent patches for nasty bugs they find in a seemingly never-ending stream? Remember the Heartbleed bug? If we can’t get “simple” (in fact, actually very complex, hence the problem) systems right, how can we expect we’ll get even more complex ones right?

            Bostrom makes the point that creating an HLMI is hard. Creating a safe HLMI is, obviously, some degree harder. (The primary point of his talk is that therefore we should start thinking very hard about that safety now. What if some researcher does create a powerful HLMI without those safeguards?)

            A big point here is that the risks are potentially high. Like studying dangerous biologicals, it just takes one mistake for serious consequences.

            “…but which also will eventually include other intelligent systems to keep it in line for us.”

            Heh, “eventually” just might be too late. Consider another scenario. We already use tools to do things we can’t do. We already use software to create software too complex for any human to create. What happens if we design an AI capable of designing a better AI too complex for us to understand?

  7. Note that he’s not at all talking about a human consciousness. (One might even argue that making a machine mind more human could make it safer if, in fact, human intelligence leads to morality.)

    1. “I have seen Nick Bostrom’s TED talk. It probably won’t shock you that I wasn’t impressed with it,…”

      You’re right. I’m not surprised. 🙂

      “…finding it loaded with unjustified assumptions,…”

      Loaded? Can you name some of them? I listened to the whole thing again, and I’m not sure what you mean. It seems pretty well grounded in the facts as I know them.

      “…including losing that distinction mentioned above.”

      Maybe you saw some other talk? He’s talking strictly about HLMI, not about human-like consciousness. I included this one because it also addresses the problems inherent in a highly adaptable goal-seeking system with perfect memory, extremely fast processing, and network access.

      “I recall correctly, he compares the relationship between humans and chimpanzees to the one between AIs and humans.”

      He first compares the great ape Kanzi, who knows 200 lexical symbols, to Ed Witten (string theory genius) and points out how small the differences between them actually are. His point there is that fairly minor changes take us from “broken off tree branches to Intercontinental Ballistic Missiles.”

      His deeper point is that other fairly minor changes could lead to enormous changes to intelligence. He goes on to point out the high performance of electronics compared to human neurons.

      “Except that humans and chimpanzees both have motivations from evolutionary programming that an AI won’t, programming we won’t know how to add, if we even wanted to, until we understand animal minds.”

      Animal minds have nothing to do with it. None of this is about animal minds; that’s kind of my whole point. It’s about — and only about — a highly adaptable goal-seeking system with perfect memory, extremely fast processing, and network access. Systems we’re much closer to creating than we are human-like consciousness AI (in some regards, that is a separate field).

      A point about evolution: Birds spent billions of years evolving, but humans have created flying machines that far out-perform them. Horses spent billions of years evolving, too, but again humans have created machines that vastly out-perform them.

      “There were many others, but I’d have to watch it again to list them all.”

      Please do! I thought his talk was pretty on the money.

      1. On the human-chimpanzee comparison, here’s the relevant bit from the transcript: http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript?language=en
        “Now you might say, if a computer starts sticking electrodes into people’s faces, we’d just shut it off. A, this is not necessarily so easy to do if we’ve grown dependent on the system — like, where is the off switch to the Internet? B, why haven’t the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.”

        I think he’s conflating “intelligence” with animal instincts. Again, where are those animal instincts going to come from?

        Another point was his assertion that an AI, in order to make us smile, might resort to electric stimulation of our muscles. Keeping such a system under safeguards wouldn’t be effective, he argued, because it could use social engineering to trick us into letting it out.

        The first issue is the logical contradiction of a system being socially and psychologically savvy enough to trick us (which would require extraordinary social awareness), but not savvy enough to realize we don’t want to smile by having electrodes stuck to us. The second issue, though, is that here we see the fear of animal mind motivations sneaking in. Where would the AI get its motivation to break out?

        Bostrom again (emphasis mine): “Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants.”

        But again, what determines what it wants?

        Bostrom: “To make any headway with this, we must first of all avoid anthropomorphizing.”

        This is true, but we also have to avoid biomorphizing, projecting conscious animal desires (human, mammalian, reptilian, etc) where there are none. (No, “biomorphize” isn’t a word as far as I know, but I couldn’t find any other one to relay the concept. 🙂 ) AI is more than just not human. It’s not alive, without any of the built in motivations that all living things possess, including the predatory ones people fear. At least not until we understand how to put those motivations there.

        1. “I think he’s conflating “intelligence” with animal instincts.”

          He’s really not. The Robert Miles videos explain this pretty well, but I’ll try for a thumbnail sketch.

          We’re talking about a system with some kind of internal model of reality and one or more goals. (For example, Google cars have a model of driving and a goal of driving safely.) These are essentially optimization applications that seek acceptable maxima in a “landscape” of options available in their model.

          As the models become more complex, systems “understand” reality better and have more options available to them. For example, a Google car might not “understand” that it can avoid a sudden obstacle by driving through someone’s fence into their vacant yard (rather than, say, futilely slamming on the brakes). If that system has a better model of reality, if it “understands” the fence isn’t really an obstacle and that a vacant yard is “safer” then it might take that option.

          Imagine an HLMI with a model of reality that permits it to recognize that one obstacle to its goals is being turned off. This is not “fear of death” but strictly a recognition of an obstacle impeding its programmed goals. In seeking to optimize its current situation to a better maximum, it may seek steps that prevent it from being turned off.

          Suppose it’s as simple as recognizing that making copies of itself optimizes its situation? If one goal-seeking system is good, many must be better.

          “Another point was his assertion that an AI, in order to make us smile, might resort to electric stimulation of our muscles.”

          You missed the part where he said, “[T]hese are cartoon examples.” They are attempts to illustrate how an HLMI might solve problems in a way we hadn’t anticipated. Robert Miles makes the point that, as a weak chess player, he can’t predict what an advanced chess player will do in a game. When more options are available to an optimizing system, we can’t predict which ones it might pick.

          “The first issue is the logical contradiction of a system being socially and psychologically savvy enough to trick us (which would require extraordinary social awareness), but not savvy enough to realize we don’t want to smile by having electrodes stuck to us.”

          But you see, that is a human assumption based on your very detailed model of reality. Suppose the system decided that smiling all the time had higher value (it being, after all, the primary goal of the system) than our desires regarding electrodes. One of those is a programmed goal; one of those is, at best, an assumption.

          The deeper point is that when building such a system we need to be sure to include limits on its goals, and given we can’t get simple software right, why would we think we’d get a vastly more complicated system right?

          “The second issue though, is here we see the fear of animal mind motivations sneaking in. Where would the AI get its motivation to break out?”

          Simple recognition that doing so optimizes its programmed goals.

          “But again, what determines what it wants?”

          We do. The whole point of building such a system is that it has goals. A Google car wants to drive safely. A chess program wants to win a chess game.

          “AI is more than just not human. It’s not alive, without any of the built in motivations that all living things possess,”

          We have goals. We seek to optimize our situation to achieve those goals. That is exactly what these systems do. Google cars and chess programs do it in very crude fashion, but these systems are improving all the time (just consider IBM’s Watson).

          1. Wyrd, I’m going to consolidate my response here.

            As I’ve said before, I think the fear of unintended consequences is a valid concern. I don’t know anyone who argues that we shouldn’t use care in programming a system that’s more intelligent than we are. But I think it’s a danger we have to keep in context. One of the points of an intelligent system is not to cause havoc with its methods. I personally think increasing intelligence will make these kinds of runaway processes less likely.

            That said, we are unavoidably going to occasionally get the programming wrong. It will happen. When it does, the point I made above is that runaway systems are going to be surrounded by systems of similar sophistication and complexity. Just as we run anti-virus software and firewalls today, I’m pretty sure there will be systems in place to combat these types of processes.

            Your point above that we already use tools that we don’t understand I think is exactly right. Do we have the potential to destroy ourselves? Yep. That’s been a danger since 1945, and we seem to be regularly inventing new ways to do it. I don’t see any way to put these genies back in the bottle. We will be monkeys controlling processes we have limited understandings of.

            At least until we start enhancing ourselves. I’d actually be more worried about what happens at this stage than about straight AI, since an enhanced human comes with all the emotions and evolutionary instincts that could cause trouble. Hopefully there will be enough enhanced ethical humans to keep the troublesome ones in line.

          2. “I think the fear of unintended consequences is a valid concern.”

            Yep. Really, there are three broad areas of concern: (1) Unintended consequences; (2) Mistakes; (3) Malicious or criminal behavior.

            “I personally think increasing intelligence will make these kinds of runaway processes less likely.”

            We can hope so! I think history shows that can be wishful thinking. It often takes us a few goes to get complex systems right (remember early rocketry? 🙂 ). And just consider that constant stream of patches from Adobe, MS, Apple, etc.

            “…runaway systems are going to be surrounded by systems of similar sophistication and complexity.”

            Which are prone to the same problems. My Norton AV suite has been patched several times in the past couple of years. (I’m not talking about signature updates (which are nearly daily), but patches to the app.)

            Hackers constantly get past firewalls and other protective measures by trying things the protocol doesn’t expect. (I’m certain the Heartbleed bug was discovered, not by spotting the error in the code, but by trying an “illegal” operation that got a surprising result.)

            If a sophisticated enough HLMI decided to attack its “guard” software, we’d better hope that guardian is sufficiently bulletproof, and history suggests that’s a pretty big challenge.

            In James P. Hogan’s The Two Faces of Tomorrow we build an AI in a space station with no link to Earth. It nearly wasn’t enough. (For an SF novel — which needs to be interesting to read — he paints a pretty good picture of one path to disaster.)

            “I don’t see any way to put these genies back in the bottle.”

            Absolutely. Nor would I advocate we try. And Bostrom makes the point that he is optimistic these problems can be solved (look at 13:43). His overall point is two-fold: We need to be aware of the danger; We need to think about this now.

            I think he’s right on both counts. I think a key point is that the consequences of screwing up have the potential to be severe. We’re not talking annoying video game bug here!

            “At least until we start enhancing ourselves.”

            Yeah, I’m with ya on that one. Definitely scarier. Another scary area is genetics. We’re getting to the point of “genetic toolkits” that allow less expert users to experiment with genetics. That’s like those spammer and hacker toolkits that open the doors to largely untrained spammers and hackers.

            More people messing around means more chances of mischief.

            “Hopefully there will be enough enhanced ethical humans to keep the troublesome ones in line.”

            Ha! Good one! XD

            Oh, wait… you were serious?

            (J/K. Yeah, me, too.)

          3. Certainly the protection from more intelligence and similarly capable systems is not going to be perfect, just as the modern equivalents today aren’t perfect. But the modern systems mostly work. And I think the future versions will be enough to stop us from being converted into paperclips. In any case, I’m not sure what choice we have.

            Bostrom and his ilk talk like their solutions are something we can design now. I suspect giving AI “human values” is going to involve making it a lot more like a human mind than we’ll know how to do, at least until we have a moderately comprehensive understanding of the human mind, which, given the trend lines, probably won’t be until well after we have human level intelligent machines. Until then, we’re simply going to have to depend on the safeguards we can implement.

            I think it’s also worth considering that there may be obstacles to the superintelligence everyone frets over. This article discusses some of them that have come up in AI research.
            http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577

          4. “But the modern systems mostly work.”

            Absolutely! I rely heavily on Norton AV!

            If someone told you the safe-guards at the local deadly virus research lab “mostly work” would you be comfortable with that? I agree we’re talking about — at best — outside chances. No doubt about that.

            The odds of an “Earth-killer” asteroid are also extremely small, but I think it’s worth being aware of the possibility and thinking about our options.

            “Bostrom and his ilk talk like their solutions are something we can design now.”

            I think you might be reading into that; I don’t get that impression. For example, at 13:26 he says, “I believe that the answer here is to figure out how to create super-intelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.”

            He goes on to say that, nevertheless, he’s optimistic it can be solved.

            At 14:53 he says, “And there are also some esoteric issues that would need to be solved, […]. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a super-intelligent A.I., but fairly difficult.”

            The way I read it, all he’s saying is that there is a potential problem, the possible consequences of that problem might be extreme, we should be aware of this and take it into our considerations now.

            I think we would agree on that?

            “I suspect giving AI ‘human values’ is going to involve making it a lot more like a human mind than we’ll know how to do,”

            Yeah, and Bostrom agrees it’s an additional challenge. A lot of current AI research involves some form of training, somewhat like we train children, and, as we do with children, that may be one point where we can instill those values. As Bostrom says, “Instead, we would create an A.I. that uses its intelligence to learn what we value,…”

            The idea that we’ll build something like that from scratch is, I agree, preposterous.

            “This article discusses some of them that have come up in AI research.”

            That’s a neat article. At first glance, it seems — despite the author’s apparent bias against Bostrom — to support most of the points Bostrom (and I) made. I’m reading it in more detail now, but so far I agree with his points.

          5. We are beset by lots of low probability existential threats. Pondering all of them can make one go into a fetal ball. I find keeping the probabilities in context helps. But you’ll sell a lot more books and get a lot more publicity running around saying that the sky is falling.

            On the article, I think it’s a good one, but I didn’t see it supporting Bostrom’s points, or any of the usual AI alarmism (or triumphalism for that matter). It does make some alarmist points about possible economic displacements, but technology has been doing that ever since the first stone tool.

            The main thing I found interesting in it was the discussion of the limitations that might prevent a “super-intelligence”. This struck a chord for me, because I feel the super-intelligence assumption is one that isn’t questioned often enough. It’s assumed that Moore’s Law will eventually overcome any difficulties, as though it were some law of physics, when in reality we might have already gone through all the low hanging fruit.

          6. I agree with your first paragraph. The topic of AI is in my knowledge domain and of particular interest to me.

            Geist tries to refute Bostrom, but a careful reading sees Bostrom’s points shining through. In particular, Geist fails to make his point about how unlikely HLMI is. His argument is exactly like one that might have been made in the mid-1800s about the unlikelihood of powered flying machines (remember all those film clips of laughable attempts?).

            “I feel the super-intelligence assumption is one that isn’t questioned often enough.”

            Am I missing something here? I’ve questioned it with you many times (to what I perceived as your disagreement — have I misread your position?).

            A key place I thought we disagreed was our view on the likelihood that mind is computational. If it is, then wouldn’t any AI mind even close to ours already be more advanced merely in virtue of having perfect memory and extreme processing speed? (I’ve long thought “IQ” might be a biological analog of that.)

            I do agree Moore’s Law is irrelevant. If mind is computational, memory capacity per chip, transistors per CPU, and even processing speed, are all irrelevant.

            I found the article interesting, albeit biased, and I do agree with his main point (that there are other more clear and more present dangers policy-makers should devote their main interest to). I’ll even go along with a charge of alarmism against Bostrom (although that doesn’t mean he isn’t right or doesn’t have a good point).

            FWIW, Geist is involved in his own alarmism. He’s a historian and an expert on national security issues, especially nuclear. The article appears in the Bulletin of the Atomic Scientists, which prominently displays on its masthead, “IT IS 3 MINUTES TO MIDNIGHT®”. The BofAS created the Doomsday Clock in 1947 — in 1991 it was up to 17 minutes, but it’s gone down since to 3 minutes.

            “IT IS 3 MINUTES TO MIDNIGHT®” is a scare tactic (although that doesn’t mean it isn’t right or doesn’t have a good point). One observation might be that, in the 68 years of the Clock’s existence, no nukes have ever been set off in anger! (Both bombs on the Japanese were in 1945.)

            Perhaps alarmism sometimes has a point.

          7. Not sure what you thought my views were on super-intelligence. I’ve questioned the idea many times on this blog going back to some of the earliest posts. (ex- https://selfawarepatterns.com/2013/12/26/singularity-assumptions-that-should-be-questioned/ )
            I do think it’s most likely that the mind is computational, but I don’t see the logic that obliges me to then accept the idea of super-intelligences.

            You’ve mentioned perfect memory and perfect recall a few times. Based on the neuroscience I’ve read, it sounds like the brain manages to punch substantially higher than its actual computational capacity using a number of tricks, the cost of which is imperfect memory or recall. For instance, when we remember something, we appear to map the sensory patterns as much as possible to existing patterns, only taking up new storage with the things about that experience that are unique. When a conceptual pattern that is linked to by numerous other memory patterns changes, all those linked memory patterns are also effectively changed. Hence, if this is right, our recall problems. And we likely forget most of our experiences to preserve storage (prioritizing which ones we keep by some unknown criteria).
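
            If that picture is right, here’s a purely speculative toy sketch of the trade-off (the structure is invented for illustration, not an actual model from the neuroscience): memories store only links to shared concept patterns plus their unique details, so storage is cheap, but updating a shared pattern silently changes every memory that links to it.

            # Speculative sketch: episodic memories hold references to shared concept
            # patterns plus only their unique details. Cheap on storage, but editing a
            # shared pattern retroactively alters every memory linked to it.

            concepts = {"beach": "sand, waves, gulls"}

            memories = [
                {"links": ["beach"], "unique": "dropped my ice cream"},
                {"links": ["beach"], "unique": "found a blue starfish"},
            ]

            def recall(memory):
                shared = "; ".join(concepts[c] for c in memory["links"])
                return shared + "; " + memory["unique"]

            print(recall(memories[0]))  # "sand, waves, gulls; dropped my ice cream"

            concepts["beach"] = "crowded boardwalk, hot sand"  # the shared pattern changes...
            print(recall(memories[0]))  # ...and the old memory is now "recalled" differently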

            It may well be that achieving human levels of effectiveness with technology will require many of the same trade offs. (Which in most cases I think we’d forgo for computing systems in order to keep the advantages you mention.) Of course, we might get the best of both worlds with some sort of combined architecture. The question is whether we’d do that or just enhance existing humans. I suspect we’ll do both.

          8. I confess, it never occurred to me someone could believe mind is computational and that its implementation in our brains is the acme of what’s possible. Honestly, those seem contradictory beliefs to me.

            It would certainly be a first in our history of tool-making. Every tool we make that does what we do outperforms us. Artificial eyes (lens, film and CCDs) are far superior to ours. Artificial muscles (levers, pulleys, hydraulics, etc) are far superior to ours. Likewise artificial means of moving over land and through water. And we can’t fly at all.

            Even animals outperform us (Eagles have better eyesight, monkeys have better muscles, fish swim better, many animals are faster, etc.) It’s often been remarked that human success, despite our general physical inferiority, is due to our generality and our intelligence.

            We are the most intelligent. Do we agree intelligence varies among people, that there is a spectrum of intelligence? If so, what accounts for that? For one example, why do some get jokes quickly while it takes others a while to work through it? Why do some have excellent memory, but others don’t?

            Where would an artificial mind — given that it’s just an algorithm — fall on that spectrum? On the presumption we’ve worked out the algorithms for mind, wouldn’t such a mind be at least as good as the best human minds?

            Given that biological defects, drugs, and injury, are all known to impair intelligence, wouldn’t an artificial mind lack any of those impairments? At worst, it would be the clearest, least impaired mind known.

            Because electronic signals travel much faster than biological ones, and because processing devices work much faster, wouldn’t an artificial mind, at worst, be faster?

            Surely the very worst we could do is create a super-fast Einstein?

            Why would that be the limit? (Ha! In my head I’m hearing the opening bit from The Six-Million Dollar Man — “We have the technology. We can make him better than he was. Better…stronger…faster.”)

            “Based on the neuroscience I’ve read,”

            Why would the science of neurons matter if mind is strictly computational?

            “And we likely forget most of our experiences to preserve storage…”

            Which wouldn’t be a problem for a machine, right?

            “It may well be that achieving human levels of effectiveness with technology will require many of the same trade offs.”

            What makes mind so special that we couldn’t create an artificial one that is better? Isn’t that species chauvinism? You are effectively saying no machine mind could be better than a human one.

          9. I’ve never asserted that humans are the pinnacle of cognition (please see the linked post for my actual views), nor whatever version of computationalism you’re talking about that apparently doesn’t think neuroscience is a worthy endeavor. I’m totally cool with discussing the strengths or weaknesses of my actual views, but not the nearest straw man.

            I value our discussions, but I don’t think philosophical discussions can be productive without at least a modest charity of interpretation.

          10. Okay, let’s take a step back here. You had said:

            “I do think it’s most likely that the mind is computational, but I don’t see the logic that obliges me to then accept the idea of super-intelligences.”

            I responded with an argument demonstrating and supporting that logic. None of which points you responded to in your reply. And I assure you, none of them were straw dogs. Nor is there anything particularly philosophical about this; we’re talking about physical realities here.

            You had said:

            “Based on the neuroscience I’ve read, it sounds like the brain manages to punch substantially higher than its actual computational capacity using a number of tricks, the cost of which is imperfect memory or recall.”

            You seem to be supporting an argument against super-intelligence by arguing mind can only work that way, with those limits. Then you conclude:

            “And we likely forget most of our experiences to preserve storage”

            So what about an artificial mind with no limits to its storage?

            What about the vastly faster processing speed of electrons versus neurons?

            You seem to be arguing that the biology of the brain is crucial, but if so, how can mind be computational? A defining characteristic of what we mean by computational is that substrate doesn’t matter and that any viable architecture works.

            You had said:

            “It may well be that achieving human levels of effectiveness with technology will require many of the same trade offs.”

            An assertion in support of your view opposing super-intelligence, right? Now you say:

            “I’ve never asserted that humans are the pinnacle of cognition”

            Okay, fair enough. What would be?

            “…nor whatever version of computationalism you’re talking about…”

            To say that mind is computational is to assert it can run on any device stable enough and capable enough. That’s what we mean by computation. If it supervenes on some form of biology, then it is not purely computational.

            “…that apparently doesn’t think neuroscience is a worthy endeavor.”

            What I said was, “Why would the science of neurons matter if mind is strictly computational?”

            Perhaps that wasn’t clear. My question is: Why would the biology of neurons matter if mind is computational? There’s no question studying the one example of mind we do have is vitally important, but unless we’re building a biological brain, it stops mattering when we build an artificial computational one.

            (Obviously, if we’re building a neural network, it would matter more. But in that case, mind supervenes on architecture and is — once again — not strictly computational.)

            Our biology has all sorts of limits due to evolution. We have only so much memory. Neurons transmit signals slowly. We’re made only from specific materials. We’re subject to vagaries of natural and artificial chemistry.

            An artificial mind would have none of those limits. A strictly computational mind would have even fewer.

          11. I think there’s a lot of room between a human level intelligence and a god-like super intelligence. You might insist that even a 10% improvement over human level intelligence is still a super intelligence, but it seems clear that it would be a lot less “super” than the astronomically improved ones that often get contemplated.

            My overall point was that the assumption that the god-like versions are inevitable may be hasty. Incidentally, this doesn’t necessarily refer to a hard laws-of-physics limitation (although that could be an issue). It might be that the economics in a post-Moore’s-Law world get in the way.

            Computationalism does imply that it’s possible to run a mind on a different substrate. That doesn’t mean that some substrates might not be better than others. Even in normal computing, the hardware architecture matters. Attempt to do complex floating point calculations without a floating point coprocessor, and you pay a steep performance penalty. Hardware neural networks help with pattern matching algorithms; those same algorithms can be implemented with software neural networks, but at a substantial cost in performance.
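
            To make the point concrete, here’s a quick Python sketch (entirely my own, and only a loose software analogy for the coprocessor case): the same dot product computed by an interpreted loop and by an optimized native routine. Both give the same answer; only the speed differs, which is all “the architecture matters” means here.

```python
# A rough, purely illustrative sketch (my own, not from any source above):
# the same dot product computed two ways. The pure-Python loop and NumPy's
# np.dot() return the same value; only the speed differs, because np.dot()
# hands the arithmetic to optimized native code.
import time

import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))   # interpreted, element by element
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = float(np.dot(a, b))                # vectorized native routine
t_vec = time.perf_counter() - t0

print(f"loop:   {slow:.2f} in {t_loop:.4f}s")
print(f"np.dot: {fast:.2f} in {t_vec:.4f}s")
```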

            I’m not sure what you mean by “strictly computational,” but if the above wouldn’t fall into that category, then you could probably take me out of the “strict computationalist” camp. It’s probably worth noting that of the -isms I usually accept, I almost never accept the “hard”, “strong”, or “strict” versions.

            Anyway, it may be that everything needed to run an integrated human level intelligence requires a lot more processing power on an electronic substrate than an organic one. Predictions in the past of just how much computing power was needed to match a human brain have repeatedly been shown to be wrong when that level of processing power was achieved. Given that history, I think caution is called for.

          12. Oh, “god-like”… that’s a whole other kettle of little swimming things! God-like would be downright science fictional in my mind (emphasis on fictional 🙂 ).

            I should define my terms, then. “Super” intelligence would be anything clearly superior to what humans can achieve on their own. (As with all our tools.) There’s a lot of room there before we get to god-like!

            “Computationalism does imply that it’s possible to run a mind on a different substrate. That doesn’t mean that some substrates might not be better than others.”

            Indeed. We should be careful here to make sure “better” is understood to mean performance, specifically faster and/or more efficient. No doubt you’re familiar with the Church-Turing thesis.

            “Attempt to do complex floating point calculations without a floating point coprocessor, and you pay a steep performance penalty.”

            Exactly. (Sounds like we’re on the same page here.) FP calculation prior to coprocessors, or on an architecture lacking one, gets the same answer, just a lot slower.

            (Did you ever play around with the freeware MS-DOS app, FRACTINT, back in the day? Talkin’ MS-DOS 3.3 here! Fractal generation app optimized to use integer math, but which switched to FP when really drilling down. The 8087 was pretty new back then, and while many motherboards had a socket for it, it was expensive and most did without. But with FRACTINT, you could really see that performance improvement if you did have one!)
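
            (For anyone who never saw the trick, here’s a little Python sketch of the general idea; it’s my own reconstruction, not FRACTINT’s actual code. Fixed-point integers with a chosen number of fractional bits let the whole Mandelbrot escape-time loop run on integer multiplies and shifts, and you only need real floating point once a deep zoom exhausts those fractional bits.)

```python
# A sketch of the integer-math trick (my own reconstruction, not FRACTINT's
# actual code): store coordinates as fixed-point integers with FRAC_BITS
# fractional bits, so the Mandelbrot escape-time loop needs only integer
# multiplies and shifts. Deep zooms eventually exhaust the precision, which
# is when a program like FRACTINT would fall back to floating point.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS
FOUR = 4 << FRAC_BITS

def escape_fixed(cr: float, ci: float, max_iter: int = 100) -> int:
    """Escape-time count using only integer arithmetic."""
    fcr, fci = int(cr * ONE), int(ci * ONE)
    zr = zi = 0
    for n in range(max_iter):
        zr2 = (zr * zr) >> FRAC_BITS
        zi2 = (zi * zi) >> FRAC_BITS
        if zr2 + zi2 > FOUR:                  # |z|^2 > 4 means the point escapes
            return n
        zr, zi = zr2 - zi2 + fcr, ((2 * zr * zi) >> FRAC_BITS) + fci
    return max_iter

def escape_float(cr: float, ci: float, max_iter: int = 100) -> int:
    """Same escape-time count with ordinary floating point."""
    zr = zi = 0.0
    for n in range(max_iter):
        if zr * zr + zi * zi > 4.0:
            return n
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
    return max_iter

# Away from the set's boundary the two agree; the integer version just ran
# far faster on machines without an FP coprocessor.
print(escape_fixed(0.5, 0.5), escape_float(0.5, 0.5))   # both print 5
```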

            “I’m not sure what you mean by “strictly computational,” but if the above wouldn’t fall into that category, then you could probably take me out of the “strict computationalist” camp.”

            I think you’re probably still camped there. 🙂 By “strictly” I’m just emphasizing Turing equivalence — that the mind algorithms could run on any system capable of implementing them. It’s not meant to be a “more pure” form of computationalism or anything. A process is either computational or it’s not (C-T again). The “strictly” is just for emphasis on my part.

            This is in contrast to my recent pondering about dependence on physical or temporal aspects, such as with the generation of laser light or microwaves. As I think I mentioned somewhere, while I’m very skeptical about computational mind, I’m somewhat less skeptical about mind supervening on those physical and/or temporal effects.

            The whole brain waves thing, and the way neurons transmit a periodic signal, really makes me wonder whether that’s a byproduct or a crucial part of mind.

            “Anyway, it may be that everything needed to run an integrated human level intelligence requires a lot more processing power on an electronic substrate than an organic one.”

            Could very well be. Might even require a whole new type of “hardware.” The vast number of weighted interconnections makes a brain-mind simulation quite formidable. Google used, what was it, 16,000+ processors to implement a fairly small neural network that recognized cats.
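
            (To put rough numbers on “vast,” here’s a toy Python calculation; the layer sizes are made up for illustration, and the brain figure is only the commonly cited order-of-magnitude estimate.)

```python
# A back-of-the-envelope sketch (made-up layer sizes, not any real model):
# fully connected layers accumulate weights as the product of adjacent layer
# sizes, which is why sizable networks needed clusters of machines, and why
# a synapse-level brain simulation is so formidable.

def dense_weight_count(layer_sizes):
    """Total weights in a stack of fully connected layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

toy_net = dense_weight_count([1_000, 1_000, 1_000])          # ~2 million
big_net = dense_weight_count([200_000, 20_000, 200_000])     # ~8 billion

SYNAPSE_ESTIMATE = 1e14   # commonly cited order-of-magnitude for the brain

print(f"toy network:    {toy_net:,} weights")
print(f"bigger network: {big_net:,} weights")
print(f"human brain:   ~{SYNAPSE_ESTIMATE:.0e} synapses (rough estimate)")
```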

            Progress has been amazing though, hasn’t it? Google cars, for example. I watched a video by a guy who runs an advanced AI lab in Switzerland. He mentioned that his group’s software ranked number one in recognizing traffic signs.

            Who came in number two? Humans! AI is already starting to outdo us (in very specific domains)!

          13. It’s those blasted definitions again! 🙂

            MS-DOS 3.3. Now that’s a name I haven’t heard in a long time, a long time. I never ran anything that cool on it though. My DOS days were spent playing games, learning languages like Pascal and C, then later doing dBase programming for money. But I do remember when having the math coprocessor was an exotic thing. (And I probably dated myself by using it as an example.)

            I’m still not sure if I’d meet the definition of a strict computationalist. I see myself as more of an effective computationalist, but explaining why probably needs its own post, which maybe I’ll do sometime soon.

            I’m thinking you should do a post on the laser light / microwave thing. You’ve mentioned it a few times, but I suspect I’m still not grasping the concept you’re trying to convey. The phrase “mind supervening on those physical and/or temporal effects” reminds me of Michael Gazzaniga’s contemplations in his book, ‘Who’s in Charge?’ He discusses the idea that, while the mind emerges from the brain, it also “constrains” the operations of the brain in much the same way that traffic constrains an individual car, even though traffic itself emerges from all the cars.

            Definitely agree on the progress. As an old computer nerd who started with a 16 kilobyte Atari 400 computer, I’m amazed at where we are. OTOH, as an old sci-fi nerd, I sometimes remember that we were supposed to have HAL 9000 by 1992 (and have manned missions to the outer planets by now, and flying cars by 2019 (although it’s probably just as well that we don’t live in a Blade Runner world)).

          14. “I probably dated myself by using [the 80×87] as an example.”

            I know what you mean. I date myself when I mention how my first programs were submitted with punch cards and sometimes stored on punched paper tape. (I started taking Computer Science classes back in 1977, several years before the first PC or MS-DOS. I’m part of that crowd that winces a little when people say “DOS” as if it were the only Disk Operating System. 😮 )

            Check this out: I had a three-bit (yes, three-bit) mechanical binary computer made from plastic and metal rods. It was a Christmas gift I got as a child in, I’m guessing, the early 1960s. You “programmed” it by sticking short and long pieces of plastic straw on plastic tabs. You “cycled” it with a lever you pushed in and then pulled out (one cycle). You could create really, really simple simulations, like a “stoplight” or an “elevator”.

            At the time it was just a cool toy, like my Mr. Machine. I had no idea how prophetic it actually was!
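
            (Just for fun, here’s a tiny Python sketch of what that toy amounted to: a three-bit state machine that you “cycle” by hand. The straw-and-tab “program” below is my own invention, not the toy’s actual mechanism.)

```python
# A minimal sketch (my own invention, not the toy's actual wiring): three
# bits of state, one bit per lamp, and a "program" that says which state
# follows which. Each push/pull of the lever advances the machine one cycle.
NEXT_STATE = {
    0b100: 0b010,   # green  -> yellow
    0b010: 0b001,   # yellow -> red
    0b001: 0b100,   # red    -> green
}
LAMP = {0b100: "green", 0b010: "yellow", 0b001: "red"}

state = 0b100                # start on green
for cycle in range(6):       # work the lever six times
    print(f"cycle {cycle}: {LAMP[state]}")
    state = NEXT_STATE[state]
```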

            “I sometimes remember that we were supposed to have HAL 9000 by 1992…”

            Heh! Yeah, I’m still waiting for my flying car and summer home on the Moon!

      2. Wyrd, had to post the last comment before I was quite done. (My commenting during the day happens in the cracks between work demands.) If there are any other points Bostrom makes that you want me to address, just let me know. I suspect we’re not going to agree, but there’s value in understanding where each of us is coming from.

        1. I can wait for you to finish responding (if you so intend), or to reply to my replies (if you so intend), or for you to watch the Robert Miles videos (if you so intend, but I hope you will since they respond to many of your points).

  8. For dessert, here’s a quick piece that demos a “Receptionist AI” application. For those interested in the computing aspects of AI, it’s worth watching this one and pausing to check out all the information on the screen. It’s a view into the “mind” of an AI!

    (These are all from my “Keepers” playlist in case you’re wondering. As you know, the topic fascinates me, so I collect the particularly cogent ones I come across.)
